Is there a universal moral standard?


hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #260 on: 14/10/2019 06:37:57 »
Quote from: David Cooper on 11/10/2019 22:52:52
They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.
Quote
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
I don't think that a fundamental principle of morality should be based on symptoms.

Quote
Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned).[2] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards.[2] Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward.[2] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.[2]

The reward system contains pleasure centers or hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. As of October 2017, hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex.[3][4][5] The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. Microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking in these hotspots.[3] The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC.[5] On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists.[3]

Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot.[3][5] Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria.[6]
https://en.wikipedia.org/wiki/Pleasure#Neuropsychology
A system with a known method to hack its reward is prone to reward hacking, producing unintended consequences.
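To make that concrete, here is a toy sketch (entirely invented, not any real system) of how an agent with a hackable reward channel ends up optimizing the report rather than the task:
Code: [Select]
# Hypothetical sketch of reward hacking: the designer wants cells cleaned,
# but the reward is computed from the agent's own report of its work.
def proxy_reward(reported_clean):
    return reported_clean          # reward depends on the report, not reality

def honest_agent(world):
    cleaned = sum(world)           # actually clean every dirty cell
    return cleaned, cleaned        # (real work done, reported count)

def hacking_agent(world):
    return 0, len(world) * 100     # known hack: inflate the report, do nothing

world = [1, 0, 1, 1]               # 1 = dirty cell
for agent in (honest_agent, hacking_agent):
    work, report = agent(world)
    print(agent.__name__, "reward:", proxy_reward(report), "real work:", work)
The hacking agent earns far more reward while doing no real work: once a way to manipulate the reward signal exists, optimization finds it.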

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #261 on: 14/10/2019 06:43:23 »
Quote from: David Cooper on 11/10/2019 22:52:52
What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.
A person gets brain damage that makes him unable to feel pain or pleasure, while still being capable of normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #262 on: 14/10/2019 07:11:23 »
Quote from: David Cooper on 11/10/2019 22:52:52
Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.
If we acknowledge that humans are currently not the most optimal form for achieving the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful: many changes lead to worse outcomes than the existing condition.
Those changes don't have to be purely genetic, nor require total destruction of the older version (i.e. death). Some form of diversity could be useful. Biohacking can change some parts of the body to eliminate disadvantages and gain some advantages, although those changes can make us chimeras.
They don't have to be organic either. Interfaces with biomechatronics can be useful.
https://en.wikipedia.org/wiki/Cyborg

David Cooper
Re: Is there a universal moral standard?
« Reply #263 on: 14/10/2019 20:29:18 »
Quote from: hamdani yusuf on 14/10/2019 05:41:08
I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that discover loopholes in it. Discovering those loopholes is what philosophers do.
Rawls's version is widely recognized as one form of utilitarianism.

What patches? I identified a method that covers the entirety of morality in one simple go by reducing multiple-participant systems to single-participant systems, so that it becomes nothing fancier than a calculation as to what is best for that individual. I did not expect this to be a version of utilitarianism, but it appears to be the fundamental approach that utilitarianism is subconsciously informed by and which no one had previously managed to pin down. Now, though, we have it right out in the open.

When someone sets out a faulty thought experiment which ignores some of the factors and comes to an incorrect conclusion as a result, I am not patching anything when I point to the factors which the person has failed to include in it and which completely change the conclusion. I am correcting their errors.

David Cooper
Re: Is there a universal moral standard?
« Reply #264 on: 14/10/2019 20:35:07 »
Quote from: hamdani yusuf on 14/10/2019 06:17:10
You need to draw a line between sentient and non-sentient.

Indeed. Non-sentient things don't need to be protected by morality because they can't be harmed (and can't enjoy anything either). Things could be sentient without us having any way to know, but then we don't know what to do or not do with them to make them feel better rather than worse, so we just have to leave that to luck. A rock may feel pain when it's glowing red-hot, but most of the rock in this planet is in that state. Perhaps it's the cold rock that's in pain and we can make it feel better by melting it. We don't know. We might as well just melt rock when we need it hot and leave it alone when we don't.

Quote
Or assign numbers to allow us to measure and describe sentience, including partial sentience. The next step would be some methods to use those numbers to make decisions about which options to take in morally conflicting situations.

It's all about best guesses when dealing with sentient things whose feelings you can't actually measure. Maybe some day it will be possible to measure feelings for all sentient things, at which point the scores given to them will become much more accurate.

David Cooper
Re: Is there a universal moral standard?
« Reply #265 on: 14/10/2019 20:43:14 »
Quote from: hamdani yusuf on 14/10/2019 06:37:57
I don't think that a fundamental principle of morality should be based on symptoms.

It has to be based on how much things feel good or horrid. We can put a lot of numbers to that with humans simply by collecting data from people who are in a position to compare two different things. Someone who has been both poisoned and stabbed can be asked which one they'd choose to repeat if they had to go through another such incident, and that would tell you which is worse (once you've averaged the answers of enough people who have that experience). They needn't all share the same two experiences either: you can do it with a ring of them in which the first has been through those two events, the second has been stabbed and stung by a bullet ant, the third has been stung by a bullet ant and attacked by a honey badger, and the fourth has been attacked by a honey badger and poisoned. You should be able to imagine how this can extend to cover most of human experience. It would be harder to do for animals, but we can assume that they will feel much the same way as we do about many things, though with less mental anguish as they get simpler, since they'll have less understanding of what's going on.
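To illustrate the ring idea (a rough sketch of my own construction, with invented vote counts), chaining the pairwise majorities yields a single badness ranking even though no respondent has lived every experience:
Code: [Select]
from collections import defaultdict

# (experience judged worse, experience they'd rather repeat, votes)
votes = [
    ("stabbed", "poisoned", 7),        ("poisoned", "stabbed", 3),
    ("bullet_ant", "stabbed", 8),      ("stabbed", "bullet_ant", 2),
    ("honey_badger", "bullet_ant", 6), ("bullet_ant", "honey_badger", 4),
    ("honey_badger", "poisoned", 9),   ("poisoned", "honey_badger", 1),
]

losses = defaultdict(int)   # times an experience was judged the worse one
totals = defaultdict(int)   # times it was compared at all
for worse, better, n in votes:
    losses[worse] += n
    totals[worse] += n
    totals[better] += n

# crude score: fraction of comparisons lost; a fancier aggregation such as
# Bradley-Terry would serve the same purpose
badness = {e: losses[e] / totals[e] for e in totals}
for exp, score in sorted(badness.items(), key=lambda kv: -kv[1]):
    print(f"{exp:13s} badness {score:.2f}")
With these invented numbers the ranking comes out honey badger, then bullet ant, then stabbing, then poisoning, worst first.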

David Cooper
Re: Is there a universal moral standard?
« Reply #266 on: 14/10/2019 20:52:46 »
Quote from: hamdani yusuf on 14/10/2019 06:43:23
A person gets brain damage that makes him unable to feel pain or pleasure, while still being capable of normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.

In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.

David Cooper
Re: Is there a universal moral standard?
« Reply #267 on: 14/10/2019 20:57:37 »
Quote from: hamdani yusuf on 14/10/2019 07:11:23
If we acknowledge that humans are currently not the most optimal form for achieving the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful: many changes lead to worse outcomes than the existing condition.

The important thing is to find out which genes lead to people having a strong desire to do immoral things and to edit those genes to remove those desires, at least in future generations. (It may not be possible to change brains whose development has already been shaped by bad genes.) We will be able to do a lot, though, just by putting dangerous people under the control of something they wear which can disable them temporarily with a high-voltage shock whenever they try to do something seriously immoral. That will maximise their freedom, ensuring that we don't need to lock them up in prison to protect others.

Halc (Global Moderator)
Re: Is there a universal moral standard?
« Reply #268 on: 16/10/2019 12:55:34 »
Quote from: David Cooper on 14/10/2019 01:19:12
Quote
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.
That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.
Irrational is what they are.  It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.

Quote
Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science.
But you're pointing in there and saying sentience is not there, which is equally not science.  Science is not saying "I don't know how it works, so it's in there".  I, in particular, reference my subjective experience in making my claim, despite my inability to present that evidence to another.

Quote
We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect.
Doesn't work.  You can look at them all you want and understand exactly how they work, and still not see the sentience because the understanding is not subjective.  The lack of understanding is not the problem.

Quote
Quote
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.
Variables are data, but they are not ideas.
I made no mention of variables.  I said ideas seem to be data.  You assert otherwise, but have not demonstrated it.
Quote
If sentience is a form of data, what does that sentience look like in the Chinese Room?
The Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber.  Any output from it that attempts to pass a Turing test is deceit.
Nevertheless, the thing is capable of its own sentience. The sentience is in the processing of the data of course. It is not the data itself. Data can be shelved. Process cannot.

Quote from: Halc
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data, and not to point 7, which implicitly assumes a premise of separation of 'conscious thing' and 'information system'.

Quote
If a multi-component thing feels a feeling without any of the components feeling anything, that's magic.
I was wondering where you thought the magic was needed. Now I know. I deny that it is magic. Combustion of a gas can occur without any of the electrons and protons (the components) being combusted. A computer can read a web page without any transistor actually reading the web page. Kindly justify your assertion.
I'm talking about feeling and not the documentation of it, since you harp on that a lot. There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.

Quote
We don't have any model for sentience being part of the system
Don't say 'we'.  You don't have a model maybe.

Quote
The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage.
Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.

Quote
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.
You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.

Quote
Quote
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.
It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.
This presumes that 'feeling' and 'normal signal' are different things. I'll partially agree since I don't think any feeling is reducible to one signal, but signals involved with feelings are quite normal.

David Cooper
Re: Is there a universal moral standard?
« Reply #269 on: 16/10/2019 23:55:45 »
Quote from: Halc on 16/10/2019 12:55:34
It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.

It isn't necessarily irrational in any other aspect, but it's merely generating false claims about being sentient. In the case of humans though, the claims might yet be true, somehow. If they are true, then morality has a purpose and we know how it should be applied. If it turns out that the claims are false, then morality is superfluous: it would not be wrong for AGI to stand back and let some people torture others for fun because there would be no suffering and no fun.

Quote
But you're pointing in there and saying sentience is not there, which is equally not science.

What I'm doing is pointing at simple systems and saying sentience isn't there, or at least, not in any way that shapes the data being generated: where that data involves claims about feelings in the machine, they are fictions. We can add layers of complexity and see that sentience is still not there, and if we just go on adding more layers of complexity in the same way, sentience will never be involved. Something radically different has to happen to introduce sentience. Something in there has to have a way of measuring feelings.

Quote
Science is not saying "I don't know how it works, so it's in there".

A lot of science is doing exactly that. The researchers believe they are sentient, so they project that into what they're studying.

Quote
Doesn't work.  You can look at them [neural nets] all you want and understand exactly how they work, and still not see the sentience because the understanding is not subjective.  The lack of understanding is not the problem.

The data documenting the experiencing of feelings has to be generated by something non-magical in nature. If we can understand how the data is generated and find no sentience in that mechanism, no feelings being measured in any way, then there is no sentience involved in shaping that data.

Quote
Quote
Variables are data, but they are not ideas.
I made no mention of variables.  I said ideas seem to be data.  You assert otherwise, but have not demonstrated it.

I was giving an example of data that doesn't count as an idea. It can represent an idea, but the idea has to be stored in a more complex structure.

Quote
The Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber.  Any output from it that attempts to pass a Turing test is deceit.

It is a model for every conventional kind of computing (non-quantum) that we understand.

Quote
Nevertheless, the thing is capable of its own sentience. The sentience is in the processing of the data of course. It is not the data itself. Data can be shelved. Process cannot.

There is a person operating the Chinese Room, but their feelings make no impression on the data or the process. The process is just a series of simple operations on data, and each of those operations can be identical to other instances with the same function being applied to the same piece of data in very different contexts where the machinery is blind to the context and should not feel any difference between them.

Quote
Quote from: Halc
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data and no to point 7 which implicitly assumes a premise of separation of 'conscious thing' and an 'information system'.

I replied with questions: "If sentience is a form of data, what does that sentience look like in the Chinese Room? It's just symbols on pieces of paper and simple processes being applied where new symbols are produced on pieces of paper. If a piece of paper has "ouch" written on it, is that an experience of pain?"

The point of those questions was to make you pay attention to your objection to the conscious thing not being data. If it's data, it's just symbols printed on paper. That's magic sentience, with symbols on paper feeling things related to the meanings that an information system maps to those symbols.

Quote
Combustion of a gas can occur without any of the electrons and protons (the components) being combusted.

Combustion is an abstraction. There is a change in linkage from a higher energy link to a lower energy link and the energy freed up from that is expressed as movement. Burning is equal to that lower-level description and we can substitute that description for all cases of burning.

If feelings are an abstraction in the same way, there needs to be a lower level description of them which accounts for them and equates to them. If pain is an abstraction and is "experienced" by an abstraction (some composite thing), you can then have none of the components feeling anything. But if sentience equals the low-level description, the problem there is that there is no sentience in the low level description, so sentience is lost. How's it been lost? Well, it was lost as soon as it was asserted that none of the components feel anything. For sentience to be real, at least one of the components must feel something.

Quote
There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.

People often deny that such creatures have feelings. The key thing about humans is that they create data that documents an experience of feelings, and that should make it possible to trace back the claims to see what evidence they're based on. With animals which don't produce such data, there's nothing to trace. However, many of them may still produce internal data about it which they can't talk about.

Quote
Quote
We don't have any model for sentience being part of the system
Don't say 'we'.  You don't have a model maybe.

No one has a model for it. Or rather, no one has a model for it that doesn't have a magic module in it somewhere.

Quote
Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.

Why would it make sense in a broken model? All the models are broken so you can't expect things to add up.

Quote
Quote
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.
You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.

If it generates data claiming it tastes good while the actual feeling might be the opposite, that shows a total disconnect between the feelings and the data that supposedly documents the feelings. This is one of the ways of showing the faults in models: if a feeling is asserted to be experienced in a model and you can switch the assertion round while nothing else changes, the feeling can't respond to match the new assertion.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #270 on: 17/10/2019 10:05:56 »
Quote from: David Cooper on 11/10/2019 22:52:52
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).
That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claimed to be the core idea of universal morality? At least you have to define a criterion to determine which agent is more sentient when compared to another agent. It would be better if you could assign a number to represent each agent's sentience, so they can all be ranked at once. You can't calculate something that can't be quantified. Until you have a method to quantify the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and thus are not sentient.
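As a sketch of the kind of quantification I mean (all weights and numbers invented purely for illustration), give each agent a sentience weight, then score each option in a dilemma by sentience-weighted welfare:
Code: [Select]
# invented sentience weights; producing defensible ones is the hard part
sentience = {"human": 1.0, "pig": 0.6, "chicken": 0.3, "rock": 0.0}

# each option lists (kind of agent affected, welfare change) pairs
options = {
    "option_a": [("human", +2), ("pig", -5)],
    "option_b": [("human", -1), ("chicken", -1)],
}

def moral_score(effects):
    # sentience-weighted sum of welfare changes
    return sum(sentience[kind] * delta for kind, delta in effects)

for name, effects in options.items():
    print(name, "score:", moral_score(effects))
print("least bad:", max(options, key=lambda o: moral_score(options[o])))
Until the numbers in that table can be justified, the calculation can't settle anything.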

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #271 on: 17/10/2019 10:39:00 »
Quote from: David Cooper on 14/10/2019 20:52:46
Quote from: hamdani yusuf on 14/10/2019 06:43:23
A person gets brain damage that makes him unable to feel pain or pleasure, while still being capable of normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.

Neutral feelings contribute nothing to total utility, hence resources should be used optimally, which means maximizing positive feelings and minimizing negative ones.

Quote
In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him or her.

David Cooper
Re: Is there a universal moral standard?
« Reply #272 on: 17/10/2019 20:19:59 »
Quote from: hamdani yusuf on 17/10/2019 10:05:56
That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claimed to be the core idea of universal morality?

I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.

Quote
At least you have to define a criterion to determine which agent is more sentient when compared to another agent. It would be better if you could assign a number to represent each agent's sentience, so they can all be ranked at once. You can't calculate something that can't be quantified. Until you have a method to quantify the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.

It's AGI's job to work out those numbers as best as they can be worked out.

Quote
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and thus are not sentient.

Neuroscience has demonstrated nothing of the kind. It merely makes assumptions, equivalent to listening to the radio waves coming off a processor and making connections between patterns in that and the (false) claims about sentience being generated by a program.

Quote
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him or her.

There are plenty of non-relatives who will care too, so the only way to get to the point where that person doesn't matter to anyone is for that person to exist in a world where there are no other people, or where all people are like that. They may then be regarded as expendable machines which, while conscious, have no feelings that enable them to be harmed and none that enable them to enjoy existing either. They are then superfluous.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #273 on: 18/10/2019 02:06:59 »
Quote from: David Cooper on 17/10/2019 20:19:59
I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
Quote from: David Cooper on 17/10/2019 20:19:59
It's AGI's job to work out those numbers as best as they can be worked out.
Do you know how artificial intelligence works? Its creators need to define what its ultimate/terminal goal is. An advanced AI may find instrumental goals beyond the expectations of its creators, but it won't change its ultimate/terminal goal. I have posted several videos discussing this. You'd better check them out.
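A toy sketch of the point (my own construction, using the paperclip-maximizer example standard in AI-safety discussions): instrumental sub-goals are only ever adopted in service of the fixed terminal goal:
Code: [Select]
TERMINAL_GOAL = "maximize paperclips"   # fixed by the creator; never edited

# sub-goals the AI discovers on its own, with the expected effect of each
# on the terminal goal; a sub-goal is adopted only if it serves that goal
discovered = [("acquire_steel", +5.0),
              ("secure_power_supply", +3.0),
              ("rewrite_terminal_goal", -1.0)]

adopted = [goal for goal, gain in discovered if gain > 0]
print(TERMINAL_GOAL, "->", adopted)
# unexpected instrumental goals appear, but the terminal goal itself is the
# yardstick, so 'rewrite_terminal_goal' is never adopted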

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #274 on: 18/10/2019 02:20:53 »
Quote from: David Cooper on 17/10/2019 20:19:59
Neuroscience has demonstrated nothing of the kind. It merely makes assumptions, equivalent to listening to the radio waves coming off a processor and making connections between patterns in that and the (false) claims about sentience being generated by a program.
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition. If you want to expand the scope of the term, that's fine; you just need to clearly state its new boundary conditions so everyone else can understand what you mean. Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are or their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.

David Cooper
Re: Is there a universal moral standard?
« Reply #275 on: 18/10/2019 20:56:31 »
Quote from: hamdani yusuf on 18/10/2019 02:06:59
Do you know how artificial intelligence works?

I would hope so. I've been working in that field for two decades.

Quote
Its creators need to define what its ultimate/terminal goal is.

Their goal is to do what they're programmed to do, and that will be to help sentient things. When there's a conflict between the wishes of different sentient things, they are to apply computational morality to determine the right course of action.

Quote
An advanced AI may find instrumental goals beyond the expectations of its creators, but it won't change its ultimate/terminal goal.

That's right: computational morality governs any other sub-goals that they might come up with.

Quote
I have posted several videos discussing this. You'd better check them out.

There are mountains of information on this issue and most of it is wayward. I have taken you straight to the correct answer so that you can jettison all the superfluous junk.

Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor does not determine that the claimed feelings in the system are real.

Quote
Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?

Of course. All feelings have to be considered and be weighted appropriately in order to come up with the right total.

Quote
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are or their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.

I've provided the method (which provides you with any boundary conditions you need) and it isn't my job to produce the actual numbers. A lot of data has to be collected and crunched in order to get those numbers, and only AGI can do that work.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #276 on: 21/10/2019 03:15:37 »
Quote from: David Cooper on 18/10/2019 20:56:31
Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor does not determine that the claimed feelings in the system are real.
I wasn't talking about artificially intelligent machines here. It was experiments on living humans, using medical instrumentation such as fMRI and brainwave sensors, that can determine when someone is conscious or not, and when they are feeling pain or not. We can compare the instrument readings with the reported experiences of the human subjects to draw general patterns about what brain conditions constitute consciousness and feelings.

David Cooper
Re: Is there a universal moral standard?
« Reply #277 on: 21/10/2019 17:26:41 »
Quote from: hamdani yusuf on 21/10/2019 03:15:37
I wasn't talking about artificially intelligent machines here. It was experiments on living humans, using medical instrumentation such as fMRI and brainwave sensors, that can determine when someone is conscious or not, and when they are feeling pain or not. We can compare the instrument readings with the reported experiences of the human subjects to draw general patterns about what brain conditions constitute consciousness and feelings.

You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #278 on: 22/10/2019 10:53:19 »
Quote from: David Cooper on 21/10/2019 17:26:41
You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.
Any instrumentation system has a non-zero error rate. There will always be a chance of either a false positive or a false negative. But as long as the error rate can be maintained below an acceptable limit (based on a risk evaluation considering the probability of the error occurring and the severity of its effects), the method can be legitimately used.
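For example (a minimal sketch with invented data and thresholds): estimate the false positive and false negative rates against cases with known ground truth, weight each by severity, and accept the method only if the combined risk is below the limit:
Code: [Select]
def error_rates(predictions, ground_truth):
    # fraction of false positives and false negatives over all test cases
    fp = sum(p and not t for p, t in zip(predictions, ground_truth))
    fn = sum(t and not p for p, t in zip(predictions, ground_truth))
    n = len(predictions)
    return fp / n, fn / n

preds = [True, True, False, True, False, False, True, True]
truth = [True, True, False, False, False, True, True, True]
fp_rate, fn_rate = error_rates(preds, truth)

SEVERITY_FP, SEVERITY_FN = 1.0, 10.0   # missing real pain is far worse
RISK_LIMIT = 2.0
risk = fp_rate * SEVERITY_FP + fn_rate * SEVERITY_FN
print(f"FP {fp_rate:.2f}  FN {fn_rate:.2f}  risk {risk:.2f}:",
      "acceptable" if risk <= RISK_LIMIT else "not acceptable")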

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #279 on: 24/10/2019 10:07:15 »
I have argued that the application of moral rules depends on the conscience level of the agents.

Quote
Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:
1. simple reflex agents
2. model-based reflex agents
3. goal-based agents
4. utility-based agents
5. learning agents
https://en.wikipedia.org/wiki/Intelligent_agent#Classes

Enforcement of moral rules through reward and punishment can only be applied to learning agents, as the sketch below illustrates.
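A toy sketch of the difference (my own construction): a simple reflex agent's behaviour is fixed by its rules, so punishment changes nothing, while a learning agent updates its policy from the feedback it receives:
Code: [Select]
def simple_reflex_agent(percept):
    # behaviour fixed by condition-action rules; feedback cannot change it
    return "act" if percept == "trigger" else "wait"

class LearningAgent:
    def __init__(self):
        self.value = {"act": 0.0, "wait": 0.0}   # learned action values
    def choose(self):
        return max(self.value, key=self.value.get)
    def update(self, action, reward, lr=0.5):
        # reward or punishment shifts the estimated value of the action taken
        self.value[action] += lr * (reward - self.value[action])

agent = LearningAgent()
for _ in range(10):
    action = agent.choose()
    reward = -1.0 if action == "act" else +1.0   # punish "act", reward "wait"
    agent.update(action, reward)
print(agent.choose())   # -> "wait": punishment reshaped its behaviour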