They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned).[2] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards.[2] Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward.[2] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.[2]

The reward system contains pleasure centers or hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. As of October 2017, hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex.[3][4][5] The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. Microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking in these hotspots.[3] The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC.[5] On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists.[3]

Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot.[3][5] Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria.[6]
What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.
Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.
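A rough sketch of the arithmetic behind that gradual transition, purely as an illustration: if we read the drop from 1.2 to 1.1 children as a per-generation factor of roughly 1.1/1.2 on the human population, with utility monsters taking up the freed share, the decline is slow but steady. The constant-capacity assumption and the reading of the post's figures are mine, not part of the argument above.

```python
# Illustrative only: how slowly a population fades when fertility drops
# slightly below replacement. The 1.1/1.2 per-generation factor and the
# fixed total capacity are assumptions made for this sketch.
factor = 1.1 / 1.2          # assumed per-generation shrink factor for humans
humans = 1.0                # human population as a fraction of total capacity
for generation in range(1, 51):
    humans *= factor
    monsters = 1.0 - humans  # utility monsters fill the vacated share
    if generation % 10 == 0:
        print(f"gen {generation:2d}: humans {humans:.2%}, monsters {monsters:.2%}")
```

On these assumed numbers the change in any single generation stays small, which is the point about the difference being too slow to register with the people living through it.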
I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that expose loopholes in it. Discovering those loopholes is what philosophers do. Rawls's version is widely recognized as one form of utilitarianism.
You need to draw a line between sentient and non-sentient.
Or assign numbers that allow us to measure and describe sentience, including partial sentience. The next step would be some method of using those numbers to decide which options to take in morally conflicting situations.
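As a minimal sketch of what that might look like: give each agent a hypothetical sentience score in [0, 1], estimate each option's welfare change per agent, and pick the option with the highest sentience-weighted total. The scores, the agents and the weighting rule below are all assumptions made for illustration, not something proposed in the thread.

```python
# Hypothetical scoring scheme: a sentience score weights each agent's
# estimated welfare change, and options are compared by the weighted sum.
sentience = {"adult human": 1.0, "dog": 0.6, "insect": 0.05}  # assumed scores

options = {
    "option A": {"adult human": -1.0, "dog": +3.0, "insect": 0.0},
    "option B": {"adult human": +0.5, "dog": -0.5, "insect": +2.0},
}

def weighted_value(effects):
    """Sum each agent's welfare change weighted by its sentience score."""
    return sum(sentience[agent] * delta for agent, delta in effects.items())

best = max(options, key=lambda name: weighted_value(options[name]))
for name, effects in options.items():
    print(name, round(weighted_value(effects), 2))
print("chosen:", best)
```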
I don't think that a fundamental principle of morality should be based on symptoms.
A person gets brain damage that makes him unable to feel pain and pleasure, while still being capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?
If we acknowledge that humans are not currently the most optimal form for achieving the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful, because many changes lead to worse outcomes than the existing condition.
Quote
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.

That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.
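The point can be shown with a deliberately trivial sketch: a program whose report about feelings is fixed in advance and never consults any internal state, so the output is identical whether or not anything is felt. This is an illustration of the argument only, not a model of any particular system; the function names are invented.

```python
# The claim is hard-coded: no internal state is consulted, so the same
# report would be produced whether or not any feeling exists.
def plan_action(stimulus: str) -> str:
    # Placeholder for whatever ordinary cause-and-effect processing happens.
    return f"withdrawing from {stimulus}"

def handle_input(stimulus: str) -> str:
    response = plan_action(stimulus)        # ordinary signal processing
    return response + " -- and that hurt!"  # the claim is bolted on regardless

print(handle_input("sharp object"))
```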
Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science.
We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect.
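As a toy example of that kind of inspection, here is a two-layer network small enough that every intermediate value can be printed and traced back to its inputs and weights. The weights are arbitrary; the example only illustrates that the cause-and-effect chain is fully visible in a simulation.

```python
import numpy as np

# A tiny feedforward net, small enough to trace every value by hand.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # arbitrary weights, input -> hidden
W2 = rng.normal(size=(2, 1))   # hidden -> output

x = np.array([0.5, -1.0, 2.0])
hidden = np.tanh(x @ W1)       # every hidden activation is inspectable
output = hidden @ W2           # and so is its contribution to the output

print("hidden activations:", hidden)
print("output:", output)
```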
Quote
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.

Variables are data, but they are not ideas.
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.
If sentience is a form of data, what does that sentience look like in the Chinese Room?
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
If a multi-component system feels a feeling without any of the components feeling anything, that's magic.
We don't have any model for sentience being part of the system
The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage.
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.
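A toy sketch of the suppression mechanism described above, in which a damage signal down-weights a food-reward signal and drives withdrawal, with no "feeling" variable anywhere in the loop. The signal names and the particular suppression rule are invented for illustration.

```python
# Purely mechanical handling of damage and reward signals: the reward is
# suppressed in proportion to damage, and the action follows from the numbers.
def choose_action(food_reward: float, damage: float) -> str:
    effective_reward = food_reward * max(0.0, 1.0 - damage)  # assumed suppression rule
    return "keep eating" if effective_reward > damage else "stop and withdraw"

print(choose_action(food_reward=0.8, damage=0.1))  # keep eating
print(choose_action(food_reward=0.8, damage=0.7))  # stop and withdraw
```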
Quote
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.

It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.
It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.
But you're pointing in there and saying sentience is not there, which is equally not science.
Science is not saying "I don't know how it works, so it's in there".
Doesn't work. You can look at them [neural nets] all you want and understand exactly how they work, and still not see the sentience, because the understanding is not subjective. The lack of understanding is not the problem.
Quote
Variables are data, but they are not ideas.

I made no mention of variables. I said ideas seem to be data. You assert otherwise, but have not demonstrated it.
Variables are data, but they are not ideas.
The Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber. Any output from it that attempts to pass a Turing test is deceit.
Nevertheless, the thing is capable of its own sentience. The sentience is in the processing of the data of course. It is not the data itself. Data can be shelved. Process cannot.
Quote from: Halc
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data, and not to point 7, which implicitly assumes a premise of separation between a 'conscious thing' and an 'information system'.
Combustion of a gas can occur without any of the electrons and protons (the components) being combusted.
There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.
Quote
We don't have any model for sentience being part of the system

Don't say 'we'. You don't have a model maybe.
Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.
Quote
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.

You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).
Quote from: hamdani yusuf on 14/10/2019 06:43:23
A person gets brain damage that makes him unable to feel pain and pleasure, while still being capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings; they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.
In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.
That's the very problem identified by philosophers criticising utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claim to be the core idea of universal morality?
At the very least you have to define a criterion for determining which agent is more sentient than another. It would be better if you could assign a number to represent each agent's sentience, so they can all be ranked at once. You can't calculate something that can't be quantified. Until you have a method for quantifying the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and are thus not sentient.
So if the brain-damaged human has no relatives or friends who care, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him/her.
I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
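One way to picture that calculation, as a sketch only: multiply the estimated probability that a species is sentient by the estimated magnitude of its suffering or enjoyment, and let the resulting expected values drive the decision. All of the species names and numbers below are placeholders.

```python
# Expected-value sketch: probability of sentience times estimated intensity.
candidates = {
    "species A": {"p_sentient": 0.9, "suffering": 7.0, "pleasure": 5.0},
    "rock":      {"p_sentient": 0.0, "suffering": 0.0, "pleasure": 0.0},
}

for name, est in candidates.items():
    expected_suffering = est["p_sentient"] * est["suffering"]
    expected_pleasure = est["p_sentient"] * est["pleasure"]
    print(f"{name}: expected suffering {expected_suffering:.1f}, "
          f"expected pleasure {expected_pleasure:.1f}")
```

On this kind of accounting a rock scores zero everywhere, which matches the claim that anything goes when it comes to interactions with rocks.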
It's AGI's job to work out those numbers as best as they can be worked out.
Neuroscience has demonstrated nothing of the kind. It merely makes assumptions, equivalent to listening to the radio waves coming off a processor and making connections between patterns in them and the (false) claims about sentience being generated by a program.
Do you know how Artificial Intelligence works?
Their creators need to define what their ultimate/terminal goal is.
An advanced version of AI may find instrumental goals beyond the expectation of its creators, but they won't change the ultimate/terminal goal.
I have posted several videos discussing this. You better check them out.
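A minimal sketch of that terminal/instrumental distinction, with hypothetical names: the terminal goal is fixed at construction and never rewritten, while instrumental sub-goals are derived from it and can change as the agent learns. Nothing here is taken from the videos; it is only an illustration of the claim above.

```python
# The terminal goal is set once by the creators; instrumental goals are
# derived from it and may change, but nothing here rewrites the terminal goal.
class Agent:
    def __init__(self, terminal_goal: str):
        self._terminal_goal = terminal_goal   # fixed by the creators

    def instrumental_goals(self, situation: str) -> list[str]:
        # Derived sub-goals; a more capable agent might find ones the
        # creators did not anticipate, but they still serve the same end.
        return [f"acquire resources relevant to {situation}",
                f"preserve the ability to pursue '{self._terminal_goal}'"]

agent = Agent(terminal_goal="maximise X while minimising Y")
print(agent.instrumental_goals("environment with limited resources"))
```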
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not conscious. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.
Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are or their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.
Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not conscious. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor does not determine that the claimed feelings in the system are real.
I wasn't talking about artificially intelligent machines here. It was experiments on living humans, using medical instrumentation such as fMRI and brainwave sensors, that can determine when someone is conscious or not, and when they are feeling pain or not. We can compare the readings of the instrumentation with the experience of the human subjects to draw general patterns about which brain conditions constitute consciousness and feelings.
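The kind of pattern-matching being described could be caricatured like this: pair instrument readings with the subjects' own reports and look for a feature value that separates the two groups. The feature, numbers and threshold rule are all made up for illustration; real fMRI/EEG analysis is far more involved.

```python
# Toy pairing of a single (hypothetical) brain-signal feature with subjects'
# reports, then a crude threshold that separates the reported states.
readings = [  # (feature value, subject's own report)
    (0.82, "felt pain"), (0.77, "felt pain"),
    (0.21, "no pain"),   (0.15, "no pain"),
]

pain_values = [v for v, report in readings if report == "felt pain"]
no_pain_values = [v for v, report in readings if report == "no pain"]
threshold = (min(pain_values) + max(no_pain_values)) / 2

print(f"crude threshold: {threshold:.2f}")
print("new reading 0.6 classed as:", "pain" if 0.6 > threshold else "no pain")
```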
You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.
Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:
1. simple reflex agents
2. model-based reflex agents
3. goal-based agents
4. utility-based agents
5. learning agents
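For concreteness, a simple reflex agent (the first class above) can be sketched as nothing more than a fixed condition-action lookup; the percepts and rules here are placeholders in the spirit of Russell & Norvig's vacuum-world example.

```python
# Simple reflex agent: the action depends only on the current percept,
# via a fixed condition-action table (no model, goals or utility function).
RULES = {
    "dirty": "suck",
    "clean": "move",
}

def simple_reflex_agent(percept: str) -> str:
    return RULES.get(percept, "do nothing")

print(simple_reflex_agent("dirty"))  # suck
print(simple_reflex_agent("clean"))  # move
```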