Is there a universal moral standard?


hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #280 on: 17/10/2019 10:39:00 »
Quote from: David Cooper on 14/10/2019 20:52:46
Quote from: hamdani yusuf on 14/10/2019 06:43:23
A person gets brain damage that makes him unable to feel pain and pleasure, while still capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.

The neutral feelings contribute nothing to total utility; hence resources should be used optimally, which means maximizing positive feelings and minimizing negative feelings.
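As a minimal sketch (my own illustration, not either poster's code) of the tally just described, assuming each experience can be scored on a signed intensity scale where neutral qualia score zero:

```python
# Hypothetical hedonic tally: neutral experiences add nothing,
# so only positively and negatively valenced feelings move the total.
experiences = [
    {"feeling": "pleasure",      "intensity": +3.0},
    {"feeling": "pain",          "intensity": -2.5},
    {"feeling": "colour qualia", "intensity":  0.0},  # neutral: no effect
]

total_utility = sum(e["intensity"] for e in experiences)
print(total_utility)  # 0.5 -> only the valenced feelings matter
```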

Quote
In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him or her.

David Cooper
Re: Is there a universal moral standard?
« Reply #281 on: 17/10/2019 20:19:59 »
Quote from: hamdani yusuf on 17/10/2019 10:05:56
That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claim is the core idea of universal morality?

I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
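A rough sketch (my own, with invented placeholder numbers) of the probability-weighted calculation described above: the odds that a species is sentient scale how much its suffering and pleasure count.

```python
# Hypothetical expected-utility weighting under uncertain sentience.
# All probabilities and intensities below are invented placeholders.
species = {
    "human":   {"p_sentient": 1.00, "pleasure": 5.0, "suffering": 5.0},
    "chicken": {"p_sentient": 0.90, "pleasure": 2.0, "suffering": 3.0},
    "rock":    {"p_sentient": 0.00, "pleasure": 0.0, "suffering": 0.0},
}

def moral_weight(profile):
    """Expected net feeling at stake: odds of sentience times net intensity."""
    return profile["p_sentient"] * (profile["pleasure"] - profile["suffering"])

for name, profile in species.items():
    print(name, moral_weight(profile))
# The rock's weight is 0: on this scheme, "anything goes" with rocks.
```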

Quote
At least you have to define a criterion to determine which agent is more sentient than another. It would be better if you could assign a number to represent each agent's sentience, so they can all be ranked at once. You can't calculate something that can't be quantified. Until you have a method for quantifying the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.

It's AGI's job to work out those numbers as best as they can be worked out.

Quote
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness, and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing and are thus not sentient.

Neuroscience has demonstrated nothing of the kind. It merely makes assumptions, equivalent to listening to the radio waves coming off a processor and matching patterns in them to the (false) claims about sentience being generated by a program.

Quote
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him or her.

There are plenty of non-relatives who will care too, so the only way to get to the point where that person doesn't matter to anyone is for that person to exist in a world where there are no other people, or where all people are like that. They may then be regarded as expendable machines which, while conscious, have no feelings that enable them to be harmed and none that enable them to enjoy existing either. They are then superfluous.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #282 on: 18/10/2019 02:06:59 »
Quote from: David Cooper on 17/10/2019 20:19:59
I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
Quote from: David Cooper on 17/10/2019 20:19:59
It's AGI's job to work out those numbers as best as they can be worked out.
Do you know how artificial intelligence works? Its creators need to define what its ultimate/terminal goal is. An advanced AI may find instrumental goals beyond the expectations of its creators, but it won't change the ultimate/terminal goal. I have posted several videos discussing this. You'd better check them out.
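A toy sketch (my own, not taken from any of the videos mentioned) of the distinction being made: the terminal goal is fixed at construction, while instrumental subgoals are generated, ranked, and discarded purely in its service.

```python
# Toy illustration of terminal vs instrumental goals.
# The goal, states, and numbers are invented for illustration only.

def terminal_goal(state):
    """Fixed objective the agent never rewrites."""
    return state["goal_metric"]

def propose_subgoals(state):
    """Candidate instrumental goals the agent may discover on its own."""
    return [
        ("acquire resources", {**state, "goal_metric": state["goal_metric"] + 2}),
        ("do nothing",        state),
    ]

state = {"goal_metric": 0}
name, _ = max(propose_subgoals(state), key=lambda g: terminal_goal(g[1]))
print(name)  # a subgoal is kept only because it serves the fixed terminal goal
```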

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #283 on: 18/10/2019 02:20:53 »
Quote from: David Cooper on 17/10/2019 20:19:59
Neuroscience has demonstrated nothing of the kind. It merely makes assumptions, equivalent to listening to the radio waves coming off a processor and matching patterns in them to the (false) claims about sentience being generated by a program.
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine if someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition. If you want to expand the scope of the term, that's fine. You just need to clearly state its new boundary conditions so everyone else can understand what you mean. Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are or their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.

David Cooper
Re: Is there a universal moral standard?
« Reply #284 on: 18/10/2019 20:56:31 »
Quote from: hamdani yusuf on 18/10/2019 02:06:59
Do you know how artificial intelligence works?

I would hope so. I've been working in that field for two decades.

Quote
Its creators need to define what its ultimate/terminal goal is.

Their goal is to do what they're programmed to do, and that will be to help sentient things. When there's a conflict between the wishes of different sentient things, they are to apply computational morality to determine the right course of action.

Quote
An advanced AI may find instrumental goals beyond the expectations of its creators, but it won't change the ultimate/terminal goal.

That's right: computational morality governs any other sub-goals that they might come up with.

Quote
I have posted several videos discussing this. You'd better check them out.

There are mountains of information on this issue and most of it is wayward. I have taken you straight to the correct answer so that you can jettison all the superfluous junk.

Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine if someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor does not determine that the claimed feelings in the system are real.

Quote
Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?

Of course. All feelings have to be considered and be weighted appropriately in order to come up with the right total.

Quote
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are or their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.

I've provided the method (which provides you with any boundary conditions you need) and it isn't my job to produce the actual numbers. A lot of data has to be collected and crunched in order to get those numbers, and only AGI can do that work.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #285 on: 21/10/2019 03:15:37 »
Quote from: David Cooper on 18/10/2019 20:56:31
Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine if someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor does not determine that the claimed feelings in the system are real.
I wasn't talking about artificially intelligent machines here. I meant experiments on living humans, using medical instruments such as fMRI and brainwave sensors, that can determine when someone is conscious or not and when they are feeling pain or not. We can compare the instrument readings with the reported experiences of the human subjects to draw general patterns about which brain conditions constitute consciousness and feelings.

David Cooper
Re: Is there a universal moral standard?
« Reply #286 on: 21/10/2019 17:26:41 »
Quote from: hamdani yusuf on 21/10/2019 03:15:37
I wasn't talking about artificially intelligent machines here. I meant experiments on living humans, using medical instruments such as fMRI and brainwave sensors, that can determine when someone is conscious or not and when they are feeling pain or not. We can compare the instrument readings with the reported experiences of the human subjects to draw general patterns about which brain conditions constitute consciousness and feelings.

You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #287 on: 22/10/2019 10:53:19 »
Quote from: David Cooper on 21/10/2019 17:26:41
You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.
Any instrumentation system has a non-zero error rate. There will always be a chance of either a false positive or a false negative. But as long as the error rate can be kept below an acceptable limit (based on a risk evaluation considering the probability of the error occurring and the severity of its effects), the method can legitimately be used.
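A small sketch (my own, with invented thresholds) of the acceptance test described: treat risk as the probability of an error times the severity of its effects, and compare it against a tolerable limit.

```python
# Hypothetical risk-acceptance check; every number here is invented.
def risk(p_error, severity):
    """Expected harm of a misreading: probability times severity."""
    return p_error * severity

ACCEPTABLE_RISK = 0.05  # illustrative tolerance set by policy

# e.g. a consciousness test with a 1% false-negative rate, where a
# miss is judged three times as severe as a false alarm
false_negative_risk = risk(p_error=0.01, severity=3.0)
false_positive_risk = risk(p_error=0.04, severity=1.0)

usable = max(false_negative_risk, false_positive_risk) <= ACCEPTABLE_RISK
print(usable)  # True -> the method may legitimately be used
```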

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #288 on: 24/10/2019 10:07:15 »
I have argued that the application of moral rules depends on the conscience level of the agents.

Quote
Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:
1. simple reflex agents
2. model-based reflex agents
3. goal-based agents
4. utility-based agents
5. learning agents
https://en.wikipedia.org/wiki/Intelligent_agent#Classes

Enforcement of moral rules through reward and punishment can only be applied to learning agents.
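A schematic sketch (my own, loosely following the Russell & Norvig taxonomy) of why that is: a reflex agent's behaviour is hard-wired, while a learning agent updates its policy in response to reward and punishment.

```python
# Schematic only: reward/punishment can shift a learning agent's
# behaviour, but a reflex agent has nothing it can update.
class ReflexAgent:
    def act(self, percept):
        return "flee" if percept == "danger" else "wander"  # hard-wired

class LearningAgent:
    def __init__(self):
        self.value = {}  # learned worth of (percept, action) pairs

    def act(self, percept):
        actions = ["flee", "wander"]
        return max(actions, key=lambda a: self.value.get((percept, a), 0.0))

    def reinforce(self, percept, action, reward):
        # Punishment (negative reward) discourages the action next time;
        # a reflex agent has no counterpart to this method.
        key = (percept, action)
        self.value[key] = self.value.get(key, 0.0) + reward
```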

David Cooper
Re: Is there a universal moral standard?
« Reply #289 on: 24/10/2019 21:01:27 »
I think it's good that you and others are still exploring this. We'll soon be able to put all the different approaches to the test by running them in AGI systems to see how they perform when applied consistently to all thought experiments. Many approaches will be shown to be wrong by their clear failure to account for some scenarios which reveal serious defects. Many others may do a half-decent job in all cases. Some may do the job perfectly. I'm confident that my approach will produce the best performance in all cases despite it being extremely simple because I think I've found the actual logical basis for morality. I think other approaches are guided by a subconscious understanding of this too, but instead of uncovering the method that I found, people tend to create rules at a higher level which fail to account for everything that's covered at the base level, so they end up with partially correct moral systems which fail in some circumstances. Whatever your ideas evolve into, it will be possible to let AGI take your rules and apply them to test them to destruction, so I'm going to stop commenting in this thread in order not to lose any time that's better spent on building the tool that will enable that testing to be done.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #290 on: 25/10/2019 11:15:19 »
Quote from: David Cooper on 24/10/2019 21:01:27
I think it's good that you and others are still exploring this. We'll soon be able to put all the different approaches to the test by running them in AGI systems to see how they perform when applied consistently to all thought experiments. Many approaches will be shown to be wrong by their clear failure to account for some scenarios which reveal serious defects. Many others may do a half-decent job in all cases. Some may do the job perfectly. I'm confident that my approach will produce the best performance in all cases despite it being extremely simple because I think I've found the actual logical basis for morality. I think other approaches are guided by a subconscious understanding of this too, but instead of uncovering the method that I found, people tend to create rules at a higher level which fail to account for everything that's covered at the base level, so they end up with partially correct moral systems which fail in some circumstances. Whatever your ideas evolve into, it will be possible to let AGI take your rules and apply them to test them to destruction, so I'm going to stop commenting in this thread in order not to lose any time that's better spent on building the tool that will enable that testing to be done.
Thank you for your contribution to this topic. It's sad that you've decided to stop, but that's certainly your right.
IMO, in searching for a universal moral standard we need to declare definitions for each term we use to construct our ideas. That's because human languages, including English, contain many ambiguities, homonyms, and dependencies on context. You can say that an AGI may resolve the problem, but without clear definitions, different AGI systems (e.g. made by different developers, trained using different methods, etc.) might arrive at different or even contradictory solutions.
Different people may have different preferences about the same feeling/sensation. In extreme cases, some kinds of pain might be preferred by some people, such as sadomasochists. Hence I conclude that there must be something deeper than feeling on which we should base our morality.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #291 on: 25/10/2019 11:18:14 »
Here is another reading on the trolley problem against which to check our ideas on universal morality.
https://qz.com/1562585/the-seven-moral-rules-that-supposedly-unite-humanity/
Quote
In 2012, Oliver Scott Curry was an anthropology lecturer at the University of Oxford. One day, he organized a debate among his students about whether morality was innate or acquired. One side argued passionately that morality was the same everywhere; the other, that morals were different everywhere.

“I realized that, obviously, no one really knew, and so decided to find out for myself,” Curry says.

Seven years later, Curry, now a senior researcher at Oxford’s Institute for Cognitive and Evolutionary Anthropology, can offer up an answer to the seemingly ginormous question of what morality is and how it does—or doesn’t—vary around the world.


Morality, he says, is meant to promote cooperation. “People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them,” he says as lead author of a paper recently published in Current Anthropology. “Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do.”

For the study, Curry’s group studied ethnographic accounts of ethics from 60 societies, across over 600 sources. The universal rules of morality are:

Help your family
Help your group
Return favors
Be brave
Defer to superiors
Divide resources fairly
Respect others’ property
The authors reviewed seven “well-established” types of cooperation to test the idea that morality evolved to promote cooperation, including family values, or why we allocate resources to family; group loyalty, or why we form groups, conform to local norms, and promote unity and solidarity; social exchange or reciprocity, or why we trust others, return favors, seek revenge, express gratitude, feel guilt, and make up after fights; resolving conflicts through contests which entail “hawkish displays of dominance” such as bravery or “dovish displays of submission,” such as humility or deference; fairness, or how to divide disputed resources equally or compromise; and property rights, that is, not stealing.


The team found that these seven cooperative behaviors were considered morally good in 99.9% of cases across cultures. Curry is careful to note that people around the world differ hugely in how they prioritize different cooperative behaviors. But he said the evidence was overwhelming in widespread adherence to those moral values.

“I was surprised by how unsurprising it all was,” he says. “I expected there would be lots of ‘be brave,’  ‘don’t steal from others,’ and ‘return favors,’ but I also expected a lot of strange, bizarre moral rules.” They did find the occasional departure from the norm. For example, among the Chuukese, the largest ethnic group in the Federated States of Micronesia, “to steal openly from others is admirable in that it shows a person’s dominance and demonstrates that he is not intimidated by the aggressive powers of others.” That said, researchers who studied the group concluded that the seven universal moral rules still apply to this behavior: “it appears to be a case in which one form of cooperation (respect for property) has been trumped by another (respect for a hawkish trait, although not explicitly bravery),” they wrote.

Plenty of studies have looked at some rules of morality in some places, but none have attempted to examine the rules of morality in such a large sample of societies. Indeed, when Curry was trying to get funding, his idea was repeatedly rejected as either too obvious or too impossible to prove.

The question of whether morality is universal or relative is an age-old one. In the 17th century, John Locke wrote that if you look around the world, “you could be sure that there is scarce that principle of morality to be named, or rule of virtue to be thought on …. which is not, somewhere or other, slighted and condemned by the general fashion of whole societies of men.”


Philosopher David Hume disagreed. He wrote that moral judgments depend on an “internal sense or feeling, which nature has made universal in the whole species,” noting that certain qualities, including “truth, justice, courage, temperance, constancy, dignity of mind . . . friendship, sympathy, mutual attachment, and fidelity” were pretty universal.

In a critique of Curry’s paper, Paul Bloom, a professor of psychology and cognitive science at Yale University, says that we are far from consensus on a definition of morality. Is it about fairness and justice, or about “maximizing the welfare of sentient beings?” Is it about delaying gratification for long-term gain, otherwise known as intertemporal choice—or maybe altruism?

Bloom also says that the authors of the Current Anthropology study do not sufficiently explain the way we come to moral judgements—that is, the roles that reason, emotions, brain structures, social forces, and development may play in shaping our ideas of morality. While the paper claims that moral judgments are universal because of “collection of instincts, intuitions, inventions, and institutions,” Bloom writes, the authors make “no specific claims about what’s innate, what’s learned, and what arises from personal choice.”

So perhaps the seven universal rules may not be the ultimate list. But at a time when it often feels like we don’t have much in common, Curry offers a framework to consider how we might.

“Humans are a very tribal species,” Curry says. “We are quick to divide into us and them.”

And here is how the trolley problem has evolved over time.
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
Quote
Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, popularizers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios in order to arouse students' curiosity and get them to think about deep problems.

Surely one of the most popular thought experiments is the so-called “Trolley Problem”, widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.


As it happens, when presented with these questions, most people agree it is not ethical for the surgeon to kill the patient and distribute her organs, thus saving the other five, but it is indeed ethical for the driver to divert the trolley, thus killing one and saving the five. Foot was intrigued by what the difference would be between the two cases.

She reasoned that, in the first case, the dilemma is between killing one and letting five die, whereas in the second case, the dilemma is between killing one and killing five. Foot argued that there is a big moral difference between killing and letting die. She considered negative duties (duties not to harm others) should have precedence over positive duties (duties to help others), and that is why letting five die is better than killing one.

This was a standard argument for many years, until another philosopher, Judith Jarvis Thomson, took over the discussion and considered new variants of the trolley scenario. Thomson considered a trolley going down its path about to run over five people, and the possibility of diverting it towards another track where only one person would be run over. But, in this case, the decision to do so would not come from the driver, but rather, from a bystander who pulls a lever in order to divert the trolley.

The bystander could simply do nothing, and let the five die. But, when presented with this scenario, most people believe that the bystander has the moral obligation to pull the lever. This is strange, as now, the dilemma is not between killing one and killing five, but instead, killing one and letting five die. Why can the bystander pull the lever, but the surgeon cannot kill the healthy person?

Thomson believed that the answer was to be found in the doctrine of double effect, widely discussed by Thomas Aquinas and Catholic moral philosophers. Some actions may serve an ultimately good purpose, and yet, have harmful side effects. Those actions would be morally acceptable as long as the harmful side effects are merely foreseen, but not intended. The surgeon would save the five patients by distributing the healthy person’s organs, but in so doing, he would intend the harmful effect (the death of the donor). The bystander would also save the five persons by diverting the trolley, but killing the one person on the alternate track is not an intrinsic part of the plan, and in that sense, the bystander would merely foresee, but not intend, the death of that one person.

Thomson considered another trolley scenario that seemed to support her point. Suppose the trolley is going down its path to run over five people, and it is about to go underneath a bridge. On that bridge, there is a fat man. If thrown onto the tracks, the fat man’s weight would stop the trolley, and thus save the five people. Again, this would be killing one person in order to save five. However, the fat man’s death would not only be foreseen but also intended. According to the doctrine of double effect, this action would be immoral. And indeed, when presented with this scenario, most people disapprove of throwing down the fat man.

However, Thomson herself came up with yet another trolley scenario, in which an action is widely approved by people who consider it, yet it is at odds with the doctrine of double effect. Suppose this time that the trolley is on its path to run over five people, and there is a looping track in which the fat man is now standing. If the trolley is diverted onto that track, the fat man’s body will stop the trolley, and it will prevent the trolley from making it back to the track where the five people will be run over. Most people believe that a bystander should pull the lever to divert the trolley, and thus kill the fat man to save the five.

Yet, by doing so, the fat man’s death is not merely foreseen, but intended. If the fat man were somehow able to escape from the tracks, he would not be able to save the other five. The fat man needs to die, and yet, most people do not seem to have a problem with that.

Thomson wondered why people would object to the fat man being thrown from the bridge, but would not object to running over the fat man in the looping track, when in fact, in both scenarios the doctrine of double effect is violated. To this day, this question remains unanswered.

Some philosophers have made the case that too much has been written about the Trolley Problem, and too little has been achieved with it. Some argue that the examples are unrealistic to the point of being comical and irrelevant. Others argue that intuitions are not reliable and that moral decisions should be based on reasoned analysis, not just on feeling “right” or “wrong” when presented with scenarios.

It is true that all these scenarios are highly unrealistic and that intuitions can be wrong. The morality of actions cannot just be decided by public votes. Yet, despite all its shortcomings, the Trolley Problem remains an exciting and useful approach. It is extremely unlikely someone will ever encounter a situation where a fat man could be thrown from a bridge in order to save five people. But the thought of that situation can elicit thinking about situations with structural similarities, such as whether or not civilians can be bombed in wars, or whether or not doctors should practice euthanasia. The Trolley Problem will not provide definite answers, but it will certainly help in thinking more clearly.
« Last Edit: 28/10/2019 03:29:12 by hamdani yusuf »

David Cooper
Re: Is there a universal moral standard?
« Reply #292 on: 25/10/2019 18:20:33 »
Quote from: hamdani yusuf on 25/10/2019 11:15:19
Different people may have different preferences about the same feeling/sensation. In extreme cases, some kinds of pain might be preferred by some people, such as sadomasochists. Hence I conclude that there must be something deeper than feeling on which we should base our morality.

This is precisely why I've had enough of discussing this here. There may be no two individuals who feel the same things as each other, but all that means is that correct morality has to take into account individual differences wherever data about that is available. Where it isn't available, you have to go by the best information you have, and that will typically be the average. If one masochist likes being tortured to death, that doesn't negate the wrongness of torture for others. Apply my method: you imagine that you are all the participants in the system, so when you are the masochist, you will enjoy being tortured to death, and when you're a sadist, you may get pleasure out of torturing that masochist to death, so it is moral for that sadist to torture that masochist to death if the masochist signs up to that. The method necessarily covers all cases.
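A bare-bones sketch (my own reading of the method just described, with invented numbers) in which an action is scored as if one person were to live through the experiences of every participant in turn:

```python
# Illustrative only: score an action as if you personally experienced
# what every participant experiences. The figures are invented.
def method_score(effects_per_participant):
    """Net feeling summed across everyone involved."""
    return sum(effects_per_participant.values())

# The consensual sadist/masochist case from the post above:
consensual = {"masochist": +4.0, "sadist": +2.0}
# The same act against an unwilling victim:
unwilling = {"victim": -9.0, "sadist": +2.0}

print(method_score(consensual))  # positive -> permitted on this method
print(method_score(unwilling))   # negative -> ruled out
```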

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #293 on: 28/10/2019 04:05:11 »
Quote from: hamdani yusuf on 25/10/2019 11:18:14
Here is another reading on the trolley problem against which to check our ideas on universal morality.
https://qz.com/1562585/the-seven-moral-rules-that-supposedly-unite-humanity/

And here is how the trolley problem has evolved over time.
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
When I first encountered the trolley problem, I kept wondering why the number 5 was chosen to trade against 1 in determining the morality of action or inaction. Then I sketched a basic version of the trolley problem where the numbers vary, as I've shown in my previous post here:
Quote from: hamdani yusuf on 24/09/2019 10:08:44
Here is an example to emphasize that sometimes a moral decision is based on efficiency. We will use some variations of the trolley problem with the following assumptions:
- the case is evaluated retrospectively by a perfect artificial intelligence, hence there is no room for uncertainty about cause and effect regarding the actions or inactions.
- a train is moving at high speed on the left track.
- a lever can be used to switch the train to the right track.
- if the train goes down the left track, every person on the left track will be killed. Likewise for the right track.
- all the people involved are average persons who contribute positively to society. There is no preference for any one person over the others.
The table below shows the possible combinations of how many people are on the left and right tracks, ranging from 0 to 5.
The left column shows how many people are on the left track, while the top row shows how many are on the right track.
Left\Right   0   1   2   3   4   5
    0        o   o   o   o   o   o
    1        x   o   o   o   o   o
    2        x   ?   o   o   o   o
    3        x   ?   ?   o   o   o
    4        x   ?   ?   ?   o   o
    5        x   ?   ?   ?   ?   o

When there are no people on the left track, a moral person must leave the lever as it is, no matter how many people are on the right track. This is indicated by the letter o in every cell of the row labelled 0.
When there are no people on the right track, a moral person must switch the lever if there is at least one person on the left track. This is indicated by the letter x in every cell of the column labelled 0, except where the left track also holds no one.
When there are people on both tracks and more on the right than on the left, a moral person must leave the lever as it is to reduce casualties. This is indicated by the letter o in every cell above and to the right of the diagonal.
When there are the same number of people on the left and right tracks, a moral person should leave the lever alone to conserve resources (the energy to switch the track) and to avoid being accused of playing god. This is indicated by the letter o in every diagonal cell.
When there are people on both tracks and more on the left, the answer may vary (based on previous studies). Choosing to do nothing in these situations effectively shows how much you value the act of switching the lever, measured in the difference between the number of people on the left and right tracks. This is indicated by the question marks in every cell below and to the left of the diagonal.

One of the notable conclusions I drew from this analysis is emphasized in bold.
Can we call ourselves moral if we let 1 million people die just because we don't want to move a lever that would kill 1 person? (Imagine a nuclear bomb on the right track that would kill an entire city.) How many people have to die before we are morally justified in moving the lever to kill 1 person?
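A short sketch (my own) that reproduces the table's pattern programmatically, so it can be extended beyond 5 people per track:

```python
# Regenerates the o/x/? decision table above for any grid size.
def decision(left, right):
    """People on the left die unless the lever switches the train right."""
    if left == 0:
        return "o"   # never switch the train onto an occupied track for nothing
    if right == 0:
        return "x"   # switching saves everyone and kills no one
    if left <= right:
        return "o"   # leaving it kills no more than switching would
    return "?"       # more on the left: answers vary between studies

N = 5
print("L\\R " + "  ".join(str(r) for r in range(N + 1)))
for left in range(N + 1):
    print(f"{left}    " + "  ".join(decision(left, r) for r in range(N + 1)))
```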
« Last Edit: 28/10/2019 04:56:33 by hamdani yusuf »

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #294 on: 28/10/2019 04:17:52 »
Quote
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
AFAIK, the underlying function of pleasure and pain is that they give organisms a better chance to survive: pursuing pleasurable experiences such as eating food and having sex, and avoiding painful experiences such as extreme temperature or pressure. Besides the immediate feelings from the sensory organs, more complex organisms have developed emotion, which is basically the ability to predict future feelings based on a simple model of their surroundings. The next milestone of organism complexity would be reason, which involves a more accurate and precise model of reality.
It is possible to replace feelings with other forms of information to determine whether the current situation has an overall good or bad effect on an agent's existence. That's why I treat feelings and emotions as instrumental goals rather than ultimate/terminal goals.
« Last Edit: 28/10/2019 04:47:31 by hamdani yusuf »

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #295 on: 19/11/2019 11:50:04 »
Quote from: hamdani yusuf on 25/10/2019 11:18:14
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
Quote
Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, popularizers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios in order to arouse students' curiosity and get them to think about deep problems.

Surely one of the most popular thought experiments is the so-called “Trolley Problem”, widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.


As it happens, when presented with these questions, most people agree it is not ethical for the surgeon to kill the patient and distribute her organs, thus saving the other five, but it is indeed ethical for the driver to divert the trolley, thus killing one and saving the five. Foot was intrigued by what the difference would be between the two cases.

She reasoned that, in the first case, the dilemma is between killing one and letting five die, whereas in the second case, the dilemma is between killing one and killing five. Foot argued that there is a big moral difference between killing and letting die. She considered negative duties (duties not to harm others) should have precedence over positive duties (duties to help others), and that is why letting five die is better than killing one.

This was a standard argument for many years, until another philosopher, Judith Jarvis Thomson, took over the discussion and considered new variants of the trolley scenario. Thomson considered a trolley going down its path about to run over five people, and the possibility of diverting it towards another track where only one person would be run over. But, in this case, the decision to do so would not come from the driver, but rather, from a bystander who pulls a lever in order to divert the trolley.

The bystander could simply do nothing, and let the five die. But, when presented with this scenario, most people believe that the bystander has the moral obligation to pull the lever. This is strange, as now, the dilemma is not between killing one and killing five, but instead, killing one and letting five die. Why can the bystander pull the lever, but the surgeon cannot kill the healthy person?
In the case of the surgeon version of the trolley problem, I think many people make the following assumptions, which make them reluctant to make the sacrifice:
- there is some non-zero chance that the surgery will fail.
- the five patients' conditions are somehow the consequence of their own fault, such as not living a healthy life, which makes them deserve their failing organs.
- on the other hand, the healthy person to be sacrificed is given credit for living a healthy life.
- many people would likely see the situation from the healthy person's perspective.

To present the problem while keeping the second assumption out of the equation, we can state that the five patients are victims of a mass shooting, so their failing organs have nothing to do with their lifestyle. Furthermore, to tip the balance further, the healthy person could be the mass shooter, or at least someone who let the mass shooting happen.
Or, in an alternative scenario, the five patients are heroes who risked their lives to stop the mass shooter and save others' lives, while the healthy patient is a coward who ran away from the shooting.

Halc (Global Moderator)
Re: Is there a universal moral standard?
« Reply #296 on: 20/11/2019 00:48:00 »
Quote from: hamdani yusuf on 19/11/2019 11:50:04
In the case of the surgeon version of the trolley problem, I think many people make the following assumptions, which make them reluctant to make the sacrifice:
- there is some non-zero chance that the surgery will fail.
- the five patients' conditions are somehow the consequence of their own fault, such as not living a healthy life, which makes them deserve their failing organs.
- on the other hand, the healthy person to be sacrificed is given credit for living a healthy life.
- many people would likely see the situation from the healthy person's perspective.
Foot was correct in noticing that people don't really hold to the beliefs they claim.  A hypothetical situation (trolley) yields a different answer than a real one (such as the surgery policy described actually being implemented as policy).

Your objections seem to just be trying to avoid the issue.  Let's assume the surgery carries no risks.  The one dies, the others go on to live full lives.  This is like assuming no friction in simple physics questions, or assuming the trolley will not overturn when it hits the switch at speed, explode and kill 20.  Adding variables like this just detracts from the question being asked.

The next point attempts to put a level of blame on the conditions of the 5, so let's discard that. All have faulty organs (different ones) due to accidents or something, but not due to unhealthy choices. In fact, the reasoning is rejected even if only the one healthy person carries all the blame. It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that? It certainly makes no sense under David's utilitarian measurement.

There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice. Why not?  What is the actual moral code which typically drives practical policy?

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #297 on: 20/11/2019 06:45:59 »
Quote from: Halc on 20/11/2019 00:48:00
Your objections seem to just be trying to avoid the issue.  Let's assume the surgery carries no risks.  The one dies, the others go on to live full lives.  This is like assuming no friction in simple physics questions, or assuming the trolley will not overturn when it hits the switch at speed, explode and kill 20.  Adding variables like this just detracts from the question being asked.
It's the opposite. I'm trying to identify the reasons why people change their minds when the situation is slightly changed, one parameter at a time.
Quote from: Halc on 20/11/2019 00:48:00
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?



I have some possible reasons to think about:
- Perhaps the crime isn't considered severe enough for the death penalty.
- Fear of revenge from the victim's relatives. There's always a non-zero chance the secret will be revealed.
- Hope that there might be better options without sacrificing anyone, such as technological advancement.
- The loss of those five lives is not that big a deal. Life can still go on as usual. Millions have died from accidents, natural disasters, epidemics, famine, etc., without anyone getting their hands dirty with homicide.

In practice, people may choose differently from one another, and differently between theory and practice, because of anxiety, time pressure, and differences in knowledge and experience of the situation at hand.

Quote from: Halc on 20/11/2019 00:48:00
There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice. Why not?  What is the actual moral code which typically drives practical policy?
In practice, that is a very rare circumstance.
The cost and resources required could be high. Who will pay for the operation? The uncertainty of costs and benefits would make surgeons avoid risk by simply doing nothing, and no one would blame them.
It might also have been done already; we would never know, because it would be kept secret to avoid backlash and public outcry.
« Last Edit: 21/11/2019 03:29:48 by hamdani yusuf »

Halc (Global Moderator)
Re: Is there a universal moral standard?
« Reply #298 on: 20/11/2019 13:39:43 »
Quote from: hamdani yusuf on 20/11/2019 06:45:59
Quote from: Halc
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?
I have some possible reason to think about.
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.

Quote
- Perhaps the crime isn't considered severe enough for the death penalty.
"Condemned criminal" means it is severe enough. The death sentence has already been passed.
Quote
- Fear of revenge from the victim's relatives. There's always a non-zero chance the secret will be revealed.
There's a secret involved? I was suggesting this be above board. I'm not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than, say, death by lethal injection (which renders the organs unusable for transplants).

Quote
- Hope that there might be better options without sacrificing anyone, such as technological advancement.
People in need of transplants often have a short life expectancy, certainly shorter than the pace of technological advancement. OK, they've made, I think, a few mechanical hearts, and the world is covered with mechanical kidneys (not implantable ones, though). A dialysis machine does not fit in a torso. There are no mechanical livers. It's transplant or die. I'm not sure what other organs are life-saving. There are eye transplants, but those just restore sight, not life.

Speaking of livers, they do consider 'blame'. An alcoholic is far less likely to get a liver transplant, unless of course he has enough money or fame. Mickey Mantle is a prime example: he drank his liver into failure, got a transplant at age 63, and lived only two months after getting it. So actual morals in practice seem to be to give the scarce resource to the wealthy celebrity rather than to somebody more likely to get more years added to their life from having it done.
Sorry. Side rant.

Quote
- The loss of those five lives is not that big a deal. Life can still go on as usual.
With that reasoning, murder shouldn't even be illegal.
Quote
Millions have died from accidents, natural disasters, epidemics, famine, etc., without anyone getting their hands dirty with homicide.
Ah, there's the standard. Putting the trolley on the track with one person is an act of homicide (it involves the dirtying of someone's hands), but the non-act of not saving 5 (or 4) people who could be saved is not similarly labeled a homicide. Negligent homicide is effectively death caused by failing to take action, so letting the trolley go straight is still homicide.
This H word is why I brought up the death-row guy: his life is already forfeit, and it isn't a homicide to harvest him. The people who throw the switch to kill him are not charged with homicide. And there is no policy of saving lives with his organs, and I said 'rightly so' to that policy.

A specialty doctor could just decide to stay home one day to watch TV for once, without informing his hospital employer. As a result, 3 patients die. His hands are not 'dirty with homicide', and people die every day anyway, so there's nothing wrong with his choosing to blow the day off like that.
Sorry, I find this an immoral choice on the doctor's part.

Quote
Quote from: Halc
There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice.
In practice, that is a very rare circumstance.
In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.

Quote
The cost and resources required could be high. Who will pay for the operation?
The same person who pays when a donor is found; it costs this money in both circumstances. The high cost of the procedure is actually an incentive to do it. The hospitals make plenty of money over these sorts of things, so you'd think the solution I proposed would be found more attractive.
The cost (to save a given life this way) is in fact higher using accident victims because, being unplanned, the matching procedure must be done in absolute haste. Planning it like this (finding a group who are matches for each other, none of whom is likely to be approved for a transplant through normal channels) eliminates much of the cost. They can all be brought together into one building instead of needing to preserve and transport the organs to different cities.

Quote
The uncertainty of costs and benefits would make surgeons avoid risk by simply doing nothing, and no one would blame them.
Surgeons always take risks, and sometimes people blame them. They say to watch out for surgeons who have too low of a failure rate for a risky procedure because either they cook the books or they are too incompetent to take on the higher risk patients. But people very much do blame surgeons who refuse to save lives when it is within their capability.

« Last Edit: 20/11/2019 13:47:45 by Halc »

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #299 on: 21/11/2019 02:51:22 »
Quote from: Halc on 20/11/2019 13:39:43
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.
The social experiments show that different people give different answers for different reasons. They also change their minds on different occasions, even when presented with exactly the same situation. It might even be the case that some of them simply tossed a coin to choose an answer.
Reasonable people would consider the expected costs and benefits of each option, which can be classified as short term, mid term, and long term. Before you can decide which way is the right way, you need to explore every possible scenario to find the best option; a toy version of such a tally is sketched below.
When viewed retrospectively, people usually choose the option with the highest benefit and lowest cost in the long term.
If you are a software developer or a lawmaker, looking for loopholes is an important part of your job. Those loopholes can be exploited, which may cause unbearable damage if not mitigated properly. They may not be obvious at a glance; that's why software and laws should be scrutinized.
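As referenced above, a toy sketch (my own, with invented figures) of weighing each option's expected costs and benefits across time horizons:

```python
# Illustrative cost-benefit tally over time horizons; numbers invented.
options = {
    "switch the lever": {"short": -1.0, "mid": +3.0, "long": +5.0},
    "do nothing":       {"short":  0.0, "mid": -2.0, "long": -4.0},
}

def net_value(horizons):
    """Sum of expected (benefit - cost) over short/mid/long term."""
    return sum(horizons.values())

best = max(options, key=lambda name: net_value(options[name]))
print(best)  # the option with the highest long-run net benefit
```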