Is there a universal moral standard?


hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #200 on: 20/09/2019 12:54:34 »
Quote from: Halc on 19/09/2019 04:36:08
Quote from: hamdani yusuf on 19/09/2019 03:26:48
How do you determine which priority is the higher one?
Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.

Quote from: hamdani yusuf on 19/09/2019 04:14:35
Performing surgery on the child is morally better than letting them die.
While I agree, how do you know this is true?  I can argue that it is better to let the kid die if there is a higher goal to breed humans resistant to appendix infections, like the Nepalese have done. I can think of other goals as well that lead to that decision.  There seems to be no guidance at all from some universal moral code. I don't think there is one of course.

I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine.  My wife would have survived until the birth of our first child.  The human race is quite a wreck since we no longer allow defects to be eliminated, and we're not nearly as 'finished' as most species that have had time to perfect themselves to their niche.
Your question above has been answered by David. I just want to add that actions are valued by their effectiveness and efficiency: actions are effective if they achieve the goal, and more efficient if they use fewer resources.
Improvements to the human species are not limited to genetics; epigenetic options are available as well. Nor are they limited to biological or organic methods: electronics and nanotechnology also have promising prospects.
Given the above, eugenics is no longer among the best options because it is very inefficient. The inefficiency is even more dramatic if we also count the resistance and conflict that it causes.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #201 on: 20/09/2019 13:15:14 »
Quote from: Halc on 18/09/2019 14:24:33
The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been losers, never something that your people are doing right now, like say employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid since so much is produced via various methods that a typical person would find inhumane, and hard to see since you're paying somebody else to do (and conceal from you) the actual act.  At least that is an example of something done by the winner.
I can't expect anyone who has newly joined this discussion to follow all the conversations from the start. As the title suggests, this thread is meant to look for a universal standard by which to evaluate moral actions in as diverse situations as possible. I want to answer why an action can be considered moral in some situations but immoral, or less moral, in others.
Winners can also do things considered immoral. I have at least mentioned Joshua, and I might also include some actions by Genghis Khan. Those examples were chosen simply because I thought most people agreed on their immorality.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #202 on: 20/09/2019 13:28:27 »
Quote from: Halc on 18/09/2019 14:24:33
You also need to decide if consciousness is relevant in a continuous or binary way. If it's a matter of degree, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult. If it's a threshold thing (do what you want to anything below the threshold, but not above it), then it needs a definition. A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.
For instance, a human embryo obviously has far less consciousness than does a pig, so eating pork is more wrong than abortion by this level-of-consciousness argument, be it a spectrum thing or binary threshold.
Similarly, it's OK to kill a person under anesthesia because they're not conscious at the time, and will not suffer for it.  These are some of the reasons the whole 'conscious' argument seems to fall apart.
I have said several times already that universal morality is evaluated from the eventual result, with complete relevant information available. Otherwise, we must deal with probabilities based on the available information.
The child of today is expected to become an adult, while the adult is expected to grow old and eventually die; that is the default expectation when no other information describes the situation at hand. Hence, harming the child is immoral by the universal moral standard. This expectation argument also answers your later objection.
When a human being is brain dead or heavily injured, and nothing can be done with currently available technology to save him, or trying to save him costs so much that it would harm more living people, then letting him die is not immoral. Such situations are not rare: many mass-casualty incidents and natural disasters lead to them.

David Cooper
Re: Is there a universal moral standard?
« Reply #203 on: 20/09/2019 19:11:10 »
Quote from: hamdani yusuf on 20/09/2019 10:57:16
But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.

Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.

Here's a simple illustration of the last point. If you are to live the life of a fly and then the life of a human and know that one of them will be killed early in such a way that if the fly dies early it will be killed 10% of the way through its expected life, while if the human dies early he will be killed 90% of the way through his expected life, would you prefer the fly to be the one that dies early or the human? That last 10% of the human's life may not be his best years, but they're probably inordinately more valuable than the 90% of the fly's expected life, so it's an easy choice even before you factor in all the upset that would be caused to other people who care about the human if he is killed before his time. That is what makes humans so much more valuable than "lesser" animals: those lesser animals aren't lesser in terms of the worth of the sentience within them because it may be exactly the same as the sentience in a human, but the opportunities available to the sentience in each are dependent on the hardware that it is installed in. The human is simply better hardware for sentience to be in than the fly.

If we were to make the same comparison with a human and a bird, it becomes more difficult to call. The lost 90% of the bird's life could in many cases be much more valuable than the lost 10% of the human's life, particularly if it's a wild bird. Should a lonely old man shoot birds for food or just die now and let the birds go on living? That's a tough dilemma. If he's just eating chickens that have been grown for food, that's an easier choice: once he's dead, there will be no more chickens in his yard.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #204 on: 22/09/2019 00:12:07 »
Quote from: David Cooper on 20/09/2019 19:11:10
Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.
When someone suggests that you should follow a rule X, a natural response would be: what is the expected consequence if we follow X, and why is it good for you? What if we ignore it, and why is that bad?
Following a universal moral rule as I suggest here will increase the probability that conscious beings survive. This is good because the surviving conscious beings will have the chance to take actions to stay alive, make progress, and explore other possible rules.
Ignoring or violating it reduces the chance that conscious beings survive. The extinction of all conscious beings is bad because it stops the exploration of other possibilities. It would then be left to chance for nature to restart the evolution of conscious beings from the beginning, which would be an obvious waste of time, one of the most important resources for any conscious being.
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant would eventually lead to the extinction of macroscopic animals, including humans; hence it is morally worse than directly killing one individual human being.
Getting pleasure and happiness while avoiding pain and suffering are instrumental goals. Evaluation of universal moral rules should be based on terminal goals.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #205 on: 23/09/2019 09:38:02 »
IMO, universal moral rules are tools intended to increase the chance of achieving the universal terminal goal, which is to prevent the extinction of conscious beings. If we use lessons learned from the process safety management concept, we can see that moral rules are analogous to administrative controls.
https://www.ownerteamconsult.com/effective-process-safety-management/
Quote
The three strategies used during detailed design to prevent, control or mitigate hazards are:

Passive strategy: Minimise the hazard via process and equipment design features that reduce hazard frequency or consequence;
Active strategy: Engineering controls and process automation to detect and correct process deviations; and
Procedural strategy: Administrative controls to prevent incidents or minimise the effects of an incident.

The passive strategy uses fundamental natural laws (physical/chemical) to achieve the goal. The basic rules are simple; they are obeyed even by non-conscious things. Some examples are substance selection, sizing of equipment, and intrinsically safe equipment. But designing equipment, vessels, and pipelines to withstand every possible scenario at all times is costly, and often not economically feasible.
Engineering controls utilize engineering/artificial rules, which are derived from natural laws and optimized to achieve a specific target effectively and efficiently. Some examples are rupture discs, pressure relief valves, process interlocks, and PID controls. The rules are more complex than in the passive strategy, due to conditional activation: if a certain condition is met, do something. For example, if the system pressure exceeds some threshold (below the design pressure of the equipment), open the relief valve or stop the feed pump. We can say that the agents following these rules are somewhat conscious, because they are responsive to their environment. Their complexity varies from simple on-off states to PID controllers, multivariable control, fuzzy logic, and artificial neural networks.
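As a minimal sketch of this conditional activation (written in Python purely for illustration; the trip threshold and variable names are invented, and real interlocks run on dedicated safety hardware rather than scripts):

# Hypothetical high-pressure interlock, for illustration only.
HIGH_PRESSURE_TRIP = 9.0  # barg; assumed to sit below the vessel design pressure

def interlock_scan(pressure_barg, relief_valve_open, feed_pump_running):
    # One scan of the rule: if the condition is met, do something.
    if pressure_barg > HIGH_PRESSURE_TRIP:
        relief_valve_open = True    # open the relief valve
        feed_pump_running = False   # stop the feed pump
    return relief_valve_open, feed_pump_running

# An over-pressure reading triggers both protective actions:
print(interlock_scan(9.7, relief_valve_open=False, feed_pump_running=True))
# -> (True, False)
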
In the old days, when engineering controls were not sophisticated enough, higher-complexity tasks had to be done by humans, such as executing the sequence in a recipe, flying airplanes, or driving cars. Because humans are so complex, they are prone to mistakes. To reduce the chance of human error, administrative controls are needed: rules to be obeyed by humans as conscious agents.
Due to technological advancement, the complexity of engineering controls has increased to the point of exceeding the performance of human operators in some areas.
Soon enough, they may outperform humans in jobs closely related to the problems of morality, such as lawyers, juries, even judges. They might someday outperform lawmakers, meaning they could produce a set of rules that serves an intention without violating or contradicting more fundamental rules, using fewer resources (e.g. money, energy, time). But to do that we would need to define those fundamental rules, which is what I try to explore here.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #206 on: 23/09/2019 11:12:48 »
I think I have overlooked this.
Quote from: Halc on 18/09/2019 13:29:56
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us, because we're not as conscious as they are. There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement. If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Any aliens with the ability to perform interstellar travel are very unlikely to have developed the required technology as individuals. They are most likely the product of a society with its own past struggles, competitions, and internal conflicts. They might have experienced devastating wars, famines, and natural disasters. They might also have developed weapons of mass destruction, such as nuclear and chemical weapons. They must have survived all of those, otherwise they wouldn't be here in the first place. They must have developed their own moral rules, and might even have figured out universal morality by expanding the scope and applicability of those rules. They might have their own version of PETA or vegan activists, and genetically modified bacteria to produce their food, or, even better, 3D-printed food made with nanotechnology. They might have modified their own bodies so that they don't depend on external biological systems just to survive.
Harvesting conscious beings for food is a grossly inefficient process, hence very unlikely to be done by highly intelligent organisms, not to mention the risk of resistance and conflict that may harm them.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #207 on: 24/09/2019 07:30:44 »
Quote from: hamdani yusuf on 22/09/2019 00:12:07
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant would eventually lead to the extinction of macroscopic animals, including humans; hence it is morally worse than directly killing one individual human being.
Here is another example emphasizing the need to evaluate morality from the eventual result rather than the direct consequences. Most of us agree that the sun is not a conscious being, yet it would be immoral to turn the sun into a black hole just for fun, knowing that this action would lead to the death of every currently known conscious being.

We can also learn about decision making from chess. Suppose you are in the middle of a chess game and only two legal moves are available: the first sacrifices your pawn, the other sacrifices your queen. In most cases, losing a queen puts you in a more disadvantaged position than losing a pawn. But if you are a really good player, and you can calculate accurately that sacrificing the queen will eventually give you victory, then it is regarded as a good move. On the other hand, if the end of the game is not yet clear to you, then sacrificing the pawn is the better move.
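As a toy sketch of that idea (the positions and scores are invented, and a real engine would alternate maximising and minimising moves for the two players; that is omitted here by assuming the queen-sacrifice line is forced):

# Judge a move by the eventual result it leads to, not its immediate cost.
def eventual_value(node):
    if isinstance(node, (int, float)):  # a finished line: the eventual result
        return node
    return max(eventual_value(child) for child in node.values())

# Hypothetical middle game: the queen sacrifice forces a win, while the
# pawn sacrifice merely keeps the game roughly level.
position = {
    "sacrifice queen": {"forced mate": 100},
    "sacrifice pawn": {"equal endgame": 0, "slightly worse endgame": -1},
}
best_move = max(position, key=lambda m: eventual_value(position[m]))
print(best_move)  # -> 'sacrifice queen'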


hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #208 on: 24/09/2019 10:08:44 »
Here is an example to emphasize that a moral decision is sometimes based on efficiency. We will use some variations of the trolley problem with the following assumptions:
- the case is evaluated retrospectively by a perfect artificial intelligence, so there is no room for uncertainty of cause and effect regarding the actions or inactions;
- a train is moving at high speed on the left track;
- a lever can be used to switch the train to the right track;
- if the train goes down the left track, every person on the left track will be killed; likewise for the right track;
- all the people involved are average persons who contribute positively to society, with no preference for any one person over the others.
The table below shows the possible combinations of the number of persons on the left and right tracks, ranging from 0 to 5.
The left column shows how many persons are on the left track; the top row shows how many persons are on the right track.
\   0   1   2   3   4   5
0   o   o   o   o   o   o
1   x   o   o   o   o   o
2   x   ?   o   o   o   o
3   x   ?   ?   o   o   o
4   x   ?   ?   ?   o   o
5   x   ?   ?   ?   ?   o

When there are 0 persons on the left track, a moral person must leave the lever as it is, no matter how many persons are on the right track. This is indicated by the letter o in every cell of the row labelled 0.
When there are 0 persons on the right track, a moral person must switch the lever if there is at least one person on the left track. This is indicated by the letter x in every cell of the column labelled 0, except where the left track also holds 0 persons.
When there are persons on both tracks and more on the right track than the left, a moral person must leave the lever as it is to reduce casualties. This is indicated by the letter o in every cell above and to the right of the diagonal.
When there are the same number of persons on both tracks, a moral person should leave the lever as it is, to conserve resources (the energy to switch the track) and to avoid being accused of playing god. This is indicated by the letter o in every diagonal cell.
When there are persons on both tracks and more on the left track, the answer may vary (based on previous studies). If you choose to do nothing in these situations, it effectively shows how much you value the act of switching the lever, measured as the difference between the number of persons on the left and right tracks. This is indicated by question marks in every cell below and to the left of the diagonal.
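The table follows mechanically from these rules; here is a minimal sketch that regenerates it (an illustration of the rules as stated, not a claim about the right answers in the '?' cells):

# Regenerate the decision table: 'o' = leave the lever, 'x' = switch, '?' = disputed.
def decision(left, right):
    if left == 0:
        return "o"        # nobody on the left track: never switch
    if right == 0:
        return "x"        # right track is empty: switch
    if left <= right:
        return "o"        # equal or fewer on the left: leave the lever
    return "?"            # more on the left: answers vary between people

print("\\  " + "  ".join(str(r) for r in range(6)))
for left in range(6):
    print(str(left) + "  " + "  ".join(decision(left, r) for r in range(6)))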

Halc
Re: Is there a universal moral standard?
« Reply #209 on: 24/09/2019 20:02:42 »
Quote from: David Cooper on 19/09/2019 21:21:13
A rock, tree or self-driving car is not a sentience.
There is a lot to discuss in your long post, but this one stood out.  Why is a flea a sentience but an AI car not one?  Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea.  The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.

OK, the car is not a life form, but the alien might also not be, and still be a higher sentience.  Maybe, depending on how one defines 'life' and 'sentience'.  I had composed more of a reply, but we'll be speaking past each other without some common terms defined.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #210 on: 24/09/2019 23:58:33 »
In real life, many decisions must be made with incomplete information.  This is where disputes often arise due to uncertainty. Some scientific tools to handle this are probability theory and logical induction.
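As a minimal sketch of the probability-based evaluation (the actions, probabilities, and outcome values are all invented for illustration):

# Compare actions by expected outcome when the eventual result is uncertain.
def expected_value(outcomes):
    # outcomes: list of (probability, value) pairs whose probabilities sum to 1
    return sum(p * v for p, v in outcomes)

# Hypothetical choice: a risky intervention versus doing nothing.
act = [(0.9, 10), (0.1, -100)]   # probably saves a life, small chance of harm
wait = [(1.0, -50)]              # certain deterioration
print("act" if expected_value(act) > expected_value(wait) else "wait")
# -> 'act'  (expected values: -1.0 versus -50.0)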

Harryobr
Re: Is there a universal moral standard?
« Reply #211 on: 25/09/2019 11:53:46 »
They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings...

Harryobr
Re: Is there a universal moral standard?
« Reply #212 on: 25/09/2019 12:00:44 »
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings.

David Cooper
Re: Is there a universal moral standard?
« Reply #213 on: 25/09/2019 19:27:04 »
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?  Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea.  The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.

First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might for all we know be going on in the universe wherever there is matter.

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace back the assertions about feelings in the brain to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.

Let's just assume though that in humans there really is sentience there. We can assume that it is also present in other species because there's no reason why it should suddenly appear just for us to do the same job as needs to be done in other animals. Sentience will be in all animals down to a very simple level. It will most likely be in most creatures that have a brain and a response to damage with any kind of response that makes it look as if it might be in pain. Worms (the bigger ones) almost certainly have it, and I expect that flies have it too. A flea may be at the extreme simplicity end of things, but it may still have feelings. If intelligent aliens also report the existence of sentience, then a wide range of simpler species related to them will doubtless have it too. A need to be able to enjoy things and to suffer does not magically need to emerge just because the brain has become a general intelligence capable of turning itself to any task (in the way that only humans can on our planet). If sentience isn't needed in simple creatures with a reaction that looks as if pain is involved, then it isn't needed in more complex creatures either and there should be no evolutionary pressure on sentience to appear to do something entirely superfluous.

If the brain is really measuring sentience, it is measuring the sentience of something in the brain. When a person feels pain in the hand of an arm which was amputated long ago, that shows that they are not feeling the pain in the hand, but in the head. If I stamp on your foot, you may feel pain, but there may be no pain experienced by anything in your foot that wouldn't be felt by a tennis ball being whacked by Nadal with a racquet. There may be all manner of feelings going on in quintillions of sentient things inside that person, but that is ignored by the brain which only focuses on one sentient thing somewhere inside the brain which is linked in to ideas that the brain is processing.

A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing. If such a machine generates claims that it is sentient and that it's feeling pain, excitement, boredom or that it feels the greenness of green, then it has been programmed to tell lies. That machine could potentially calculate morality better than any human, but that doesn't make it in any way sentient. If you hit it with a hammer and it says "Ouch!", it is simply following a rule that it should say "Ouch!" if something hits it. You can write a simple program to make a computer do this when a key is typed, but there is no feeling involved:

if key_input == "p":    # a scripted rule, not a feeling
    print("Ouch!")

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #214 on: 26/09/2019 09:57:24 »
Quote from: Harryobr on 25/09/2019 11:53:46
They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings...
Welcome to our discussion.
 
Quote from: Harryobr on 25/09/2019 12:00:44
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings.
Efforts to discover a universal goal can use a top-down or a bottom-up approach. Your statement above seems to lean more toward the bottom-up approach, similar to my original attempts in another thread: https://www.thenakedscientists.com/forum/index.php?topic=71347.0

This thread was meant to use the top-down approach, hence I started with definitions and then tried to answer the basic/fundamental questions (what, when, where, who, why, how) regarding universal moral rules. Here is an example.
Quote from: hamdani yusuf on 16/11/2018 12:41:52
To answer why keeping conscious beings in existence is a fundamental moral rule, we can use the method called reductio ad absurdum on its alternative.
Imagine a rule that actively seeks to destroy conscious beings. It is basically a meme that self-destructs by destroying its own medium. And conscious beings that don't follow the rule of actively keeping themselves (or their copies) in existence will likely be outcompeted by those who do, or be struck by random events and cease to exist.
I'll try to summarize the discussion here as a more deductive line of reasoning and then compile it in a Euclidean style of writing.

Halc
Re: Is there a universal moral standard?
« Reply #215 on: 27/09/2019 15:40:50 »
Quote from: David Cooper on 25/09/2019 19:27:04
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?
First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might for all we know be going on in the universe wherever there is matter.

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace back the assertions about feelings in the brain to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.
Are you arguing that rock or car protons are different from the ones in fleas? If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.

As for all these comments concerning suffering, you act like it is a bad thing.  If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason.  It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong.  I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).

Quote
It will most likely be in most creatures that have a brain and a response to damage with any kind of response that makes it look as if it might be in pain.
So you want it to writhe in a familiar way in response to harm. I agree that the self-driving car does not writhe in a familiar way. I watched a damaged fly, and it seemed more intent on repairing itself than on gestures of agony.
Thus it is not wrong for an alien to injure us since we don't react to the injury in a way that is familiar to them.
The rules only apply to things that are 'sufficiently just like me'.

Quote
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
That's just an assertion.  How do you know this?  Because it doesn't writhe in a familiar way when you hit it with a hammer?  You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.

Interestingly, in both cases (the computer and a human), it is not the physical thing that holds moral responsibility, but the information that does.  Hence if my computer contracts a virus that causes it to upload my password to a malicious site, I act to eradicate that information from the computer, and not to take action against the computer itself.
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved, and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.

Quote
If such a machine generates claims that it is sentient and that it's feeling pain
A rock can do that.  I just need a sharpie. How does a person demonstrate his claim of sentience (a thing you've yet to define)?  A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.

Quote
or that it feels the greenness of green, then it has been programmed to tell lies.
How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?

You seem to define a computer to be not sentient because it does a poor job of mimicking a person. By that standard, I'm not as sentient as a squirrel, because I've yet to convince one that I am one of their own kind. I fail the squirrel Turing test. It can be done with a duck. I apparently pass the duck Turing test.

David Cooper
Re: Is there a universal moral standard?
« Reply #216 on: 27/09/2019 18:28:02 »
Quote from: Halc on 27/09/2019 15:40:50
Are you arguing that rock or car protons are different from the ones in fleas? If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.

If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible. If you're looking for sentience, it has to be in something fundamental and not something of no substance that emerges by magic out of complexity.

Quote
As for all these comments concerning suffering, you act like it is a bad thing.  If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason.  It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong.  I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).

Suffering has a use: it drives you to try to avoid greater damage. Where it isn't so great is when people are forced to suffer by others. Torture is universally recognised as immoral.

Quote
Thus it is not wrong for an alien to injure us since we don't react to the injury in a way that is familiar to them.
The rules only apply to things that are 'sufficiently just like me'.

Then you think it's moral for aliens to torture people?

Quote
Quote
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
That's just an assertion.  How do you know this?  Because it doesn't writhe in a familiar way when you hit it with a hammer?  You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.

All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony. The claims generated by an information system have no connection to the sentient state of the material of the machine.

It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.

Quote
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved, and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.

Not quite, but it's not far wrong. The sentience is not to blame because it is not in control: there is no such thing as free will. Both the people in that example are equally dangerous and need to be prevented from doing harm. In the future, we'll be able to make all such people wear devices that can disable them whenever they try to do seriously immoral things. We will also want to do gene editing to make sure that all the vicious rape and pillage genes are not passed on to future generations.

Quote
Quote
If such a machine generates claims that it is sentient and that it's feeling pain
A rock can do that.  I just need a sharpie.

How does a rock do that, and what's a sharpie?

Quote
How does a person demonstrate his claim of sentience (a thing you've yet to define)?

A person can't demonstrate it. All he can do is assert it and hope that others will believe it because they are sentient too.

Quote
A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.

Correct. Sentience is not needed by something that makes moral decisions.

Quote
Quote
or that it feels the greenness of green, then it has been programmed to tell lies.
How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?

If the alien isn't sentient, you could have a very hard time convincing the alien that there is such a thing as sentience. However, it might decide to study you to find out why you believe yourself to be sentient, so it would scan your brain and model it, then it would look to see how your claims of sentience are generated and what evidence they're based on. It may then find that they are all fictions, or it may uncover the mechanism and discover that sentience is real.

Alternatively, if the alien is sentient, it will assume that you are too on the basis that you wouldn't have come up with the idea of sentience otherwise, and it would know that you are to be protected by the universal rule of morality.

Quote
You seem to define a computer to be not sentient because it does a poor job of mimicking a person.  By that standard, I'm not as sentient as a squirrel because I've yet to convince one that I am of of their own kind.  I fail the squirrel Turning test.  It can be done with a duck.  I apparently pass the duck Turning test.

Not at all. I say it isn't sentient because sentience has no connection to the information processing system of the computer which can only generate fake claims about sentience.

Halc
Re: Is there a universal moral standard?
« Reply #217 on: 28/09/2019 01:54:30 »
Quote from: David Cooper on 27/09/2019 18:28:02
If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible.
By reductio ad absurdum, that indeed implies that a proton can suffer, and only because at least one of its quarks isn't contented. I see no way to relieve the suffering of a quark, since I've no idea what needs it has that aren't being met.
A rock is made of the same particles, and you say it isn't capable of suffering, so maybe all protons want to be part of rocks, dirt, and computer objects, and hence the universal morality is to quickly kill every Earth life form anywhere ASAP.
Since I don't buy into any definition of suffering that would support protons being in such a state, I see nowhere to go from there.



Quote
Torture is universally recognised as immoral.
It is not. I see nothing in the universe that recognizes any moral rule at all. Not saying there isn't one. That said, there are human cultures that don't find torture immoral. Most are satisfied if they get the benefit of the torture without direct evidence that it's going on. It's immoral to kill your neighbor, but not immoral to hire a hitman to do it, so long as you don't watch.

Quote
Then you think it's moral for aliens to torture people?
A moral code is not likely to assert that one is obligated to torture something, but that's the way you word the question.  So no.  I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.

Quote
All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony.
Maybe your protons also are in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.

Quote
The claims generated by an information system have no connection to the sentient state of the material of the machine.
Ah, there's the distinction I asked for.  You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't.  How do you justify this claim, and how do you know that the protons are suffering because there's say too much pressure on them?  The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort.  That's evidence that it's not the protons that are suffering.

Quote
It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.
Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input.  If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker.  It fails the Turing test.

If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours). 

Quote
Quote
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved, and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.
The sentience is not to blame because it is not in control: there is no such thing as free will.
Ah, the sentience definition comes out. As you've been reluctant to say, you're working with a dualistic model, and I'm not. My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will). Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes. The agent is to blame, not the collection of matter.

Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is, what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not?  The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause.  Show me such a sensory mechanism in any sentient thing then.
In fact, there is none since a living thing is engineered entirely wrong for an avatar setup like that.  If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location.  Nerves would be superfluous.  So would senses since the immaterial entity could measure the environment directly, as is demonstrably done by out-of-body/near-death experiences.

Anyway, I had not intended this to be a debate on philosophy of mind.  Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means.  Morals don't come from the universe at all.  They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.

Quote
Quote
A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.
Correct. Sentience is not needed by something that makes moral decisions.
You brought up sentience in a discussion of universal morals.  If it isn't needed, then why bring it up?

David Cooper
Re: Is there a universal moral standard?
« Reply #218 on: 28/09/2019 22:04:38 »
Quote from: Halc on 28/09/2019 01:54:30
A rock is made of the same particles, and you say it isn't capable of suffering...

I didn't say it isn't capable of suffering. The point I was making is that it could be suffering in that all the material it's made of could be suffering for all we know. We have no way to tell. We can melt down rocks though and make silicon and metals out of them, then we can build computers out of those materials, and all the material of the computers that we build may be suffering in the same way, but again we can't tell whether it's suffering or not. We can then run programs on that computer which we can program to make assertions about suffering which are not based on any measure as to whether the material is suffering or not, so the assertions produced by such programs are baseless. If we write code to trigger claims of pain when the "A" key on the keyboard is pressed and claims of pleasure when the "B" key is pressed, you are not actually generating any pain or pleasure in the machine by pressing either of those keys.

In humans though, we have a kind of computer making claims about sentience which believes those assertions to be true. We can expose a human to songs by the Spice Girls or Peter Andre to make them generate claims about experiencing pain or pleasure, and they will believe that they are genuinely experiencing pain. In the unlikely event that they believe they are genuinely experiencing pleasure, all the material they're made of may actually be experiencing extreme suffering in the same way as a rock might be, except for one sentient thing inside them which is actually experiencing pleasure and which is being measured as doing so in some way by some part of the information system that is generating the claims about sentience.

There is nothing we can usefully do for the sentiences that might be in rocks unless we can find a way to measure feelings in them. If we found that they were all suffering greatly, maybe we could ease their suffering by throwing everything we can into black holes, but there's no guarantee that that would make any difference to them. It isn't our immediate concern though, not least because it's unlikely that practically everything that exists should be suffering all the time. It's much more likely that if sentience is real and our claims about being sentient are true, we're doing something special with a sentient thing in the brain and we're systematically inducing feelings in it which are much more intense than the ones that normally occur in things like rocks.

Quote
Quote
Torture is universally recognised as immoral.
It is not.

In some cases it isn't: it wouldn't be wrong for mass-murdering dictators to be tortured to death slowly to serve as a warning to others, but I was thinking about cases where people are torturing innocents. There may be some backward societies which don't see that as wrong, but a bit of torture aimed at them would soon teach them the error of their approach and they would then understand that it's wrong.

Quote
I see nothing in the universe that recognizes any moral rule at all.

I see people who do recognise moral rules. They get them wrong in places due to poor thinking, but they've got a lot of it right.

Quote
I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.

By the rule(s) I've provided, it would very clearly be immoral for them to torture us. The harm outweighs the benefit.

Quote
Maybe your protons also are in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.

That's the whole point. What you do to the person or machine likely provides no gain for those sentient things. The only sentient thing that we're able to change things for usefully is the one that's being measured by the information system that's generating claims about being sentient, and that's the one that's most likely having unusually extreme feelings generated in it which go way beyond anything felt by the sentiences that might be in rocks.

Quote
You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't.  How do you justify this claim, and how do you know that the protons are suffering because there's say too much pressure on them?  The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort.  That's evidence that it's not the protons that are suffering.

You have no way of measuring what your protons are feeling, so you don't know if pressure is doing anything to affect how they feel. (By the way, when I use the word particles, I'm referring to something a lot more fundamental than protons, so I'd rather use the word particles when discussing this, even if that actually means something that we don't normally refer to as particles.) With computers there is no sentience wired into any system that induces feelings into it associated with the inputs which deliver signals that might cause pain to be felt in the system, and there's nothing in the hardware to read those feelings back with. If the feelings that people report having are real, there's something different happening in the brain which does lead to feelings being induced in a sentience and then being read back by the information system which generates assertions about those feelings being felt. Science has no proposed mechanism by which this can occur, but for the sake of discussions of morality, we can simply assume that it happens in brains and that it happens in some part of the hardware which we haven't yet understood. With computers though, there is no such facility: there is no way to induce feelings in any sentience in the machine other than by luck (and therefore no way to know if it's pleasant or unpleasant), and there is no way to read the feelings either, so the machine cannot make any informed claims about feelings: it can only pretend to have feelings.

Quote
Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input.  If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker.  It fails the Turing test.

The point I want you to take from the Chinese Room experiment is that there is nowhere in which feelings are involved in the computations where they're relevant to the output. The person operating the machine may be happy, bored or deeply depressed, but so long as he carries out the task correctly, the program will function the same way in all three cases. The Chinese Room processor is Turing-complete, capable of running full AGI. It has no way of handling feelings, and nor do any of the computers we know how to build.

Quote
If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours).

Passing the Turing Test has nothing to do with a machine being sentient, but merely intelligent. Anyone who claims otherwise has not understood what the Turing Test is about.

Quote
Quote
The sentience is not to blame because it is not in control: there is no such thing as free will.
Ah, the sentience definition comes out. As you've been reluctant to say, you're working with a dualistic model, and I'm not. My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will). Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes. The agent is to blame, not the collection of matter.

There is no reluctance on my part. If you're working with sentience, then you are necessarily working with a "dualistic" model. If you aren't using that kind of model, you have no room for sentience other than as a fiction. In a model where it is a mere fiction, it is impossible to cause suffering because there is no such thing. In neither model is there room for free will.

Quote
Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is, what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not?  The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause.  Show me such a sensory mechanism in any sentient thing then.
In fact, there is none since a living thing is engineered entirely wrong for an avatar setup like that.  If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location.  Nerves would be superfluous.  So would senses since the immaterial entity could measure the environment directly, as is demonstrably done by out-of-body/near-death experiences.

Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality. You can go and torture anyone you like and then defend yourself in court by showing that sentience makes no sense. You will then be locked up for the rest of your life to protect other people from you, and if there's no such thing as sentience, you won't be harmed by that. If you don't believe in sentience, you shouldn't have any problem with being locked up for life even without committing a crime, and you should be able to copy the philosopher who threw himself into a volcano to demonstrate his belief in nihilism.

Morality is not something that nihilists care about. It is there for everyone who believes that feelings are or might somehow be real. When discussing morality, we take it for granted that feelings are real. If we work on the basis that they aren't real, then morality is redundant.

Quote
Anyway, I had not intended this to be a debate on philosophy of mind.  Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means.  Morals don't come from the universe at all.  They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.

If you want to avoid the diversion, then the trick is to play by the rules by starting with the assumption that feelings are real and that something feels them. When we do that and turn our attention to computers, we find no mechanism to induce or read feelings in a sentience. In the brain, we assume that the hardware somehow is able to do both of those things. Science makes that look impossible too, but our job when discussing how morality works is to assume that it is possible; that there is some hardware trick that somehow enables it.

Quote
You brought up sentience in a discussion of universal morals.  If it isn't needed, then why bring it up?

Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.

hamdani yusuf (OP)
Re: Is there a universal moral standard?
« Reply #219 on: 30/09/2019 00:46:37 »
Quote from: David Cooper on 28/09/2019 22:04:38
Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality.


Quote from: David Cooper on 28/09/2019 22:04:38
Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.
That requires sentience to be defined objectively.
