Quote from: hamdani yusuf on 19/09/2019 03:26:48
How do you determine which priority is the higher one?

Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.

Quote from: hamdani yusuf on 19/09/2019 04:14:35
Performing surgery on the child is morally better than letting them die.

While I agree, how do you know this is true? I can argue that it is better to let the kid die if there is a higher goal of breeding humans resistant to appendix infections, as the Nepalese have done. I can think of other goals as well that lead to that decision. There seems to be no guidance at all from some universal moral code; I don't think there is one, of course.

I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine. My wife would have survived only until the birth of our first child. The human race is quite a wreck since we no longer allow defects to be eliminated, and we're not nearly as 'finished' as most species that have had time to perfect themselves to their niche.
The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been history's losers, never something that your own people are doing right now, like, say, employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid, since so much is produced via methods that a typical person would find inhumane, and hard to see, since you're paying somebody else to do (and conceal from you) the actual act. At least that is an example of something done by the winner.
You also need to decide whether consciousness is relevant in a continuous or a binary way. If it's a continuous spectrum, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult. If it's a threshold thing (do what you want to anything below the threshold, but not above it), then it needs a definition. A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.

For instance, a human embryo obviously has far less consciousness than a pig does, so eating pork is more wrong than abortion by this level-of-consciousness argument, be it a spectrum thing or a binary threshold. Similarly, it's OK to kill a person under anesthesia because they're not conscious at the time and will not suffer for it. These are some of the reasons the whole 'consciousness' argument seems to fall apart.
But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.
Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.
The three strategies used during detailed design to prevent, control or mitigate hazards are:
- Passive strategy: minimise the hazard via process and equipment design features that reduce hazard frequency or consequence;
- Active strategy: engineering controls and process automation to detect and correct process deviations; and
- Procedural strategy: administrative controls to prevent incidents or minimise the effects of an incident.
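For illustration only, here is a minimal Python sketch of that classification; the safeguard examples and all names are my own assumptions, not taken from the post or any standard:

```python
# Hypothetical sketch: classify example safeguards by the three strategies.
# The safeguard examples below are invented for illustration.
from enum import Enum

class Strategy(Enum):
    PASSIVE = "design features that reduce hazard frequency or consequence"
    ACTIVE = "controls and automation that detect and correct deviations"
    PROCEDURAL = "administrative controls that prevent or mitigate incidents"

safeguards = {
    "dike around a storage tank": Strategy.PASSIVE,
    "high-pressure trip on a reactor": Strategy.ACTIVE,
    "operator checklist for startup": Strategy.PROCEDURAL,
}

for name, strategy in safeguards.items():
    print(f"{name}: {strategy.name} ({strategy.value})")
```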
So if aliens with higher consciousness (as you put it) came down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us, because we're not as conscious as they are. There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement. If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Evaluation of a moral action is based on its eventual result, not just its immediate consequence. For example, killing every plant would eventually lead to the extinction of macroscopic animals, including humans. Hence it is morally worse than directly killing one individual human being.
A rock, tree or self-driving car is not a sentience.
Why is a flea a sentience but an AI car not one? Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea. The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.
Being a meme, the universal moral standard shares space in the memetic pool with other memes. Memes will have a higher chance of surviving if they optimize the distribution of resources to preserve conscious beings.
To see why preserving the existence of conscious beings is a fundamental moral rule, we can apply reductio ad absurdum to its alternative. Imagine a rule that actively seeks to destroy conscious beings: it's basically a meme that self-destructs by destroying its own medium. Likewise, conscious beings that don't follow the rule to actively preserve their existence (or their copies) will likely be outcompeted by those that do, or be struck by random events and cease to exist.
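A toy way to see this reductio (a sketch entirely of my own construction, with invented probabilities, not anything from the post): simulate two memes, one that preserves its carriers and one that destroys them, and watch which survives.

```python
# Toy simulation: a meme that destroys its own carriers removes itself
# from the memetic pool; a preserving meme takes over. Numbers are invented.
import random

random.seed(1)
hosts = ["preserve"] * 50 + ["destroy"] * 50  # each host carries one meme

for generation in range(20):
    next_gen = []
    for meme in hosts:
        if meme == "destroy" and random.random() < 0.5:
            continue  # the meme destroyed its own medium; no copies survive
        # surviving hosts sometimes spread the meme to one new host
        next_gen.extend([meme, meme] if random.random() < 0.5 else [meme])
    random.shuffle(next_gen)
    hosts = next_gen[:200]  # finite carrying capacity

print(hosts.count("preserve"), hosts.count("destroy"))
# The self-destructive meme reliably dies out, which is the reductio above.
```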
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?

First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole, as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might, for all we know, be going on in the universe wherever there is matter.

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace the brain's assertions about feelings back to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?
It will most likely be found in most creatures that have a brain and react to damage in a way that makes it look as if they might be in pain.
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
If such a machine generates claims that it is sentient and that it's feeling pain, or that it feels the greenness of green, then it has been programmed to tell lies.
Are you arguing that rock or car protons are different from the ones in fleas? If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.
As for all these comments concerning suffering, you act like it is a bad thing. If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason. It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong. I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).
Thus it is not wrong for an alien to injure us, since we don't react to the injury in a way that is familiar to them. The rules only apply to things that are 'sufficiently just like me'.
Quote
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.

That's just an assertion. How do you know this? Because it doesn't writhe in a familiar way when you hit it with a hammer? You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved, and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.
Quote
If such a machine generates claims that it is sentient and that it's feeling pain

A rock can do that. I just need a sharpie.
How does a person demonstrate his claim of sentience (a thing you've yet to define)?
A computer has already demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for a thing's actions to be right or wrong.
Quote
or that it feels the greenness of green, then it has been programmed to tell lies.

How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?
You seem to define a computer as not sentient because it does a poor job of mimicking a person. By that standard, I'm not as sentient as a squirrel, because I've yet to convince one that I am one of their own kind. I fail the squirrel Turing test. It can be done with a duck; I apparently pass the duck Turing test.
If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible.
Torture is universally recognised as immoral.
Then you think it's moral for aliens to torture people?
All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony.
The claims generated by an information system have no connection to the sentient state of the material of the machine.
It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.
Quote
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved, and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.

The sentience is not to blame because it is not in control: there is no such thing as free will.
Quote
A computer has already demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for a thing's actions to be right or wrong.

Correct. Sentience is not needed by something that makes moral decisions.
A rock is made of the same particles, and you say it isn't capable of suffering...
Quote
Torture is universally recognised as immoral.

It is not.
I see nothing in the universe that recognizes any moral rule at all.
I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.
Maybe your protons are also in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.
You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't. How do you justify this claim, and how do you know that the protons are suffering because there's, say, too much pressure on them? The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort. That's evidence that it's not the protons that are suffering.
The Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles. Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input. If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker. It fails the Turing test, as the sketch below illustrates.
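A minimal sketch of my own (the table entries are invented, not anything from the thread) of such a lookup-table room in Python; because the table is finite and stateless, any unlisted input, or any question that depends on conversation history, exposes it:

```python
# Lookup-table "Chinese Room": a fixed map from inputs to canned responses.
RESPONSES = {
    "你好": "你好！",                  # "Hello" -> "Hello!"
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def room_reply(message: str) -> str:
    """Return the canned response, or a non-answer for anything not in the table."""
    return RESPONSES.get(message, "...")

# The table is finite and stateless, so follow-up questions that depend on
# history ("What did I just ask you?") cannot be answered correctly.
print(room_reply("你好"))             # 你好！
print(room_reply("我刚才说了什么？"))   # ... (no entry: the illusion breaks)
```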
If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours).
Quote
The sentience is not to blame because it is not in control: there is no such thing as free will.

Ah. The sentience definition comes out. As you've been reluctant to say, you're working with a dualistic model, and I'm not. My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will). Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes. The agent is to blame, not the collection of matter.
Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is: what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not? The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause. So show me such a sensory mechanism in any sentient thing.

In fact, there is none, since a living thing is engineered entirely wrong for an avatar setup like that. If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location. Nerves would be superfluous. So would senses, since the immaterial entity could measure the environment directly, as is demonstrably done in out-of-body/near-death experiences.
Anyway, I had not intended this to be a debate on philosophy of mind. Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means. Morals don't come from the universe at all. They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.
You brought up sentience in a discussion of universal morals. If it isn't needed, then why bring it up?
Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality.
Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.