We'll be able to correct defects by gene editing in the future, so there's no need for any approach like eugenics to improve the species.
As for a universal moral code, I've already provided it several times in this thread without anyone appearing to notice. Morality is mathematics applied to harm management and it's all about calculating the harm:benefit balance.
It only applies to sentient things, but it applies to all of them, fleas and intelligent aliens all included.
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
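The reduction described above can be expressed as a toy calculation. This is only a hypothetical sketch of the idea, not anything proposed in the thread: every name and number below is an invented illustration. It treats all sentient participants as one individual living all those lives in turn, i.e. it sums harm and benefit across everyone and judges an action by the overall balance.

```python
# Toy sketch of the "harm:benefit balance" idea described above.
# All names, weights and thresholds here are hypothetical illustrations.

def balance(participants):
    """Reduce a multi-participant system to a single-participant one by
    summing benefit and harm over every sentient participant, as if one
    individual lived all of those lives in turn."""
    total_benefit = sum(p["benefit"] for p in participants)
    total_harm = sum(p["harm"] for p in participants)
    return total_benefit - total_harm

def is_morally_acceptable(participants):
    # Under this model an action is acceptable when the summed
    # benefit outweighs the summed harm.
    return balance(participants) >= 0

# Hypothetical scenario: a life-saving surgery.
scenario = [
    {"name": "patient", "benefit": 10.0, "harm": 2.0},
    {"name": "surgeon", "benefit": 1.0, "harm": 0.5},
]
print(balance(scenario))                # 8.5
print(is_morally_acceptable(scenario))  # True
```

Of course, the hard part the thread goes on to debate is not the arithmetic but where the harm and benefit numbers come from, and whose suffering counts.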
Is there a way to compute harm without it being relative to a peer group? Humans seem to be causing a lot more harm than benefit, with an estimated 80% of the planet's species being wiped out in the Holocene extinction event. Any harm done to a species like that would probably be viewed as a pure benefit by all those other species.
A rock is not considered an agent of choice. A tree might be, but it gets difficult to justify it. How about a self-driving car? It meets the definition of slave. Does a true slave carry any moral responsibility? I almost say no.
Does the species need to consider the harm done to the environment/other species, or only harm done to its own kind? What if it has no concept of 'species' or 'kind', or possibly not even 'individual' or 'agent'?
Quote
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.

I haven't read the entire thread. What has the response to this been? It's a good attempt. It's just that harm seems subjective: what is good for X is not necessarily good for Y, so its measure seems context-dependent.
Yes, by definition, actions with bad consequences are wrong. How is this in any way relevant to the discussion? If a consequence is deemed bad only by some group, then it is wrong only relative to that group. If it is bad, period, then it's universal, but the statement here makes no argument for that case. I'm trying to get the discussion back on track.
Quote from: hamdani yusuf on 19/09/2019 03:26:48
How do you determine which priority is the higher one?

Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.

Quote from: hamdani yusuf on 19/09/2019 04:14:35
Performing surgery on the child is morally better than letting them die.

While I agree, how do you know this is true? I can argue that it is better to let the kid die if there is a higher goal of breeding humans resistant to appendix infections, as the Nepalese have done. I can think of other goals as well that lead to that decision. There seems to be no guidance at all from some universal moral code. I don't think there is one, of course.

I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine. My wife would only have survived until the birth of our first child. The human race is quite a wreck since we no longer allow defects to be eliminated, and we're nowhere near as 'finished' as most species that have had time to perfect themselves to their niche.
How do you determine which priority is the higher one?
Performing surgery on the child is morally better than letting them die.
The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been losers, never something that your people are doing right now, like say employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid since so much is produced via various methods that a typical person would find inhumane, and hard to see since you're paying somebody else to do (and conceal from you) the actual act. At least that is an example of something done by the winner.
You also need to decide whether consciousness is relevant in a continuous or binary way. If continuous, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult. If it's a threshold thing (do what you want to anything below the threshold, but not above it), then the threshold needs a definition. A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.

For instance, a human embryo obviously has far less consciousness than a pig does, so eating pork is more wrong than abortion by this level-of-consciousness argument, whether it's a spectrum thing or a binary threshold. Similarly, it would be OK to kill a person under anesthesia because they're not conscious at the time and will not suffer for it. These are some of the reasons the whole 'consciousness' argument seems to fall apart.
But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.
Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.
The three strategies used during detailed design to prevent, control or mitigate hazards are:
Passive strategy: minimise the hazard via process and equipment design features that reduce hazard frequency or consequence.
Active strategy: engineering controls and process automation to detect and correct process deviations.
Procedural strategy: administrative controls to prevent incidents or minimise the effects of an incident.
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us, because we're not as conscious as they are. There's no shortage of fictional stories that depict this scenario, except that somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement: if they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant can eventually lead to the extinction of macroscopic animals, including humans. Hence it is morally worse than directly killing one individual human being.
A rock, tree or self-driving car is not a sentience.
Why is a flea a sentience but an AI car not one? Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea. The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.
Being a meme, the universal moral standard shares space in the memetic pool with other memes. Memes will have a higher chance of surviving if they optimize the distribution of resources to preserve conscious beings.
To answer why preserving the existence of conscious beings is a fundamental moral rule, we can apply reductio ad absurdum to its alternative. Imagine a rule that actively seeks to destroy conscious beings. It's basically a meme that self-destructs by destroying its own medium. Likewise, conscious beings that don't follow the rule to actively maintain their existence (or their copies) will likely be outcompeted by those who do, or be struck by random events and cease to exist.
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?

First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole, as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might, for all we know, be going on in the universe wherever there is matter.

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system uses when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace the assertions about feelings in the brain back to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.
Quote from: Halc on 24/09/2019 20:02:42
Why is a flea a sentience but an AI car not one?
It will most likely be present in most creatures that have a brain and that respond to damage in a way that makes it look as if they might be in pain.
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
If such a machine generates claims that it is sentient and that it's feeling pain, or that it feels the greenness of green, then it has been programmed to tell lies.