Naked Science Forum

General Discussion & Feedback => Just Chat! => Topic started by: hamdani yusuf on 14/11/2018 06:30:38

Title: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 06:30:38
I consider this topic as a spinoff of my previous subject
https://www.thenakedscientists.com/forum/index.php?topic=71347.0
It is split off because morality itself is quite complex and can generate a discussion too long to be covered there.
Before we start the discussion, it might be useful to have some background information to save time and energy and prevent unnecessary debate.
https://en.wikipedia.org/wiki/Morality
Quote
Morality (from Latin: moralis, lit. 'manner, character, proper behavior') is the differentiation of intentions, decisions and actions between those that are distinguished as proper and those that are improper.[1] Morality can be a body of standards or principles derived from a code of conduct from a particular philosophy, religion or culture, or it can derive from a standard that a person believes should be universal.[2] Morality may also be specifically synonymous with "goodness" or "rightness".

I hope this topic can start a discussion which eventually produces a satisfactory answer to the question.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 06:35:46
I found older topics in this forum discussing morality, such as
https://www.thenakedscientists.com/forum/index.php?topic=21892.msg245282#msg245282
or
https://www.thenakedscientists.com/forum/index.php?topic=17732.msg370985#msg370985
I just don't want to jump in and hijack those topics.

There are also interesting debates between YouTubers arguing about objective morality.

Those videos may represent two positions in moral philosophy:
https://en.wikipedia.org/wiki/Morality#Realism_and_anti-realism

So they are actually discussing ethics:
https://en.wikipedia.org/wiki/Ethics
Quote
Ethics or moral philosophy is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct.[1] The field of ethics, along with aesthetics, concern matters of value, and thus comprise the branch of philosophy called axiology.[2]

Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy also is related to the fields of moral psychology, descriptive ethics, and value theory.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 14/11/2018 08:05:39
Without delving too deeply into the definition of morality or ethics, I think we can usefully approach the subject through "universal". The test is whether any person considered normal by his peers would make the same choice or judgement as any other in a case requiring subjective evaluation.

This immediately leads to a sampling question. "Turn the other cheek" would be considered normal and desirable in some peer groups, whilst "an eye for an eye" might be de rigueur for others. Both strategies have evolutionary validity: think rabbits, which outbreed their predators, and lions, where only the strongest male gets to breed.

Homo sapiens is an odd creature. We breed too slowly to survive as prey, and are too weak to be predators, but a very complex collaboration allows us to farm and hunt all we need. That said, although we can see the value of large-scale collaboration (like bees and ants), it takes a long time to acquire the knowledge and skills needed to participate, so the small "family" unit (including communes and kibbutzim) is a prerequisite of survival.

Thus we grow up with at least two loyalties: to the immediate family that supports us, and to the wider community that supports the family. No problem if we have infinite resources and unlimited choice, but the decisions we make in restricted circumstances are what define our morality, and it is fairly clear from daily accounts of religious wars and magistrates' court proceedings that either there is no universal concept of right and wrong, or that it can be set aside for personal gain.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 08:11:21
To answer the question properly we need to define the boundary of the subject. We need to answer the standard questions: what, where, when, who, why, how.

We can also explore the subject further using thought experiments and their variations, such as the trolley problem.
https://en.wikipedia.org/wiki/Trolley_problem
From those specific cases we may be able to infer a general rule behind the decisions made in them. In my opinion, the trolley problem and its variations ask what priority the decision maker holds, and what factors may influence it.

I found a trolley problem experiment in real life in this video:
Title: Re: Is there a universal moral standard?
Post by: Colin2B on 14/11/2018 09:41:04
From those specific cases we may be able to infer a general rule behind the decisions made in them.
Probably not.
The example quoted by @alancalverd (eye for an eye) shows the problem of trying to decide a universal ethic.
While some might go for the lesser evil, Alan is likely to go for population reduction and set the trolley on the 5.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 12:04:46
Without delving too deeply into the definition of morality or ethics, I think we can usefully approach the subject through "universal". The test is whether any person considered normal by his peers would make the same choice or judgement as any other in a case requiring subjective evaluation.

This immediately leads to a sampling question. "Turn the other cheek" would be considered normal and desirable in some peer groups, whilst "an eye for an eye" might be de rigueur for others. Both strategies have evolutionary validity: think rabbits, which outbreed their predators, and lions, where only the strongest male gets to breed.

Homo sapiens is an odd creature. We breed too slowly to survive as prey, and are too weak to be predators, but a very complex collaboration allows us to farm and hunt all we need. That said, although we can see the value of large-scale collaboration (like bees and ants), it takes a long time to acquire the knowledge and skills needed to participate, so the small "family" unit (including communes and kibbutzim) is a prerequisite of survival.

Thus we grow up with at least two loyalties: to the immediate family that supports us, and to the wider community that supports the family. No problem if we have infinite resources and unlimited choice, but the decisions we make in restricted circumstances are what define our morality, and it is fairly clear from daily accounts of religious wars and magistrates' court proceedings that either there is no universal concept of right and wrong, or that it can be set aside for personal gain.
Thank you for spending your precious time to join this discussion. I realize that there are many theories on morality and ethics, as described in the Wikipedia links, and many of them are incompatible with each other. So far I haven't found a general consensus among modern philosophers on this topic. Maybe that's why we can find those mutual debunking videos from YouTubers who have similar worldviews and usually agree with each other on most other topics.

In this topic I'll try to figure out where the common ground among those theories on morality and ethics lies, where they start to diverge, and why.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 12:10:50
Probably not. The example quoted by @alancalverd (eye for an eye) shows the problem of trying to decide a universal ethic. While some might go for the lesser evil, Alan is likely to go for population reduction and set the trolley on the 5.
We won't find out if we don't even try, will we?
To resolve the dilemma, we need to be clear about the reason behind those decisions, and about the circumstances in which they are acceptable (or not).
Title: Re: Is there a universal moral standard?
Post by: Colin2B on 14/11/2018 12:22:05
We won't find out if we don't even try, will we?
That depends on what you are trying to find out. Your question asks about a universal ethic/morality, but @alancalverd shows that it doesn't exist.
Perhaps you are trying to devise a methodology to determine the ethic/morality that drives a particular individual or group in specific circumstances.
Title: Re: Is there a universal moral standard?
Post by: syhprum on 14/11/2018 12:53:46
I think the oft-quoted saying "an eye for an eye" was meant to limit revenge, not to encourage it.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 12:57:41
I'll try to answer the standard questions, starting with "What". In most theories, morality can be seen as a method to distinguish between right and wrong, good and bad, proper and improper. It follows that to reach universal agreement on morality, we first need to agree on what is meant by the words right and wrong, good and bad, proper and improper. This inevitably leads us to the next question: who decides what's right and wrong, good and bad, proper and improper, and why?
The questions of when and where can be answered more easily. A universal moral standard must be applicable anywhere and anytime.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 13:11:49
That depends on what you are trying to find out. Your question asks about a universal ethic/morality, but @alancalverd shows that it doesn't exist. Perhaps you are trying to devise a methodology to determine the ethic/morality that drives a particular individual or group in specific circumstances.
I think Alan's post only shows that morality can be subjective, limited by space and time (answering the standard questions of who, where and when), but it doesn't show that morality can't be collective. If some moral standards can be shown to be universally applicable, that will answer the question of the topic.
On the contrary, I am trying to find the fundamental rules which drive us to the diverse moral values we have today. As an analogy: in evolutionary biology, random mutation and natural selection drive life toward its diversity of forms.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 13:14:01
I think the oft-quoted saying "an eye for an eye" was meant to limit revenge, not to encourage it.
I think it works both ways. It can also be used to discourage the offence in the first place.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 13:23:39
To answer the question properly we need to define the boundary of the subject. We need to answer the standard questions: what, where, when, who, why, how.

We can also explore the subject further using thought experiments and their variations, such as the trolley problem.
https://en.wikipedia.org/wiki/Trolley_problem
From those specific cases we may be able to infer a general rule behind the decisions made in them. In my opinion, the trolley problem and its variations ask what priority the decision maker holds, and what factors may influence it.

I found a trolley problem experiment in real life in this video:

I find that in the real-life experiment there is something significant that is not considered in the thought experiments: uncertainty about the assertions in the narrative of the situation. Is it true that doing nothing will cause something bad to happen? (In the experiment in the video, not really.) Is it true that our action will give us a more desired (or less undesired) result?
In one variation we might ask: what is the probability that the fat man's body can really stop the train? Or what's the probability that those five men can save themselves?
From this finding, we can conclude that one factor of moral subjectivity comes from Bayesian inference.
https://en.wikipedia.org/wiki/Bayesian_inference
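To make the point concrete, here is a minimal Python sketch of that Bayesian update, with made-up numbers purely for illustration (the prior, the likelihoods, and the fat-man payoffs are all hypothetical, not data):

Code:
# Minimal sketch (hypothetical numbers): how uncertainty about the
# narrative's assertions feeds into the fat-man variation via Bayes' rule.

def posterior(prior, likelihood_true, likelihood_false):
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    evidence = likelihood_true * prior + likelihood_false * (1 - prior)
    return likelihood_true * prior / evidence

# Belief that the fat man's body really stops the trolley, updated on some
# observed evidence (say, the trolley visibly slowing).
p_stop = posterior(prior=0.2, likelihood_true=0.9, likelihood_false=0.3)

# Expected deaths for each choice, assuming the five cannot save themselves.
expected_deaths_push = 1 + (1 - p_stop) * 5  # the fat man dies; five more if he fails
expected_deaths_wait = 5

print(f"P(body stops trolley) = {p_stop:.2f}")
print(f"E[deaths | push] = {expected_deaths_push:.2f}, E[deaths | wait] = {expected_deaths_wait}")

The decision flips as the posterior moves, which is exactly the subjectivity described above: two observers with different priors or evidence reach different moral conclusions from the same scenario.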
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 14/11/2018 13:39:10
"An eye for an eye, a tooth for a tooth" is nowadays considered by Jewish philosophers as a simplistic misinterpretation of "to the value of an eye....", that is, promoting restorative rather than retributive justice, but the underlying theme is always justice and caution* rather than forgiveness. My father told the story of a Jewish doctor treating a dying ex-SS patient who asked "can you forgive me", to which he answered "Being human, I can forget or ignore, but forgiveness is a matter for your god". I consider that a logical starting point, but those brought up in other faiths may think otherwise.

Intriguingly I find something in common between Jewish and Celtic law, where the individual is held liable without limit for his actions and the state exists to serve the citizen by prosecuting wrongs; in contrast to Roman law, where the citizen exists to serve the state and is granted rights in exchange, and its latter-day substitution of absolution and penitence for sacrifice.

Not that I have much time for faith. As Dawkins pointed out, the only common theme among religions is that each teaches you to despise all the others.




*"Hit me once - shame on you. Hit me twice - shame on me." - sounds sharper in Yiddish, but I 'm a bit rusty.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 14/11/2018 13:54:51

I think Alan's post only shows that morality can be subjective, but doesn't show that it can't be collective. If some moral standards can be shown to be universally applicable, that will answer the question of the topic.

I spend some time sitting on medical research ethics committees. The general guidance seems to boil down to whether the balance of risk and benefit has been fully evaluated and presented such that the famous "man on the Clapham omnibus" would be able to make an informed decision to participate. But in making that judgement, we are often aware that even his brother on the Brooklyn omnibus has a slightly different perspective, and we can only guess at what the average Tokyo commuter might consider acceptable.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 14/11/2018 18:17:12
To begin, I understand the word "universal" to mean within the universe of this world.

I am not attempting to diminish the importance of the question; but who knows what standard of morality might exist in other worlds in the greater universe? Perhaps the moral standard of some planet in a galaxy far, far away might be to destroy any life form existing on any other planet in the universe (kind of like destroying potentially dangerous alien life forms).

However, if "universal" is intended to reference morality on the planet Earth, then again (hopefully I am not belaboring the intent of the question) I am presuming that the word "morality" is to be applied to the human species... presupposing that other life forms on this planet do not have a moral compass (although there appears to be some evidence of other animals being in possession of a type of morality applicable to their species).

Finally, if "universal" and "morality" are meant to apply to the human species of this planet, in my opinion there is no universal morality, due to the religious and cultural influences of each unique society. As extreme examples, it appears that in the Muslim religion it is morally acceptable to kill individuals of any other religion unwilling to convert to Islam, while in some more passive religious groups, killing of any human being is forbidden.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 22:08:22

...perhaps the moral standard of some planet in a galaxy far, far away might be to destroy any life form existing on any other planet in the universe (kind of like destroying potentially dangerous alien life forms).
In this topic, I'm focusing on the search for similar values among different societies, because that is a requirement for something being universal. In your hypothetical case, destroying any life form on another planet cannot be a universal moral standard, because it only applies once the life forms on that particular planet realize that other planets exist, and that other life forms exist there. Until then, this moral value has no guidance function, and hence is useless as a moral standard.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 22:19:49

I spend some time sitting on medical research ethics committees. The general guidance seems to boil down to whether the balance of risk and benefit has been fully evaluated and presented such that the famous "man on the Clapham omnibus" would be able to make an informed decision to participate. But in making that judgement, we are often aware that even his brother on the Brooklyn omnibus has a slightly different perspective, and we can only guess at what the average Tokyo commuter might consider acceptable.
I think you posted this before I finished editing my post about the Bayesian inference that causes subjectivity in real-life judgments of moral actions. The next question is: are there residual subjective factors when Bayesian inference is excluded from the equation?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/11/2018 23:03:57
I'll try to answer the standard questions, starting with "What". In most theories, morality can be seen as a method to distinguish between right and wrong, good and bad, proper and improper. It follows that to reach universal agreement on morality, we first need to agree on what is meant by the words right and wrong, good and bad, proper and improper. This inevitably leads us to the next question: who decides what's right and wrong, good and bad, proper and improper, and why?
The questions of when and where can be answered more easily. A universal moral standard must be applicable anywhere and anytime.
I'll refine the answer to the what question later. Now I'll try to address the who question.
A moral standard can only be imposed on agents or systems with the capability of planned action. In other words, they must have some internal algorithm that determines how to react in certain circumstances. Something that can only perform purely reflexive actions or reactions has no moral obligation.
That's why there are debates about ethics for autonomous systems with artificial intelligence.
Between reflexive systems and complex systems with deep cognitive functions there is a spectrum of cognitive capabilities, and hence various degrees of moral obligation. As an example from more familiar cases, we impose different levels of moral obligation on human babies and adults.
This leads us to the next question: why?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/11/2018 23:47:11
Universal morality as in universally applied by people/aliens - no. Universal morality as in absolute morality - yes. There is an absolute morality, and most attempts at formulating moral rules are attempts to produce that underlying absolute morality. The reason we find so much in common between different attempts at formulating systems of moral rules is that they are all tapping into an underlying absolute morality which they are struggling to pin down precisely, but it is there.

What is absolute morality? The idea of "do unto others as you'd have them do unto you" captures most of it, but it's not quite right. "Always try your best to minimise harm (if that harm isn't cancelled out by the gains for the one who suffers it)" was one of my attempts to formulate the rule properly, and it does the job a lot better, but I'm not sure it's completely right. The correct solution is more of a method than a rule: it's to imagine that you are all the people (and indeed all the sentient beings) involved in a situation and to make yourself as happy as possible with the result of whatever action is determined to produce that maximum happiness. You must imagine that you will have to live each of their lives in turn, so if one of them kills one of the others, you will be both the killer and the one killed, but that killing will be the most moral action if it minimises your suffering and maximises your pleasure overall.

This is how intelligent machines will attempt to calculate what's moral in any situation, but they will often be incapable of accessing or crunching enough data in the time available to make ideal decisions - they can only ever do the best they can with what is available to them, playing the odds.

(This is a kind of utilitarianism. The strongest objection I've seen to utilitarianism is the Mere Addition Paradox, but there's a major mathematical fault in that paradox and anyone rational should throw it in the bin where it belongs.)
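A minimal sketch of that "be all the players" method, assuming happiness can be scored additively (all the scores below are hypothetical, purely for illustration): sum every participant's outcome, since you are imagined to live each life in turn, then pick the action with the highest total.

Code:
# Sketch of the "imagine you are all the sentient beings involved" method.
# Happiness is assumed additive; the scores are purely illustrative.

def total_welfare(outcome):
    # You live every life in turn, so you simply sum everyone's result.
    return sum(outcome.values())

# Welfare of each participant under each available action (hypothetical).
outcomes = {
    "pull lever": {"one on side track": -100, "five on main track": 50, "decider": -5},
    "do nothing": {"one on side track": 0, "five on main track": -500, "decider": -10},
}

best = max(outcomes, key=lambda action: total_welfare(outcomes[action]))
for action, result in outcomes.items():
    print(f"{action}: total welfare = {total_welfare(result)}")
print(f"Most moral action under this method: {best}")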
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/11/2018 05:21:22
Universal morality as in universally applied by people/aliens - no. Universal morality as in absolute morality - yes. There is an absolute morality, and most attempts at formulating moral rules are attempts to produce that underlying absolute morality. The reason we find so much in common between different attempts at formulating systems of moral rules is that they are all tapping into an underlying absolute morality which they are struggling to pin down precisely, but it is there.

What is absolute morality? The idea of "do unto others as you'd have them do unto you" captures most of it, but it's not quite right. "Always try your best to minimise harm (if that harm isn't cancelled out by the gains for the one who suffers it)" was one of my attempts to formulate the rule properly, and it does the job a lot better, but I'm not sure it's completely right. The correct solution is more of a method than a rule: it's to imagine that you are all the people (and indeed all the sentient beings) involved in a situation and to make yourself as happy as possible with the result of whatever action is determined to produce that maximum happiness. You must imagine that you will have to live each of their lives in turn, so if one of them kills one of the others, you will be both the killer and the one killed, but that killing will be the most moral action if it minimises your suffering and maximises your pleasure overall.

This is how intelligent machines will attempt to calculate what's moral in any situation, but they will often be incapable of accessing or crunching enough data in the time available to make ideal decisions - they can only ever do the best they can with what is available to them, playing the odds.

(This is a kind of utilitarianism. The strongest objection I've seen to utilitarianism is the Mere Addition Paradox, but there's a major mathematical fault in that paradox and anyone rational should throw it in the bin where it belongs.)
I realize that there are already diverse moral values followed by humans on Earth, even though we know that humanity occupies just a small portion of the universe in terms of time and space. Finding a moral standard which is applicable universally seems even more improbable.
As a starting point, we can sieve through known moral values applied in most cases, such as the golden rule you mentioned above. But we know that it has some exceptions, such as in sadomasochism. Hence we know that there is a more fundamental reason why it works in most normal cases. We might also want to scrutinize other moral values known to humanity and find out their applicability and exceptions. From there we can conclude why they are sometimes applicable and other times are not.
We must also try to think outside the box, open to suggestions never offered by previous philosophers. Maybe we haven't reached consensus on a universal moral standard because it hasn't been discovered by previous thinkers.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/11/2018 07:58:11
The "golden rule" is subject to sampling error.

It is fairly obvious that a "family" group (could be a biological family or a temporary unit like a ship's crew) will function better if its members can trust each other. The military understand this: selection for specialist duties includes checks for honesty and recognition that you are fighting for your mates first, your country second. So the "greatest happiness for the greatest number" (GHGN) metric is fairly easy to determine where N < 50, say.

Brexit provides a fine example of the breakdown of GHGN for very large N. There is no doubt that a customs union is good for business: whether you are an importer or an exporter, N is small and fewer rules and tariffs mean more profit. But if the nation as a whole (N is large) imports more than it exports, increased business flow overall means more loss, hence devaluation and reduced public budgets. At its simplest, you could model a trading nation as consisting of just two businesses of roughly equal size and turnover, Nimp ≈ Nexp. Good news for any sample of size ≤ 2N is bad news for the whole population if Nimp > Nexp by even a small amount, hence the interesting conundrum "EU good for British business, bad for Britain".
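A toy numerical version of that two-business model, with all figures hypothetical: a customs union boosts both trade flows by the same factor, so each business gains, yet the national trade deficit grows whenever Nimp exceeds Nexp.

Code:
# Toy model of the two-business trading nation (all figures hypothetical).
imports, exports = 110.0, 100.0  # Nimp slightly larger than Nexp
boost = 1.2                      # extra flow from fewer rules and tariffs

deficit_before = imports - exports
deficit_after = boost * imports - boost * exports

print(f"Importer turnover: {imports} -> {boost * imports}   (good for the importer)")
print(f"Exporter turnover: {exports} -> {boost * exports}   (good for the exporter)")
print(f"Trade deficit:     {deficit_before} -> {deficit_after}   (worse for the nation)")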
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/11/2018 11:13:22
Known moral values have their limitations and domains of applicability. Some have broad application, while others only work in very special cases. Some rules are more fundamental than others, so we can apply a hierarchy to determine which rule to follow in case of conflicts among them.
The most fundamental rule, the one which is applicable universally, must take the highest priority, and hence overrides other rules when they conflict. Those other rules can be thought of as shortcuts for processing information faster to reach a moral decision quickly.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/11/2018 11:42:41
But there's your problem - there is no universally applicable rule! Witness the ecstatic joy of the Hitler Jugend, and the total misery they wrought on everyone, including, eventually, themselves.
Title: Re: Is there a universal moral standard?
Post by: guest4091 on 15/11/2018 18:32:21
Truth will never be decided by opinion polls.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 15/11/2018 21:25:45
I did say the Golden Rule is faulty. That's why I came up with a better rule (the harm minimisation one) which removes the major problems with it, but I'm not sure it is perfect. What does appear to be perfect is the method of considering yourself to be all the people involved in a scenario. Let's apply it to the Trolley Problem. You are the person lying on one track which the trolley is not supposed to go down. In other lives, you are the ten people lying on another track which the trolley is scheduled to go down. In another life you are the person by the lever who has to make a decision. How many of yourself do you want to kill/save in this situation? Should you save the ten idiot versions of yourself who have lain down on a track which the trolley is scheduled to go down, or should you save the lesser idiot version of yourself who has lain down on the other track in the stupid assumption that the trolley won't go that way? It's a calculation that needs a lot of guessing unless you have access to a lot of information about the eleven people in question so that you can work out whether it's better to die ten times as mega-morons or once as a standard moron, but it's still a judgement that can be made on the basis of self-interest. All scenarios can be converted into calculations about self-interest on the basis that you are all of the players. This doesn't make the calculations easy, but it does provide a means for producing the best answer from the available information.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/11/2018 21:50:45
But there's your problem - there is no universally applicable rule! Witness the ecstatic joy of the Hitler Jugend, and the total misery they wrought on everyone, including, eventually, themselves.
We cannot prove the nonexistence of something. But we can show that a proposed standard is absurd, paradoxical, superfluous or suboptimal for explaining some phenomena or achieving desired results.
The fact that there were many followers of those out-of-date moral rules can be taken as an indication that there are underlying assumptions and reasons behind them which were accepted by their followers. I'd like to find out what they are.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 00:39:29
Truth will never be decided by opinion polls.

They are merely stepping stones to get closer to the truth. They rely on the assumption that the constituents are mostly rational.
Democracy is the most consequential version of this. If a democratic society fulfills the assumption, it will thrive. Otherwise it will be left behind by other societies that do fulfill it.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 02:33:26
I did say the Golden Rule is faulty. That's why I came up with a better rule (the harm minimisation one) which removes the major problems with it, but I'm not sure it is perfect. What does appear to be perfect is the method of considering yourself to be all the people involved in a scenario. Let's apply it to the Trolley Problem. You are the person lying on one track which the trolley is not supposed to go down. In other lives, you are the ten people lying on another track which the trolley is scheduled to go down. In another life you are the person by the lever who has to make a decision. How many of yourself do you want to kill/save in this situation? Should you save the ten idiot versions of yourself who have lain down on a track which the trolley is scheduled to go down, or should you save the lesser idiot version of yourself who has lain down on the other track in the stupid assumption that the trolley won't go that way? It's a calculation that needs a lot of guessing unless you have access to a lot of information about the eleven people in question so that you can work out whether it's better to die ten times as mega-morons or once as a standard moron, but it's still a judgement that can be made on the basis of self-interest. All scenarios can be converted into calculations about self-interest on the basis that you are all of the players. This doesn't make the calculations easy, but it does provide a means for producing the best answer from the available information.
If we propose minimizing harm as a fundamental moral rule, we first need to agree on its definition. If it's about inflicting pain, then giving painkillers should solve the problem, which is not the case.
If it's about causing death, then the death penalty and euthanasia are in direct violation.
Hence there must be a more fundamental reason why this proposed rule works in most cases but still has some exceptions.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 10:25:28
Before answering the why question, I'll first refine the answer to the what question, relating it to the answer to the who question in my previous post. I said that moral rules only apply to agents or systems with planning capability, which means there is an internal process inside them determining how they react to their surroundings. A plan requires a model, however crude, that represents some portion of reality.
That model/simulation is basically a piece of information, which requires resources like a storage medium and a processing system to maintain its existence. Nowadays such a piece of information is often called a meme.
Just like any other meme, a universal moral standard, if it exists, will compete for resources to maintain its own existence. And the resources for memes are agents or systems with planning capability. Agents or systems with good planning capabilities are conscient beings.
Hence, keeping the existence of conscient beings is one of the most fundamental moral rules, if not the most.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 12:41:52
To answer why keeping the existence of conscient beings is a fundamental moral rule, we can apply reductio ad absurdum to its alternative.
Imagine a rule that actively seeks to destroy conscient beings. It's basically a meme that self-destructs by destroying its own medium. Likewise, conscient beings that don't follow the rule of actively maintaining their existence (or their copies) will likely be outcompeted by those who do, or be struck by random events and cease to exist.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/11/2018 16:09:42
The Trolley Problem should never be dismissed as an academic exercise.  Churchill's decision not to evacuate the Calais garrison in 1940 is a classic case of balancing the certain death of a few against the possible survival of many by delaying the German advance on Dunkirk. Imagine sending this signal:

Quote
Every hour you continue to exist is of the greatest help to the B.E.F.   Government has therefore decided you must continue to fight. Have greatest possible admiration for your splendid stand. Evacuation will not (repeat not) take place, and craft required for above purposes are to return to Dover. Verity and Windsor to cover Commander Mine-sweeping and his retirement.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 22:41:25
The Trolley Problem should never be dismissed as an academic exercise.  Churchill's decision not to evacuate the Calais garrison in 1940 is a classic case of balancing the certain death of a few against the possible survival of many by delaying the German advance on Dunkirk. Imagine sending this signal:

Quote
Every hour you continue to exist is of the greatest help to the B.E.F.   Government has therefore decided you must continue to fight. Have greatest possible admiration for your splendid stand. Evacuation will not (repeat not) take place, and craft required for above purposes are to return to Dover. Verity and Windsor to cover Commander Mine-sweeping and his retirement.
I'll cover that in more detail when answering the how question.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 22:53:57
To answer why keeping the existence of conscient beings is a fundamental moral rule, we can apply reductio ad absurdum to its alternative.
Imagine a rule that actively seeks to destroy conscient beings. It's basically a meme that self-destructs by destroying its own medium. Likewise, conscient beings that don't follow the rule of actively maintaining their existence (or their copies) will likely be outcompeted by those who do, or be struck by random events and cease to exist.
Alternatively, imagine that there are rules more fundamental than the preservation of conscient beings. Making sure that those rules are followed requires that conscient beings exist. That makes the preservation of conscient beings a prerequisite rule, which takes higher priority.
It's similar to a chess game: your goal is to capture the opponent's king, but you need to make sure that your own king isn't captured first.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 16/11/2018 23:13:38
If we propose minimizing harm as a fundamental moral rule, we first need to agree on its definition.

Do you understand what harms you? You are, I assume, a sentient being with feelings, some of which are unpleasant, and the experience of those unpleasant ones is classed as suffering. If I do something that causes you to suffer, that is doing you harm. If you have done harm to me first, I can justify harming you in return, and by doing so, I might discourage you from harming me again. The result will be that both of us will be harmed less, because we realise that if we harm the other, the other will harm us in return, and that an attempt to kill the other would be risky, as the other person could anticipate that attempt and pre-empt it in a lethal way. Also, we have friends who will seek to kill any outsider who kills a member of the group, so killing anyone is unsafe, as there will be many people motivated to hunt you down and kill you in return. Morality comes out of self-interest - we maximise our quality of life by not harming each other, or at least keeping harm to a minimum. I might annoy you by making a lot of noise while building my shelter, but when yours rots away and you have to build a new one, you'll be the one making a lot of noise, so we tolerate that kind of disturbance. Where morality becomes more universal is when we have the wit to recognise that it's okay for John to annoy Jack by building a house too, but that we should defend Jack from John when John seeks to steal material for his house by taking it off Jack's house. We include everyone in the system and treat them all as equally important. If we want this to be managed by intelligent machines (and we clearly do want that), then they need to make the same kind of judgements, weighing up the distribution of harm and minimising the kinds of harm that shouldn't be tolerated, so they will let us annoy each other by making a noise and competing for the same resources, but they won't let us hit each other over the head with clubs unless we're levelling the score against someone who has harmed someone unfairly.

I don't understand why so many people who want to discuss morality have difficulty understanding what harm is, but maybe they're incapable of suffering and just don't get the point of morality as a result.

Quote
If it's about inflicting pain, then giving painkillers should solve the problem, which is not the case.

There are more ways to upset people than pain. If you lock someone up in a cage and give them painkillers and drugs to make them feel happy, they're still going to be dissatisfied with the fact that you've stolen their life from them, and billions of others will be upset about what you've done too, scared that you'll do it to them next. Anyone who thinks drugging people makes everything fine should lead by example and do it to themselves instead.

Quote
If it's about causing death, then the death penalty and euthanasia are in direct violation.

There are occasions when killing people is moral where it reduces suffering and there isn't sufficient pleasure available to the people being killed to cancel out that suffering, or where killing them prevents them from causing unnecessary suffering to others.

Quote
Hence there must be a more fundamental reason why this proposed rule works in most cases but still has some exceptions.

Does it have any exceptions? Show me one.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/11/2018 23:48:22
Finally we get to the last question: how. There are some basic strategies to preserve information which I borrow from the IT business:
Choosing robust media.
Creating multilayer protection.
Creating backups.
Creating diversity to avoid common-mode failures.
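As a sketch of how those four strategies might be checked in an actual IT setting (the data model and thresholds here are hypothetical, not any standard), consider a set of replicas of one piece of information:

Code:
# Hypothetical check of a replica set against the four strategies.
replicas = [
    {"medium": "hard disk",    "location": "office",   "layers": ["checksum", "RAID"]},
    {"medium": "optical disk", "location": "off-site", "layers": ["checksum", "vault"]},
    {"medium": "cloud object", "location": "region-2", "layers": ["checksum", "encryption"]},
]

enough_backups = len(replicas) >= 3                            # creating backups
diverse_media  = len({r["medium"] for r in replicas}) > 1      # diversity of media
diverse_sites  = len({r["location"] for r in replicas}) > 1    # diversity of location
layered        = all(len(r["layers"]) >= 2 for r in replicas)  # multilayer protection

print("policy satisfied:", enough_backups and diverse_media and diverse_sites and layered)

Robust media is the remaining strategy: it concerns the choice of each "medium" entry itself rather than a property such code can check.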
Title: Re: Is there a universal moral standard?
Post by: evan_au on 17/11/2018 00:34:35
Quote from: hamdani yusuf
Hence, keeping the existence of conscient beings is one of the most fundamental moral rules, if not the most.
There seems to be some debate about which conscient (conscious?) beings this moral rule applies to...
- Some apply it to just members of their own family or tribe
- Others apply it to just members of their own country or religion
- Thinking more broadly, are elephants conscious, or dolphins? How should we treat them?
- What about our pet dog or cat?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 02:16:52
Does it have any exceptions? Show me one.
Imagine a genius who wants to minimize suffering by creating a virus that makes people sterile. He prevents the suffering of countless people in the next generation.
Or the virus makes people not want to have kids.
Or replace the virus with a meme.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 02:37:28
Quote from: hamdani yusuf
Hence, keeping the existence of conscient beings is one of the most fundamental moral rules, if not the most.
There seems to be some debate about which conscient (conscious?) beings this moral rule applies to...
- Some apply it to just members of their own family or tribe
- Others apply it to just members of their own country or religion
- Thinking more broadly, are elephants conscious, or dolphins? How should we treat them?
- What about our pet dog or cat?

In my previous post answering the what question, I said that there is a spectrum of consciousness, with multiple dimensions to its level. In data processing capability alone there are the depth and breadth of the neural networks, the processing speed and data storage capacity, and also data validity/robustness and error correction capability. In the input/output system there can be various levels of accuracy and precision. Those levels apply generally, whether or not the systems are organic/biological.
The universal rule should concern the existence of consciousness in the eventual results, as required by the timelessness of the rule.
I'll address your other questions while refining my answer to the how question. There I will show how all known moral values are driven by the deepest desire to follow the universal moral standard. The diversity comes from the Bayesian inferences held by the authors of those rules, based on their knowledge at the time the rules were conceived.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 06:08:54
Finally we get to the last question: how. There are some basic strategies to preserve information which I borrow from the IT business:
Choosing robust media.
Creating multilayer protection.
Creating backups.
Creating diversity to avoid common-mode failures.

In the opening of this topic I said that it's a spin-off from my previous post titled universal utopia, which showed that consciousness is a product of natural processes. The evolution of consciousness is a continuation/extension of biological evolution, which is in turn a continuation of chemical and physical evolution. There I said that creating copies is one important strategy to preserve a system's existence. It increases a system's chance of surviving random events in the environment. But it also requires more resources, which must be shared with other strategies to achieve goals effectively and efficiently.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 10:49:50
Note that a system's copies count as part of its environment, hence they can influence the results of its activities. A conscious being can be harmful to other conscious beings, and morality is a method to prevent that.

Apparently, spending all available resources on creating copies is not the best strategy to achieve the goal.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 12:17:17
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance of surviving if they optimize the distribution of resources to preserve conscious beings.
There should also be a mechanism to eradicate or suppress lethal or detrimental memes. The memes for this particular purpose are the moral rules.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 17/11/2018 20:25:40
Does it have any exceptions? Show me one.
Imagine a genius who wants to minimize suffering by creating a virus that makes people sterile. He prevents the suffering of countless people in the next generation.
Or the virus makes people not want to have kids.
Or replace the virus with a meme.

That is not a genius, but a selfish bastard who wants less enjoyment for others and more for himself (because he will feel happier if they don't exist). The reality is that people overwhelmingly enjoy existing, and the minority who don't enjoy it (usually because of difficult circumstances) live in the hope of better times to come. There is no valid excuse for eliminating them. They generally want to have children and can be deeply depressed if they are unable to do so. Modifying people not to want to have children is a monumental assault unless they willingly agree to it. You cannot simply convert an immoral action into a moral one by partially killing someone (by changing them to be less than they were before). If you kill someone, they don't mind being dead once they're dead, but that's not an argument that painless murder is acceptable. Modifying people by force not to care about loss of capability is immoral in the extreme (except in extreme cases where it isn't, such as where a population needs to be reduced for environmental reasons, and even then it would need to be a case where some people need to stop breeding altogether in order to keep within sustainable limits - in such a case, you would have to do this to the people of lowest quality, and those should ideally be the ones with the lowest moral standards - there are a lot of rape-and-pillage genes which could do with eradication).
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/11/2018 21:31:03

That is not a genius, but a selfish bastard who wants less enjoyment for others and more for himself (because he will feel happier if they don't exist). The reality is that people overwhelmingly enjoy existing, and the minority who don't enjoy it (usually because of difficult circumstances) live in the hope of better times to come. There is no valid excuse for eliminating them. They generally want to have children and can be deeply depressed if they are unable to do so. Modifying people not to want to have children is a monumental assault unless they willingly agree to it. You cannot simply convert an immoral action into a moral one by partially killing someone (by changing them to be less than they were before). If you kill someone, they don't mind being dead once they're dead, but that's not an argument that painless murder is acceptable. Modifying people by force not to care about loss of capability is immoral in the extreme (except in extreme cases where it isn't, such as where a population needs to be reduced for environmental reasons, and even then it would need to be a case where some people need to stop breeding altogether in order to keep within sustainable limits - in such a case, you would have to do this to the people of lowest quality, and those should ideally be the ones with the lowest moral standards - there are a lot of rape-and-pillage genes which could do with eradication).
There must be a reason why people want to reproduce, feel joy and happiness, and avoid pain, but are also willing to conserve resources, make sacrifices, be altruistic, feel empathy, eradicate unwanted things, create laws, etc. They seem like unrelated, scattered pieces of a puzzle. Here I want to assemble them into one big picture using a universal moral standard.
Title: Re: Is there a universal moral standard?
Post by: guest45734 on 17/11/2018 21:54:19
There must be a reason why people want to reproduce, feel joy and happiness, and avoid pain, but are also willing to conserve resources, make sacrifices, be altruistic, feel empathy, eradicate unwanted things, create laws, etc. They seem like unrelated, scattered pieces of a puzzle. Here I want to assemble them into one big picture using a universal moral standard.

This sounds like basic animal instincts without laws or religion, both of which evolve. It may be that at some point in the future science becomes religion, and the laws protect all animals equally. This would of course involve eradicating all religious belief and accepting that all life forms are equal and food for each other. ?????
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/11/2018 00:31:57
There must be a reason why people want to reproduce, feel joy and happiness, and avoid pain, but are also willing to conserve resources, make sacrifices, be altruistic, feel empathy, eradicate unwanted things, create laws, etc. They seem like unrelated, scattered pieces of a puzzle. Here I want to assemble them into one big picture using a universal moral standard.

This sounds like basic animal instincts without laws or religion, both of which evolve. It may be that at some point in the future science becomes religion, and the laws protect all animals equally. This would of course involve eradicating all religious belief and accepting that all life forms are equal and food for each other. ?????
Science is a useful tool to achieve universal goals by improving the accuracy and precision of our models of reality, so that conscious beings can make better plans and reduce unexpected results.
Religious belief will still exist in history books as a reminder to future generations of the gullibility of their predecessors.
Some forms of life have a better chance of surviving than others, hence resources should be distributed wisely to maximize the chance of survival of conscious beings.
The risk of common-mode failure prevents us from focusing on only one form of consciousness.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/11/2018 00:51:09
Finally we get to the last question: how. There are some basic strategies to preserve information which I borrow from the IT business:
Choosing robust media.
Creating multilayer protection.
Creating backups.
Creating diversity to avoid common-mode failures.

Now I'll try to explain each of those strategies. For choosing robust media, biological evolution has provided brainy organisms. As far as I know, the human species is the most successful one. In conjunction with the other strategies, humans developed written language, books, and computers with various physical media such as magnetic and optical disks, as well as solid-state memories.
For multilayer protection, there have been skulls, clothes, caves, tents, houses, bunkers, boats, submarines, spaceships, and the ISS. Constitutions, laws, morality, standards, and even religion are also forms of this strategy, especially for protecting consciousness from conscious beings themselves.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/11/2018 01:00:04
Reproduction is a way to create backups. Other ways include books, DNA sequences, internet sites, and backup drives. Mars or Moon colonies can be seen as backup plans.
Those colonies can also play a role in the diversity strategy, as an extension of human colonies on Earth. This strategy protects the existence of consciousness from natural disasters such as asteroid impacts, floods, hurricanes, earthquakes, volcanoes, avalanches, droughts, famines, and disease.
If you know other useful strategies that I might have overlooked, please share them here.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/11/2018 09:28:25
In a previous post I answered the when and where questions with simply anytime and anywhere. But since the applicability of moral rules is limited to conscious beings, the answers can be narrowed down to when and where conscious beings exist.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/11/2018 07:13:34
Now that the foundations for a universal moral standard are generally complete, I'll demonstrate how to use them in a real-life case. Let's start with the famous trolley problem.
Quote
You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

Do nothing and allow the trolley to kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option?
Let's start with the most basic version, with the following assumptions:
1. There's no uncertainty about the statements describing the situation.
2. The outcome depends solely on the choice made by the subject. Nothing else can interfere with the course of events.
3. All six people have equal positive contributions to society.
4. The switching action requires a negligible amount of resources.

Here the math shows that you should pull the lever.
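A minimal sketch of that math, scoring each person as one unit of positive contribution (assumption 3) with a free switch (assumption 4); the scoring is illustrative, not a claim about how lives should be valued:

Code:
# Expected societal value lost by each option, under assumptions 1-4.
def expected_loss(victims, p_outcome=1.0, switching_cost=0.0):
    # Value lost if the chosen option lets `victims` die with probability p.
    return p_outcome * sum(victims) + switching_cost

main_track = [1, 1, 1, 1, 1]  # five people, equal positive contribution
side_track = [1]              # one person

loss_do_nothing = expected_loss(main_track)
loss_pull_lever = expected_loss(side_track)

print(f"loss(do nothing) = {loss_do_nothing}, loss(pull lever) = {loss_pull_lever}")
print("Pull the lever." if loss_pull_lever < loss_do_nothing else "Do nothing.")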
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/11/2018 17:52:11
The calculation might be different if one of those assumptions changes:
4. If the switching consumes a lot of resources which could be used to save even more people.
3. If the five people have a negative impact on society, e.g. terrorists.
2. If you have a way to stop the trolley, or to tell those people to get away from the track.
1. If there's significant uncertainty about the cause-and-effect relationships describing the situation, or about the assessment of the other assumptions.
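Reusing the same illustrative scoring, relaxing assumption 3 alone is enough to flip the decision, e.g. if the five have a negative contribution:

Code:
# Same hypothetical scoring with assumption 3 relaxed: the five on the main
# track now have negative contribution (e.g. terrorists); the one is still +1.
def expected_loss(victims, p_outcome=1.0, switching_cost=0.0):
    return p_outcome * sum(victims) + switching_cost

loss_do_nothing = expected_loss([-2, -2, -2, -2, -2])  # letting them die removes -10
loss_pull_lever = expected_loss([1])

print(f"loss(do nothing) = {loss_do_nothing}, loss(pull lever) = {loss_pull_lever}")
print("Pull the lever." if loss_pull_lever < loss_do_nothing else "Do nothing.")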
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 19/11/2018 22:03:03
It all comes down to how you handle the data to be as right as you can be for the available information. Add another fact and the answer can change - it can switch every time another piece of information is provided. Some of the information is prior knowledge of previous situations and the kinds of guesses that might be appropriate as a substitute for hard information. For example, if the only previous case involved a terrorist tying five old people to one line and a child to the other, that could affect the calculations a bit. Might it be a copycat terrorist? Was the previous case widely publicised or was it kept quiet? If the former, then the terrorist this time might have tied five children to one line and one old person to the other, hoping that the person by the lever will think, "I'm not falling for that trick - it'll be five old people and one child again, so I'll save the child," thereby leading to five children being killed.

The moral decision itself isn't hard - it's crunching the data to try to get the best outcome when there are lots of unknown factors that can make it close to random luck whether the less damaging outcome occurs, and if there's enough trickery involved, the best calculation could be guaranteed to result in the worse outcome simply because all the available data has been carefully selected to mislead the person (or machine) making the decision.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/11/2018 22:30:16
It all comes down to how you handle the data to be as right as you can be for the available information. Add another fact and the answer can change - it can switch every time another piece of information is provided. Some of the information is prior knowledge of previous situations and the kinds of guesses that might be appropriate as a substitute for hard information. For example, if the only previous case involved a terrorist tying five old people to one line and a child to the other, that could affect the calculations a bit. Might it be a copycat terrorist? Was the previous case widely publicised or was it kept quiet? If the former, then the terrorist this time might have tied five children to one line and one old person to the other, hoping that the person by the lever will think, "I'm not falling for that trick - it'll be five old people and one child again, so I'll save the child," thereby leading to five children being killed.

The moral decision itself isn't hard - it's crunching the data to try to get the best outcome when there are lots of unknown factors that can make it close to random luck whether the less damaging outcome occurs, and if there's enough trickery involved, the best calculation could be guaranteed to result in the worse outcome simply because all the available data has been carefully selected to mislead the person (or machine) making the decision.
That's right. That's why we need moral rules in the first place, and we need a moral standard that we can all agree on. And we need to educate people about it, as young as possible, to minimize the damage they could do and maximize their contribution to society.
We also need to educate people about how the world works through science. This can prevent moral people from doing immoral actions; the human sacrifices of the Aztecs are an example.
Trickery by other conscious beings is not the only thing that can mislead a conscious being into making decisions with unintended results. It can also come from a false understanding of reality. Some alternative medicines based on pseudoscience are examples.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 19/11/2018 22:41:45
morality is a standard established by a ruling class, primarily to benefit themselves.
i conceive of, and establish, my own morality...i am a one-man ruling class; and my morality benefits myself and any others i choose to protect.

all others' concepts of morality can kiss my ass.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2018 00:48:59
The preference to save the child over the old people is based on the following assumptions:
1. The old people will die soon anyway, while the child still has a long life ahead.
2. The social and physical environment is conducive to raising children.
3. The child can be raised well, so he/she can contribute positively to society.
Again, if those assumptions can be proven false, the preference may change; a rough sketch follows below.
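Here is a minimal sketch of how such assumptions might be weighted numerically; the remaining-years figures, probability, and contribution values are illustrative assumptions, not data.
Code:
def expected_contribution(remaining_years, p_raised_well=1.0, value_per_year=1.0):
    # Expected future contribution of saving someone, per the assumptions above.
    return remaining_years * p_raised_well * value_per_year

child = expected_contribution(remaining_years=70, p_raised_well=0.9)  # assumptions 1-3
five_old_people = 5 * expected_contribution(remaining_years=5)        # assumption 1

print("save the child" if child > five_old_people else "save the five old people")
# Weakening assumption 3 (say, p_raised_well=0.05) flips the answer,
# which is the point: the preference tracks the assumptions.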
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2018 00:52:32
morality is a standard established by a ruling class, primarily to benefit themselves.
i conceive of, and establish, my own morality...i am a one-man ruling class; and my morality benefits myself and any others i choose to protect.

all others' concepts of morality can kiss my ass.

Your moral rules cannot be a universal standard, because they're limited in time and space. They don't apply when and where you have no influence, such as before you were born, after you die, or in other countries.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2018 04:46:54
I'll show another variation of the trolley problem, where the one sacrificed for the five is a relative or romantic partner. Survey data show that respondents are much less likely to be willing to make that sacrifice.
IMO the respondents change their decision due to the following assumptions:
1. The relative is known to have positive value for them, while the five people are strangers with unknown value; they might even be dangerous.
2. The loss of their relative will make them sad, which might hinder them from contributing positively to society.

From the examples above, I want to show that to prevent dispute or confusion in moral decision making, we need to state explicitly all the assumptions being made, including (especially) the hidden ones.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2018 05:18:34
Here is an alternative case, due to Judith Jarvis Thomson,[3] containing similar numbers and results, but without a trolley:
A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor. Do you support the morality of the doctor killing that tourist and providing his healthy organs to those five dying persons to save their lives?

Here are the explicit assumptions:
1. The transplant surgeon is brilliant, which means he/she can perform the operation with an (almost) 100% success rate.
2. Each of the five patients needs a different organ and will die without it.
3. There are no organs available to perform any of these five transplant operations within the time the patients have left.
4. The traveler's organs are compatible with all five of the dying patients.
5. The traveler is a stranger, so if the young man were to disappear, no one would suspect the doctor.

Here are the implicit assumptions:
1. Apart from the failing organs, all of those people have the same life expectancy.
2. Each of them can make an equally positive contribution to society. Of course, if each of them made an equally negative contribution, the decision might change.

Let's assume that there is no uncertainty about any of those assumptions. At a glance, it seems obvious that the doctor should kill the tourist and give his healthy organs to the five dying persons to save their lives.
In the study, people are less likely to choose the sacrifice than in the trolley problem. The author attributes this to direct involvement, but there's another possible reason: I think it's likely that the respondents aren't convinced by the assumptions, especially the operation's success rate, which is unrealistic.

But we need to be open-minded, look for better options, and avoid falling into a false dilemma ( https://en.wikipedia.org/wiki/False_dilemma ).
For example, instead of killing the healthy traveler, the doctor could sacrifice one of the patients. This reduces the possible loss from a failed operation, and it conserves resources, since the doctor only needs to perform four transplants instead of five, with the same end result: one dead person and five living ones. Most importantly, no one ends up worse off than if the doctor did nothing. The sacrificed patient loses four organs, but he/she was going to die anyway.
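Here is a minimal sketch of that comparison, counting expected survivors among the six people involved; the per-operation success rate p is a hypothetical placeholder.
Code:
def expected_survivors(option, p=0.9):
    # Expected survivors among the six people, for each option above.
    if option == "do nothing":
        return 1.0            # the traveler lives, the five patients die
    if option == "kill the traveler":
        return 5 * p          # five transplants, each succeeding with probability p
    if option == "sacrifice one patient":
        return 1.0 + 4 * p    # the traveler is untouched; four transplants
    raise ValueError(option)

for option in ("do nothing", "kill the traveler", "sacrifice one patient"):
    print(option, expected_survivors(option))
# Since 1 + 4p >= 5p whenever p <= 1, sacrificing one patient never does
# worse than killing the traveler, matching the argument above.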

Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2018 05:37:25
The universal rule should concern itself with the existence of consciousness in the eventual results, as required by the timelessness of the rule.
Since a universal moral standard is concerned with long-term results, it would take a lot of factors to calculate, which might make it impractical. Bad results might arrive before the decision is made because the calculation takes so long, and the factors influencing the calculation might have changed before it is complete.
Hence we need to create shortcuts, rules of thumb, or a hash table of precomputed answers to deal with frequently occurring situations. They must be reasonably easy to calculate and work in most cases. Their application should align with the spirit of the universal moral standard. This comparison might be made retrospectively, when the decision was already made before the calculation based on the universal moral standard finished. When they conflict, an exception should be made to the application of those shortcut rules.
As an analogy, in chess we have the "chess piece relative value" system.
Quote
In chess, the chess piece relative value system conventionally assigns a point value to each piece when assessing its relative strength in potential exchanges. These values help determine how valuable a piece is strategically. They play no formal role in the game but are useful to players and are also used in computer chess to help the computer evaluate positions.

Calculations of the value of pieces provide only a rough idea of the state of play. The exact piece values will depend on the game situation, and can differ considerably from those given here. In some positions, a well-placed piece might be much more valuable than indicated by heuristics, while a badly placed piece may be completely trapped and, thus, almost worthless.

Valuations almost always assign the value 1 point to pawns (typically as the average value of a pawn in the starting position). Computer programs often represent the values of pieces and positions in terms of 'centipawns' (cp), where 100 cp = 1 pawn, which allows strategic features of the position, worth less than a single pawn, to be evaluated without requiring fractions.

Edward Lasker said "It is difficult to compare the relative value of different pieces, as so much depends on the peculiarities of the position...". Nevertheless, he said that bishops and knights (minor pieces) were equal, rooks are worth a minor piece plus one or two pawns, and a queen is worth three minor pieces or two rooks (Lasker 1915:11).
In real chess games, we often see exceptions to this rule: a queen might be exchanged for a pawn to reach a better position and eventually win the game.
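Here is a minimal sketch of the shortcut idea using this chess analogy: a precomputed value table gives a fast, usually-good answer, and a fuller calculation overrides it when there is time for one. The override hook and decision rule are illustrative assumptions.
Code:
PIECE_VALUE = {"pawn": 100, "knight": 300, "bishop": 300,
               "rook": 500, "queen": 900}   # centipawns, as in the quote

def quick_exchange_score(piece_gained, piece_lost):
    # Shortcut rule: compare table values - fast, but blind to position.
    return PIECE_VALUE[piece_gained] - PIECE_VALUE[piece_lost]

def good_exchange(piece_gained, piece_lost, deep_search=None):
    # Use the expensive full calculation when one is available in time;
    # otherwise fall back on the rule of thumb.
    if deep_search is not None:
        return deep_search(piece_gained, piece_lost)
    return quick_exchange_score(piece_gained, piece_lost) >= 0

print(good_exchange("pawn", "queen"))   # False: the shortcut rejects the trade
# A deeper calculation can justify the exception described above:
print(good_exchange("pawn", "queen", deep_search=lambda g, l: True))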
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 20/11/2018 19:54:54
The preference to save the child over the old people is based on the following assumptions:
1. The old people will die soon anyway, while the child still has a long life ahead.
2. The social and physical environment is conducive to raising children.
3. The child can be raised well, so he/she can contribute positively to society.
Again, if those assumptions can be proven false, the preference may change.

Most old people tied to the track would choose to save the child if they had any say in the matter, unless they knew that the child is a bully/thug/vandal. They are well placed to judge the value of what's left of their lives compared with the child's potential future, but they also recognise that they're still more valuable than any bad child who will spend a lifetime doing harm to others.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 20/11/2018 20:13:10
I'll show another variation of the trolley problem, where the one sacrificed for the five is a relative or romantic partner. Survey data show that respondents are much less likely to be willing to make that sacrifice.

People who are emotionally attached to some of the individuals involved in a situation of that kind cannot be expected to make impartial moral judgements, or if they can, they can't be expected to apply them impartially - it is not wrong for someone to save someone they care about at the expense of more valuable people, but it is heroic if they don't. It's possible in many such situations that they'd be so tortured by making the "right" decision due to the loss that their own trauma would be bigger than that of all the relatives and friends of the other victims if the "wrong" decision was made, which could make the "wrong" decision potentially right. Fortunately though, the intelligent machines that we want to be making moral decisions will not have such biases and will not be traumatised by making rational decisions, so they will be able to make correct decisions in all cases.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 20/11/2018 21:17:02
Let's assume that there is no uncertainty about any of those assumptions. At a glance, it seems obvious that the doctor should kill the tourist and give his healthy organs to the five dying persons to save their lives.

No it doesn't - it is immediately obvious that one of the ill people can be sacrificed instead. However, you can introduce more information to rule that out - the healthy traveller's organs are compatible with all the others, but none of the others are compatible with each other. We now have a restored dilemma in which killing one person saves more. (This ignores organ rejection and decline - most transplanted hearts will fail within a decade, for example, but let's imagine that there's no such problem.)

One of the important factors here is that no one wants to live in a world where they could be killed in such a way to save the lives of ill people (who wouldn't want to be saved in such a way either) - it's bad enough that you could die in accidents caused by factors outside of anyone's control, but you don't want to live in fear that you'll be selected for death to mend other people who may be to blame for their own medical problem or who may have bad genes which really shouldn't be passed on. You also don't want the fact that you've been careful to stay as healthy as possible to turn you into a preferred donor either - that could drive people to live unhealthy lives as it might be safer to risk being someone who needs a transplant than to be a good organ donor. However, if people's own morality is taken into account, it would serve someone right if they were used in this way if they've spent their life abusing others. As with all other moral issues, you have to identify as many factors as possible and then weight them appropriately so that the best outcome is more likely to be produced. A lot of the data needed to make ideal decisions isn't available yet though - it would take a lot of studying to find out how people feel in such situations and afterwards so that the total amount of harm can be counted up.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 20/11/2018 21:28:51
Since a universal moral standard is concerned with long-term results, it would take a lot of factors to calculate, which might make it impractical. Bad results might arrive before the decision is made because the calculation takes so long, and the factors influencing the calculation might have changed before it is complete.
Hence we need to create shortcuts, rules of thumb, or a hash table of precomputed answers to deal with frequently occurring situations. They must be reasonably easy to calculate and work in most cases.

That's correct. When machines calculate the best course of action, they will apply all the most important factors in order of importance, getting better answers after each factor has been factored in, so when they reach the point where they have to act, they'll do the best they can in the time available. It's harder for people to process data in that way, but they will still have to make rapid decisions in many cases and will likely make a lot of bad decisions. If a set of simple rules can improve their performance, those rules should be used even if they're far from ideal. Different individuals should perhaps have their own set of rules designed for them by intelligent machines, modified over time as they gain experience, and as their mental powers decline too. AGI (the artificial general intelligence in future machines) will be able to design all those rules for individual people, so it isn't something we need to do directly - the important task for us is to make sure AGI is able to calculate morality correctly.
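As a minimal sketch of that idea: the factors below are applied in order of importance, and whatever estimate exists when the deadline arrives is acted on. The factor names, weights, deadline, and options are hypothetical placeholders.
Code:
import time

def decide(options, factors, deadline_s=0.01):
    # factors: (weight, scoring_function) pairs, most important first.
    # Scores are refined factor by factor; when time runs out, act on
    # the best estimate so far.
    start = time.monotonic()
    scores = [0.0] * len(options)
    for weight, score in factors:
        if time.monotonic() - start > deadline_s:
            break
        for i, option in enumerate(options):
            scores[i] += weight * score(option)
    return options[scores.index(max(scores))]

factors = [
    (5.0, lambda o: o["lives_saved"]),       # most important factor first
    (1.0, lambda o: -o["suffering"]),
    (0.1, lambda o: -o["property_damage"]),
]
options = [
    {"name": "divert", "lives_saved": 4, "suffering": 2, "property_damage": 1},
    {"name": "do nothing", "lives_saved": 0, "suffering": 5, "property_damage": 0},
]
print(decide(options, factors)["name"])      # -> divert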
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2018 11:39:59
Let's assume that there is no uncertainty about any of those assumptions. At a glance, it seems obvious that the doctor should kill the tourist and give his healthy organs to the five dying persons to save their lives.

No it doesn't - it is immediately obvious that one of the ill people can be sacrificed instead. However, you can introduce more information to rule that out - the healthy traveller's organs are compatible with all the others, but none of the others are compatible with each other. We now have a restored dilemma in which killing one person saves more. (This ignores organ rejection and decline - most transplanted hearts will fail within a decade, for example, but let's imagine that there's no such problem.)

One of the important factors here is that no one wants to live in a world where they could be killed in such a way to save the lives of ill people (who wouldn't want to be saved in such a way either) - it's bad enough that you could die in accidents caused by factors outside of anyone's control, but you don't want to live in fear that you'll be selected for death to mend other people who may be to blame for their own medical problem or who may have bad genes which really shouldn't be passed on. You also don't want the fact that you've been careful to stay as healthy as possible to turn you into a preferred donor either - that could drive people to live unhealthy lives as it might be safer to risk being someone who needs a transplant than to be a good organ donor. However, if people's own morality is taken into account, it would serve someone right if they were used in this way if they've spent their life abusing others. As with all other moral issues, you have to identify as many factors as possible and then weight them appropriately so that the best outcome is more likely to be produced. A lot of the data needed to make ideal decisions isn't available yet though - it would take a lot of studying to find out how people feel in such situations and afterwards so that the total amount of harm can be counted up.
Thanks for contributing to this discussion. I agree with most of your post above, so I'll try to identify where our opinions split. It's likely that we made different assumptions.
That's why I opened the statements you quoted by removing uncertainty from the assumptions, and started the second sentence with "at a glance". Of course the decision might change when the assumptions change.
Apparently you reject the equality of life expectancy.
I admit that when proposing to sacrifice one of the ill people, I omitted the possibility that none of them are compatible with each other. That's because I considered that organ rejection is caused by the immune system reacting to mismatched genes. Hence I made a hidden assumption: since the traveler's organs are compatible with all of those ill people, their organs should be compatible with each other. The proposed plan is preferred because no one ends up worse off than with no action.
I only wanted to emphasize that we should be open-minded and think outside the box to get the best results with minimal negative impact. Perhaps with future 3D-printed organ technology we will no longer have to deal with this kind of dilemma.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2018 12:22:05
Since a universal moral standard is concerned with long-term results, it would take a lot of factors to calculate, which might make it impractical. Bad results might arrive before the decision is made because the calculation takes so long, and the factors influencing the calculation might have changed before it is complete. Hence we need to create shortcuts, rules of thumb, or a hash table of precomputed answers to deal with frequently occurring situations. They must be reasonably easy to calculate and work in most cases. Their application should align with the spirit of the universal moral standard. This comparison might be made retrospectively, when the decision was already made before the calculation based on the universal moral standard finished. When they conflict, an exception should be made to the application of those shortcut rules.
Biological evolution has provided us with a basic and simple shortcut rule: avoid pain. This can be done through reflexes, which are very fast since they don't involve the brain. Slightly more complex rules are our instincts to seek pleasure and avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 21/11/2018 19:08:25
The mathematical resolution of the simplest trolley problem assumes that your universal moral standard is to maximise the number of live humans. Since this will inevitably lead to the starvation of our descendants, it is a questionable basis for ethics.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 21/11/2018 20:15:17
Biological evolution has provided us with a basic and simple shortcut rule: avoid pain. This can be done through reflexes, which are very fast since they don't involve the brain. Slightly more complex rules are our instincts to seek pleasure and avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.

I can't follow that. What's the tool there and what's the goal?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 21/11/2018 20:23:02
The mathematical resolution of the simplest trolley problem assumes that your universal moral standard is to maximise the number of live humans. Since this will inevitably lead to the starvation of our descendants, it is a questionable basis for ethics.

The assumption is that saving more people is better. That's quite different from saying that there should be more and more people until everyone starves. For your objection to be relevant, you'd have to set the trolley problem in a place where people are starving and a cull would be a good idea, but you'd want that cull to take out the most immoral people rather than people who have likely been tied to a track by the most immoral person. So, you minimise the number of people killed by the trolley, then you count how many people still need to be culled and you take out that number of the most immoral ones.

The only place I've heard the idea that we should maximise the number of live humans even if that leads to them all being right on the edge of starvation is with the Mere Addition Paradox, but that paradox contains a major mathematical error which renders it a pile of nonsense.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/11/2018 06:06:07
The mathematical resolution of the simplest trolley problem assumes that your universal moral standard is to maximise the number of live humans. Since this will inevitably lead to the starvation of our descendants, it is a questionable basis for ethics.
A lot of disputes may arise if we don't agree on the scope of the discussion. I've stated that the universal moral standard is not limited to the existence of human beings; it applies as long as there are conscious beings. It should have been in place before modern humans existed, and it should still be in place after humans have evolved into other species, as long as conscious beings exist.
The universal moral standard is concerned with results in the long run, which means it covers an extended time scale.
If we have 10 billion people living happily for one generation who then go extinct in the next, that doesn't fulfill the goal of the universal moral standard.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/11/2018 08:01:56
Biological evolution has provided us with a basic and simple shortcut rule: avoid pain. This can be done through reflexes, which are very fast since they don't involve the brain. Slightly more complex rules are our instincts to seek pleasure and avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.

I can't follow that. What's the tool there and what's the goal?
The goal is what is preferred in the long run. The rules used as shortcuts are the tool.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 22/11/2018 08:37:41
No matter what the species or timescale, if maximisation of the number of living organisms is the prime objective and it has the unfettered capacity to maximise, it will eventually run out of food or poison itself with its own excrement. Never mind humans, you can observe the endpoint with lemmings and yeast (which is why wine never exceeds 20% alcohol).

The consequence of "saving more people" as an ethical axiom is visible in a developed society where the National Health Service and HM Prison Service are forced by the courts to expend resources extending the lives of people who want to die, and in less developed countries where public health and tradition have increased the population to the point that it cannot be sustained by an inherently marginal agriculture.

"Do as you would be done by" looks like a more generally applicable motto, but the fact that it can't be applied to the trolley problem suggests that there may not be a single universal moral standard. And here's where my thinking became suddenly heretical and digressive:

In the absence of a universal principle, we often choose an arbitrary standard. "The man on the Clapham omnibus" serves for many legal questions but some people revert to a single figure and ask "what would Jesus do?" Sitting here, my first thought was "well, he wouldn't eat pork" (I've been refereeing a medical experiment that involves eating a standard fatty meal)...and then (apropos lemmings, I suppose) I wondered about the Gadarene swine. Who was herding pigs in Israel?

Anyway, returning to the relative value question, I had a fine example of this in my days in Civil Defence. An enthusiastic young lecturer from the Home Office was explaining post-nuclear-strike policy to our village command. The obvious priority, he said, was to feed and protect the elderly, pregnant women, and children. My boss, who had  actually led his artillery regiment in conflict, said that with at least 60% of the population dead or dying and an imminent threat of invasion, his priority was to feed and protect men aged 16 - 60 who could dig graves and fight. "With any luck, we will survive to make children and pensioners later, but the converse is impossible".
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/11/2018 13:18:02
No matter what the species or timescale, if maximisation of the number of living organisms is the prime objective and it has the unfettered capacity to maximise, it will eventually run out of food or poison itself with its own excrement. Never mind humans, you can observe the endpoint with lemmings and yeast (which is why wine never exceeds 20% alcohol).
I think you might want to revisit my answers to the what, who, when, where, why, and how questions about morality in posts #9, #18, #29, #30, #33, #35, #39, #40, #41, and #45 - #48.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 22/11/2018 19:55:43
Biological evolution has provided us with a basic and simple shortcut rule: avoid pain. This can be done through reflexes, which are very fast since they don't involve the brain. Slightly more complex rules are our instincts to seek pleasure and avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.

I can't follow that. What's the tool there and what's the goal?
The goal is what is preferred in the long run. The rules used as shortcuts are the tool.

So when you say utilitarianism is confusing the tool with the goal, how is it confusing a shortcut with what's preferred in the long run? Where's the incompatibility between the two?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 22/11/2018 20:20:16
No matter what the species or timescale, if maximisation of the number of living organisms is the prime objective and it has the unfettered capacity to maximise, it will eventually run out of food or poison itself with its own excrement. Never mind humans, you can observe the endpoint with lemmings and yeast (which is why wine never exceeds 20% alcohol).

If you've got things set up properly, everything works in cycles: composting toilets convert the worst kind of waste into beautiful soil for growing new food in - it all works fine so long as the sun's putting energy in. That kind of requirement has to govern the maximising of the population so that you don't get to a point where everyone drowns in sewage. It's about reaching a maximum stable population so that you don't end up like animals where they go through repeated population crashes as the food supply fluctuates. Of course, maximising a stable population isn't ideal either if that reduces quality of life due to lack of other resources which can be essential for well-being, such as living in a pleasant environment rather than all being crammed together in filthy towns.

Quote
"Do as you would be done by" looks like a more generally applicable motto, but the fact that it can't be applied to the trolley problem suggests that there may not be a single universal moral standard. And here's where my thinking became suddenly heretical and digressive:

Who says it can't be applied to the trolley problem? Of course it can. If you're one of the people tied to the track, you want the person to make the trolley go the other way, and if you're the one by the lever, you want to make the trolley go the other way for the person tied to the track. You can either let down one person or five, so your moral duty is to let down one rather than five.

Quote
In the absence of a universal principle, we often choose an arbitrary standard. "The man on the Clapham omnibus" serves for many legal questions but some people revert to a single figure and ask "what would Jesus do?" Sitting here, my first thought was "well, he wouldn't eat pork" (I've been refereeing a medical experiment that involves eating a standard fatty meal)...and then (apropos lemmings, I suppose) I wondered about the Gadarene swine. Who was herding pigs in Israel?

The problem we have is not in finding the universal principle, but in getting stupid people to recognise that they should be accepting it. Many people only respect authority as they're incapable of thinking for themselves, but we have lots of fake authorities. That messes everything up, and it causes all manner of conflicts and other abuses. Religions and faulty ideologies are the main barrier to progress.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 27/11/2018 10:32:33
Biological evolution has provided us with a basic and simple shortcut rule: avoid pain. This can be done through reflexes, which are very fast since they don't involve the brain. Slightly more complex rules are our instincts to seek pleasure and avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.

I can't follow that. What's the tool there and what's the goal?
The goal is what is preferred in the long run. The rules used as shortcuts are the tool.

So when you say utilitarianism is confusing the tool with the goal, how is it confusing a shortcut with what's preferred in the long run? Where's the incompatibility between the two?
To answer your question, I first need to continue my assertion about the progression of increasing complexity in the shortcut rules provided by biological evolution. With increasing complexity, more factors can be included in the calculation that generates actionable output. More complex rules can account for more steps into the future, and at some point they appear as planned actions.
We can put some milestones in the continuum of complexity of shortcut rules. The next step from instinct is emotion. Emotion includes anticipation of near-future events. We can feel sad/happy/afraid/angry before events which potentially cause pleasure/pain actually happen.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 28/11/2018 01:13:29
I have attempted to follow the wisdom of the posters herein.

I hoped to find an answer to the question.

unfortunately, I must answer the problem with a simple conclusion:

no, there is no "universal moral standard".
there can never be a "universal moral standard" until every sentient species in the universe agrees upon the standard of the combined species.

sorry.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 28/11/2018 20:45:12
unfortunately, I must answer the problem with a simple conclusion:

no, there is no "universal moral standard".
there can never be a "universal moral standard" until every sentient species in the universe agrees upon the standard of the combined species.

They don't need to agree on it - correct morality can be imposed on them regardless, and will be. If you have to live the life of a human, a dog, an alien and a cat, and if all four of these animals cross paths in some way, their interactions can be guided by a simple method. How should you as the human treat the others? How should you as the alien treat the others? As the dog and the cat, you won't have the wit to work out how to treat the others morally, but as the human and alien, you can. An AGI system can work out what's best for the dog and the cat to do too, so it can intervene to protect them from each other, just as the human and alien can intervene. Because you are to imagine living all four lives, you don't want the alien to enslave the human because you will gain less from that as an alien than you lose as the human. You don't want the dog to kill and eat the cat either for the same reason, and it doesn't matter that they can't understand morality.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 28/11/2018 21:04:02
David,

as always, your wisdom is awesome.
pardon me if I inject a bit of humor here:

what do you have against eating pussies?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 29/11/2018 19:28:08
what do you have against eating pussies?

Roof rabbits (cats) certainly have their place in times of war when food's short, but if someone fancies a bit of ***** (edit: crikey - it doesn't like the singular of pussies!) at other times, they really need to identify the owner and enter careful negotiations with them.

(Test: beaver...)
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 29/11/2018 21:33:09
amazing...first time I have noted the resident "hall monitor" restrict a word/phrase.

I have been impressed with the latitude allowed herein.
Title: Re: Is there a universal moral standard?
Post by: Colin2B on 29/11/2018 22:49:23
(edit: crikey - it doesn't like the singular of pussies!)
*****

Ha! You’re right!! It must be something built into the SMF system.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 01/12/2018 23:21:51
For me, the problem with these dilemmas is that they are fictional dilemmas with no basis in our world of cause and effect. For example, in the trolley dilemma, what caused those five guys to be tied to the train tracks? They must have done something to be in that predicament. These things don't happen by themselves.

I believe in the laws of karma.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 01/12/2018 23:24:30
"I believe in the laws of karma."

I do not...why do they keep following me? :)
Title: Re: Is there a universal moral standard?
Post by: ATMD on 01/12/2018 23:33:42
"I believe in the laws of karma."

I do not...why do they keep following me? :)

Hahaha
Title: Re: Is there a universal moral standard?
Post by: ATMD on 02/12/2018 00:29:33
The "golden rule" is subject to sampling error.

It is fairly obvious that a "family" group (could be a biological family or a temporary unit like a ship's crew) will function better if its members can trust each other. The military understand this: selection for specialist duties includes checks for honesty and recognition that you are fighting for your mates first, your country second. So the "greatest happiness for the greatest number" (GHGN) metric is fairly easy to determine where N < 50, say.

Brexit provides a fine example of the breakdown of GHGN for very large N. There is no doubt that a customs union is good for business: whether you are an importer or an exporter, N is small and fewer rules and tariffs means more profit . But if the nation as a whole (N is large) imports more than it exports, increased business flow overall means more loss, hence devaluation and reduced public budgets.  At its simplest, you could model a trading nation as consisting of just two businesses of roughly equal size and turnover Nimp ≈ Nexp. Good news for any sample of size ≤ 2N is bad news for the whole population if  Nimp > Nexp by even a small amount, hence the interesting conundrum "EU good for British business, bad for Britain".

I see nothing wrong with the Golden Rule. A business operates on profit-making rather than morality; if it does not profit, it ceases to be a business over time. It is in everyone's best interest that a business can continue to serve its customers. Otherwise, where are the customers going to get their goods and services? Customers have to be willing to give profits to businesses as an incentive to keep them operating.

Premise 1: As a seller I want to maximize profit.
Premise 2: As a buyer, I want to minimize the seller's profit (pay the lowest price).

Let's look at the Golden Rule when applied to business.

If we follow this rule to its full extent, the seller would want to give as much discount to the buyer as possible (because that would be what he would have wanted if he were the buyer). Conversely, the buyer would not ask for a single discount (because that would be what he would have wanted if he were the seller).

When the golden rule is applied, both of these actions cancel themselves out.

In the sampling error illustration, the nation exporting to Britain receives the surplus profits. Yes Britain incurs a trade deficit, but this trade deficit is exactly offset by the trade surplus of the other country. There is no change in the system, simply an aggregate flow of money from Britain to the exporting nation. The trade deficit is comparable to the profit that we as buyers are willing to give sellers so that they would continue to operate and provide us the goods and services that we need.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 02/12/2018 02:31:29
ATMD,

how do you see a socialist's view of the Golden Rule? do you think it is at variance with that of a capitalist?
Title: Re: Is there a universal moral standard?
Post by: ATMD on 02/12/2018 03:15:07
Jimbobghost,

In a hypothetical world, a capitalist can be an absolutely moral person who respects the golden rule. He does not cheat, lie, or engage in dubious practices to take advantage of his customers. He asks for a certain amount of profit to continue operating, and does not get greedy by cutting corners to get more profit. The consumer, out of his own interest, wants to support the capitalist by giving him exactly the profit that he asks for, so that the capitalist will continue providing the goods and services that he needs. The capitalist, by asking for a profit, is not breaking the Golden Rule at all. Profit gives him motivation and incentive to work hard. Without it, no goods or services are produced, and both the capitalist and the consumer lose out. Profit creates a win-win situation, and the golden rule is respected.

This is of course easy to say, but extremely difficult to practice. In our world we almost equate capitalism with greed for money and power. The capitalist wants to earn more and more, never being satisfied with what he's got. His business ethics are out of his own self-interest rather than based on the golden rule. It is good for business when it has a reputation for being honest and engages in best practices. Customer complaints and lawsuits are bad for business. Profit making becomes number one priority, and the golden rule is nowhere in the equation.

Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 02/12/2018 03:22:38
an excellent view of the golden rule from the capitalist's view.

but you did not give the socialist's view of the golden rule...it seems to me it would be different.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 02/12/2018 03:53:16
an excellent view of the golden rule from the capitalist's view.

but you did not give the socialist's view of the golden rule...it seems to me it would be different.

You are absolutely right. Of course it is different; socialism completely embraces the golden rule. I think it is much more difficult to defend capitalism in the name of the golden rule. Socialism stresses the equal distribution of resources, something capitalism does not care about. Socialism ensures that even the weak and the poor are taken care of; capitalism stresses competition and survival of the fittest.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 02/12/2018 18:49:16
ATMD,
thank you for your well stated comments.

i do not wish to push your patience with a follow up question...but because this topic involves morality, may i ask:

in a just world, in which morality (of the "good" kind) serves the greater good of the people; and socialism follows more of a philosophy of "Due unto others" (IOW those with the most should share evenly with those with the least)...how is it that the richest (i.e. capitalist) countries are thriving, while socialist societies around the world are finding their people fleeing to seek help from the wealthiest capitalist societies?

might it be that God's morality dictates that the poor should serve the wealthy?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 02/12/2018 22:17:47
Done right, socialism and capitalism should become identical. Caring capitalism (as opposed to uncaring capitalism) will continually redistribute wealth to ensure that no one misses out due to their inability to compete on the same level as the better performers. Responsible socialism (as opposed to irresponsible socialism) recognises that people need to be rewarded for hard work, innovation and good management skills - if you don't make these things pay off, people won't bother to generate more wealth and everyone ends up worse off (as in Venezuela). It's hard to find either of those out in the real world, but that doesn't mean they aren't both possible and that they aren't identical in practice. Fortunately, AGI will fix this by providing proper management of everything, out-innovating the innovators, and out-working the hardest of workers, so we'll all end up in the same boat - i.e. out of work and receiving a standard income from the state to allow us access to our fair share of resources.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 02/12/2018 22:49:40
" Fortunately, AGI will fix this by providing proper management of everything, out-innovating the innovators, and out-working the hardest of workers, so we'll all be end up in the same boat - i.e. out of work and receiving a standard income from the state to allow us access to our fair share of resources."

David,
your wit and wisdom are noted :)
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 02/12/2018 23:52:23
"God's morality" dictates whatever you want it to dictate. That is the reason for inventing gods.

There has been a lot of confusion here between socialism and a command economy - by no means synonymous. The essence of socialism is "from each according to his means, to each according to his needs". Provided that your tax system is fair and not a disincentive, and your benefit system meets all and only basic needs, you can rely on most people's desire for acquisition and comfort to propel a very pleasant and sustainable  society, as in Scandinavia.

Things go wrong when the state dictates rather than provides. Committees are fairly good at responding to demand, but not at predicting it, nor at innovating solutions or changing direction when things go wrong. The best disasters occur when the product must be presented to the dictator on a fixed date - prototype planes fall out of the sky and unripe corn is harvested to meet the command target, where in a less centralised society a test pilot or farmer can say "let's give it another week" and the shareholders realise that a duff product won't sell in a free market. Common sense (usually) prevails in our constitutional monarchy, where the "official opening date" is rarely set before the actual service commences.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 03/12/2018 00:16:00
 "Common sense (usually) prevails in our constitutional monarchy, where the "official opening date" is rarely set before the actual service commences."

Alan,
definitely sounds superior to our normal processes in the USA.

here, leaders pronounce accomplishments long before they are possible, i.e. "we will build a wall" or "i will drain the swamp", or "we will send a man to the moon", or "we will rebuild Europe".

even our private companies are prone to such declarations "there will be a new model of Tesla", or "GE will show greater profitability".

sigh...such are the weaknesses of dreamers.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 03/12/2018 23:02:40
ATMD,
thank you for your well stated comments.

i do not wish to push your patience with a follow up question...but because this topic involves morality, may i ask:

in a just world, in which morality (of the "good" kind) serves the greater good of the people; and socialism follows more of a philosophy of "Due unto others" (IOW those with the most should share evenly with those with the least)...how is it that the richest (i.e. capitalist) countries are thriving, while socialist societies around the world are finding their people fleeing to seek help from the wealthiest capitalist societies?

might it be that God's morality dictates that the poor should serve the wealthy?

There is currently no true socialist country in existence, though; I wish there were some to show us at least how they would fare economically. The "socialist" countries that are economically poor are pure dictatorships, the complete opposite of socialist values.

As Alancalverd mentioned, the Scandinavian countries and Finland (not a Scandinavian country, but a socialist country to a high degree) are quite well off economically, and they rank among the highest in terms of human welfare.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 03/12/2018 23:11:03
small countries with people of common heritage are more suited to socialism.

however, potentially the most successful form of governance in small countries might be a benevolent dictatorship.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 03/12/2018 23:16:51
"God's morality" dictates whatever you want it to dictate. That is the reason for inventing gods.

According to many theological doctrines, God's morality is absolute, not subjective. It is based on love, honesty, compassion, and all the "good" values. It is the complete opposite of murdering, lying, stealing, etc.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 03/12/2018 23:31:55
if religious belief gives someone comfort, I am happy for them.

my major concern is that the religious doctrines handed down to modern day people were written originally by superstitious people and modified/altered over time; so that to my thinking they are pretty much unreliable as a guide.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 03/12/2018 23:45:40
if religious belief gives someone comfort, I am happy for them.

my major concern is that the religious doctrines handed down to modern day people were written originally by superstitious people and modified/altered over time; so that to my thinking they are pretty much unreliable as a guide.

You are right again: religious doctrine (such as the Bible) is neither evidence of God nor a reliable guide. It is unfortunate that our current understanding of God is based on knowledge from institutional doctrines. With such an understanding, it is no wonder that belief in God is seen as an act of ignorance or gullibility.
Title: Re: Is there a universal moral standard?
Post by: guest39538 on 04/12/2018 20:55:13
I think a bad emotional response to poor morals is quite acceptable. I think others might disagree, but like I care what others think. Moral standards begin with not making innocent people suffer; what affects one affects others. Enough said - it's time I went all religious and brought back the strength of God's miracle, i.e. me.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 07/12/2018 03:56:56
we have, today, an excellent opportunity to establish a new socialist community.

the migrants now stranded at the southern USA border came from socialist countries. they saw the failures of the system, and might establish rules that overcome its weaknesses.

should the international community (perhaps presently identifiable as the UN) deed a land area somewhere on the globe for the settlement of these migrants, there would be an opportunity for them to establish a new order of Socialism.

such a new order would be free of former corrupt leaders, and perhaps then be able to prove that socialism is a workable form of governance. perhaps Marx was right...what say people; should we give socialism just one more chance to prove its viability?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/12/2018 07:02:57
The "golden rule" is subject to sampling error.

It is fairly obvious that a "family" group (could be a biological family or a temporary unit like a ship's crew) will function better if its members can trust each other. The military understand this: selection for specialist duties includes checks for honesty and recognition that you are fighting for your mates first, your country second. So the "greatest happiness for the greatest number" (GHGN) metric is fairly easy to determine where N < 50, say.

Brexit provides a fine example of the breakdown of GHGN for very large N. There is no doubt that a customs union is good for business: whether you are an importer or an exporter, N is small and fewer rules and tariffs means more profit . But if the nation as a whole (N is large) imports more than it exports, increased business flow overall means more loss, hence devaluation and reduced public budgets.  At its simplest, you could model a trading nation as consisting of just two businesses of roughly equal size and turnover Nimp ≈ Nexp. Good news for any sample of size ≤ 2N is bad news for the whole population if  Nimp > Nexp by even a small amount, hence the interesting conundrum "EU good for British business, bad for Britain".

I see nothing wrong with the Golden Rule. A business operates on profit-making rather than morality; if it does not profit, it ceases to be a business over time. It is in everyone's best interest that a business can continue to serve its customers. Otherwise, where are the customers going to get their goods and services? Customers have to be willing to give profits to businesses as an incentive to keep them operating.

Premise 1: As a seller I want to maximize profit.
Premise 2: As a buyer, I want to minimize the seller's profit (pay the lowest price).

Let's look at the Golden Rule when applied to business.

If we follow this rule to its full extent, the seller would want to give as much discount to the buyer as possible (because that would be what he would have wanted if he were the buyer). Conversely, the buyer would not ask for a single discount (because that would be what he would have wanted if he were the seller).

When the golden rule is applied, both of these actions cancel themselves out.

In the sampling error illustration, the nation exporting to Britain receives the surplus profits. Yes Britain incurs a trade deficit, but this trade deficit is exactly offset by the trade surplus of the other country. There is no change in the system, simply an aggregate flow of money from Britain to the exporting nation. The trade deficit is comparable to the profit that we as buyers are willing to give sellers so that they would continue to operate and provide us the goods and services that we need.
The golden rule relies on the assumption that both parties are rational agents with compatible preferences. It doesn't work when that assumption isn't fulfilled, as in cases of one-sided love.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/12/2018 07:09:17
I'd like to share this entertaining take on moral rules. I hope you enjoy this. George Carlin - 10 Commandments
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 07/12/2018 07:49:06
Going back a paragraph or two, the problem  with communist countries has been the introduction of a command economy, where production targets and product standards are set centrally. This leads to all sorts of problems including the failure to develop or embrace new technologies (because they aren't necessary to meet the current production standard), overworking of land (because reaching today's target is essential and the price is fixed, so you can't have a "bad harvest" or rotate the crops - just pour on the fertiliser or lie about the yield) and Really Big Cockups because the project can't be delayed or raise extra funding to cover unexpected problems (so the plane falls out of the sky in front of the Great Leader).

This is quite different from socialist states where essential public services and primary industries are tax-funded or part-owned by the government. It would be difficult to class any part of Scandinavia or indeed most of western Europe as non-viable: trains run cheaply, on time, and nobody is bankrupted by medical bills; but if you have an urge to make handbags or space rockets, or grow exotic mushrooms, you can sell shares in your dream and have a go at anything you like.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/12/2018 08:17:58
We can put some milestones in the continuum of complexity of shortcut rules. The next step from instinct is emotion. Emotion includes anticipation of near-future events. We can feel sad/happy/afraid/angry before events which potentially cause pleasure/pain actually happen.
The next step from emotion is thoughtful action, which requires the system to simulate its environment in its internal memory and then choose the action with the most preferred calculated result. More complex systems give more reliable results thanks to better precision and accuracy of the models in their memory, incorporating more factors and a wider range in space and time. They can plan their actions to get the best result further into the future.
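As a minimal sketch of that simulate-and-choose loop, assuming a toy world model and preference function (both hypothetical stand-ins):
Code:
def world_model(state, action):
    # Internal simulation: predict the state that follows an action
    # (trivially simple here).
    return {"wellbeing": state["wellbeing"] + action["effect"]}

def preference(predicted_state):
    # How much the agent prefers a predicted outcome.
    return predicted_state["wellbeing"]

def choose_action(state, actions):
    # Simulate every candidate action in "internal memory", then act
    # on the one with the most preferred calculated result.
    return max(actions, key=lambda a: preference(world_model(state, a)))

state = {"wellbeing": 0}
actions = [{"name": "help", "effect": +2},
           {"name": "ignore", "effect": 0},
           {"name": "harm", "effect": -3}]
print(choose_action(state, actions)["name"])   # -> help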
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/12/2018 14:40:29
Morality tests such as the trolley problem are used to sort the priorities of moral rules based on which action leads to the more preferable conditions.
Moral rules themselves are strategies to protect conscious beings from destructive actions by other conscious beings. They are part of a multilayer protection strategy.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/12/2018 15:24:07
We can put some milestones in the continuum of complexity of shortcut rules. The next step from instinct is emotion. Emotion includes anticipation of near-future events. We can feel sad/happy/afraid/angry before events which potentially cause pleasure/pain actually happen.
The next step from emotion is thoughtful action, which requires the system to simulate its environment in its internal memory and then choose the action with the most preferred calculated result. More complex systems give more reliable results thanks to better precision and accuracy of the models in their memory, incorporating more factors and a wider range in space and time. They can plan their actions to get the best result further into the future.
The progression of increasing complexity can be seen in human development from fetus to adult. Fetuses only have reflexes. Babies have developed instincts. Toddlers may show emotions. Little kids can plan actions for results a few days ahead. Older kids can make longer-term plans, perhaps into the next few years. Adult humans can plan for the coming decades. Wise men may have plans for the coming centuries or millennia.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 07/12/2018 20:26:00
the migrants now stranded at the southern USA border came from socialist countries. they saw the failures of the system, and might establish rules that overcome its weaknesses.

They actually come from countries which were run for many decades by fascist dictatorships propped up by the US, and it's only in recent times that some of them have experimented with socialism, but the leaders of those experiments were seriously unhinged opponents of the old regimes who sought to take things to another extreme out of hatred for the previous rulers. Worse though, they've been up against more political interference in the form of a counter-productive war on drugs which has put so many guns in the hands of gangs that the police aren't in control of anything - all they do is spend their time avoiding being gunned down. That has led to the chaos from which refugees are fleeing, and none of them are experts in responsible socialism.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 07/12/2018 20:41:21
...snip…"and none of them are experts in responsible socialism."
David,

perhaps they might avail themselves of the guidance of such notable socialists as George Soros, Maxine Waters, Nancy Pelosi and the wide range of politically knowledgeable actors in Hollywood who have frequently voiced their views.
Title: Re: Is there a universal moral standard?
Post by: ATMD on 08/12/2018 11:26:00
I love George Carlin; he is considered one of the best comedians of all time.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/12/2018 12:13:39
I love George Carlin; he is considered one of the best comedians of all time.
I agree, though some of his material is considered too dark for PC culture people.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/12/2018 12:20:57
Morality tests such as the trolley problem are used to sort the priorities of moral rules based on which action leads to the more preferable conditions.
Moral rules themselves are strategies to protect conscious beings from destructive actions by other conscious beings. They are part of a multilayer protection strategy.
Unfortunately, most social experiments involving the trolley problem or its variants don't produce a scientifically objective conclusion on which option is morally correct. They merely report which option is chosen by most respondents, which may give different results when different population samples are asked at different times. It's also unclear which moral values are represented by each option.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/12/2018 21:39:49
We can put some milestones in the continuum of complexity of shortcut rules. The next step up from instinct is emotion. Emotion includes the anticipation of near-future events: we can feel sad/happy/afraid/angry before an event which could cause pleasure/pain actually happens.
The next steps from emotion are thoughtful actions, which require a system to simulate its environment in its internal memory and then choose the action with the most preferred calculated result. More complex systems give more reliable results thanks to the better precision and accuracy of the models in their memory, which incorporate more factors over a wider range in space and time. They can plan their actions to get the best result further into the future.
The progress of increasing complexity can be seen in the development of a human from fetus to adult. Fetuses only have reflexes. Babies have developed instincts. Toddlers may already show emotions. Little kids can plan actions for results a few days ahead. Older kids can make longer-term plans, perhaps into the next few years. Adult humans can plan for the coming decades. Wise men may have plans for the next centuries or millennia.
Moral rules can't be applied to fetuses or babies, since they lack the capability for thoughtful action. Any damage caused by their action or inaction is not their fault.
Extremely simplified moral rules might be applied to toddlers, although these are usually meant to protect the toddlers themselves or perhaps their younger siblings. Damage due to their actions is not considered their fault; instead, the responsibility lies with the carer who created the situation that led to the damage.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/12/2018 23:20:16
A simple version of moral rules with short-term rewards and punishments can be applied to kids. They aren't considered to have the mental capacity required to process long-term objectives.
Some cultures even devised made-up stories to make kids obey made-up rules, based on the more primitive reward and punishment system tied to emotion and instinct.
Some examples are ghost stories to keep kids away from dangerous places, fairy tales, and Santa Claus, who gives rewards to well-behaved kids.
The reward and punishment system applied by parents to their kids is an extension of the internal reward and punishment system already incorporated in kids' nervous systems, which is pain and pleasure. It can increase the kids' chances of survival. The extension widens the scope of cause-and-effect calculation for the kids' benefit beyond their current capability to process information.
This external system requires external agents to implement it by observing kids' behavior and actively giving rewards and punishments consistently, according to simple rules that have been set up and communicated to the kids. Since parents can't observe kids' behavior all the time, fictional characters might be made up so that kids feel under continuous surveillance and behave accordingly despite their parents' absence.
Some society leaders observed that those fictional characters still work even on adults, especially the less sophisticated ones. More intelligent people may play along just to avoid trouble. Religions and cults may arise from that.
Title: Re: Is there a universal moral standard?
Post by: jimbobghost on 14/12/2018 17:29:20
"Religions and cults may arise from that."

I have long held that religions and cults are synonymous. Religious beliefs start out as "cults", and if they survive condemnation and persecution, they are eventually accepted as "religions".

Belief that a man could rise from the dead, getting "clear" by paying large sums, or receiving knowledge from scrolls readable only with magic spectacles were, at the beginning of their creation, all considered cults.

Now, many accept them as true religions.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/12/2018 11:04:18
"Religions and cults may arise from that."

I have long held that religions and cults are synonymous. Religious beliefs start out as "cults", and if they survive condemnation and persecution, they are eventually accepted as "religions".

Belief that a man could rise from the dead, getting "clear" by paying large sums, or receiving knowledge from scrolls readable only with magic spectacles were, at the beginning of their creation, all considered cults.

Now, many accept them as true religions.
This video shows the difference between cult and religion with some examples.
religions are cults that survive the death of their founder
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 02/01/2019 21:09:25
I'd like to share this entertaining take on moral rules. I hope you enjoy this. George Carlin - 10 Commandments
Carlin's first commandment about honesty works most of the time, but it has limiting conditions. We should not be honest when communicating with someone doing immoral things, such as a mass shooter asking about people's hiding places, or how to fix a jamming gun. This means that there are moral rules with higher priority than honesty.
Apart from the exception above, there must be some positive value in honesty to make it widely accepted as moral guidance.
A similar limit applies to the second commandment, about not killing. Most people agree that there are exceptions to this rule, but disagreements still arise about which higher-priority moral rules justify those exceptions. Frequently debated exceptions are the death penalty, euthanasia, abortion, war, self-defense, and public safety. The trolley problems discussed in previous posts are also related to this rule.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 04/01/2019 06:59:26
Apart from the exception above, there must be some positive value in honesty to make it widely accepted as moral guidance.
In normal situations, being honest is the simplest way to communicate. Dishonesty requires additional steps of information processing.
Basically, dishonesty is an active effort to make someone acquire false information, hence make false assumptions, which in turn lead them to a different result than they expected, in favor of the dishonest party. Honest communication lets useful information be shared among society's members, helping them make plans and carry out actions to achieve their goals. Dishonesty can cancel out that advantage, and may even turn communication into a disadvantage compared to no communication at all.
Dishonesty can be revealed through investigation, but that requires resources in the form of time and effort. Hence, at the very least, honesty saves resources that can be used in other ways to achieve collective goals.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/01/2019 11:18:36
Some thinkers like Sam Harris and Matt Dillahunty try to ground objective morality in well-being, which is discussed in this video. Matt tries to define well-being starting from three foundations: life is preferred to death, health is preferred to sickness, and happiness is preferred to suffering. He also acknowledges that there are exceptions to those foundations, although he doesn't follow his argument through to identify the higher-priority rules that justify those exceptions.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/01/2019 08:37:39
life is preferred to death, health is preferred to sickness, and happiness is preferred to suffering
Some questions naturally arise from those foundations of well-being. Is there a priority among them? Which one has the highest priority? Which is the lowest? How do we determine that (what is the rule/criterion)? Is there any exception to that rule?
What if those foundations are weighed against other moral rules currently applied in societies, such as the golden rule, honesty, life preservation, justice, equality, fairness, kindness, love, altruism, utilitarianism, humanitarianism, loyalty, obedience, patriotism, nationalism, purity, etc., in a dilemmatic situation like the trolley problem? How do we determine that a rule can be justifiably violated in order to follow higher-priority rules (again, what is the rule/criterion)? Is the preference constant, or does it depend on other factors?
Can those criteria be used to evaluate other preferable behaviors which are less often tied to moral values, such as discipline, diligence, carefulness, consistency, simplicity, courage, curiosity, creativity, cleverness, rationalism, empiricism, skepticism, enthusiasm, open-mindedness, civility, politeness, empathy, sensitivity, tolerance, diversity, democracy, etc.?

Title: Re: Is there a universal moral standard?
Post by: David Cooper on 10/01/2019 18:44:26
The best way to explore morality is through thought experiments. Create a scenario and then apply moral rules to it to see if they produce outcomes that feel right (because there are no better alternatives). If they obviously fail that test, they're almost always wrong, but you'll be comparing them with some internalised method of judgement whose rules you don't consciously understand, so what feels right could be wrong. Correct morality depends on thinking the scenario through from the point of view of all the players involved in it in order to be fair to all, and if we consciously use that as our way of calculating morality as well as doing this subconsciously (where we generate a feel for what's right), the two things should be the same and will always match up.

Thought experiments cut through the waffle, showing which rules fall flat and which remain in play. Once some rules have been rejected in this way, they shouldn't keep being brought back in - they've been debunked already and shouldn't be left on the table.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/01/2019 21:55:30
The best way to explore morality is through thought experiments. Create a scenario and then apply moral rules to it to see if they produce outcomes that feel right (because there are no better alternatives). If they obviously fail that test, they're almost always wrong, but you'll be comparing them with some internalised method of judgement whose rules you don't consciously understand, so what feels right could be wrong. Correct morality depends on thinking the scenario through from the point of view of all the players involved in it in order to be fair to all, and if we consciously use that as our way of calculating morality as well as doing this subconsciously (where we generate a feel for what's right), the two things should be the same and will always match up.

Thought experiments cut through the waffle, showing which rules fall flat and which remain in play. Once some rules have been rejected in this way, they shouldn't keep being brought back in - they've been debunked already and shouldn't be left on the table.
Exactly. That's what I'll try to do next in this topic. I'll demonstrate how the universal moral rule that I've proposed previously can be used to answer the questions above.
Hence, instead of using gut feeling, which is subjective, we'll use an objective method.
We can use chess or Go as a comparison for the lives of conscious beings. Even though the rules and end goals are relatively simple, practical strategy is extremely complex. Calculating the best strategy in real life is even harder, because life is a non-zero-sum game with imperfect information and randomness.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/01/2019 21:54:39
Living a life while knowing the ultimate goal is like climbing a mountain. We can't always go straight to the top. On many occasions we need to take detours or even setbacks, but at least we know that, in general, we have to go up.

If we don't know the ultimate goal, it is more like being lost in a foggy forest on relatively flat terrain. We might go a long way in the wrong direction, or make the same mistakes several times before we finally get out, and the journey will be much less efficient.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/01/2019 11:55:28
I'll recap my assertions into the following points:
1. There exists a law of causality. Otherwise everything happens randomly, and there's no point in making plans or responding to anything. In making a plan, a goal must be set, and some rules must be defined to respond to expected situations while executing it, so the goal can be achieved effectively.
2. Moral rules only apply to conscious beings. Hence keeping conscious beings in existence is one of the highest-priority moral rules, if not the highest. If someone proposes another moral rule with even higher priority, at least one conscious being must exist to follow it; hence keeping conscious beings in existence returns as the highest priority.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/01/2019 21:59:51
Some religions assume that there are supernatural beings that will maintain human life for eternity in the afterlife. Hence the existence of conscious beings is taken for granted, and there is no point in taking action on it, unless commanded by the supernatural beings.
If we followed their logic consistently, the best strategy would include killing babies before they are able to commit sins, which would let them live happily in heaven for eternity. For religions which acknowledge inherited sin, the killing must be delayed until the babies are baptised. The killer can then seek forgiveness.
Fortunately, evolutionary processes have given us instincts to survive, so the scenarios above don't get very far into the mainstream.
Title: Re: Is there a universal moral standard?
Post by: Hadrian on 12/01/2019 22:09:49
To me it is a human construct, and therefore even if two people agree on some moral issue or other, how each of them sees it is going to be different. So I think putting a word like universal beside morality is failing to understand what it is in the first place.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/01/2019 22:43:40
To me it is a human construct, and therefore even if two people agree on some moral issue or other, how each of them sees it is going to be different. So I think putting a word like universal beside morality is failing to understand what it is in the first place.
If you limit the applicability of moral rules to humans only, then of course putting the word universal there makes it an oxymoron. Besides, you also have to define the boundaries of humanity itself, which separate human from non-human. Is a Homo sapiens fetus considered human? What about other Homo species such as Neanderthals and Denisovans? What about their hybrids with Homo sapiens, like many of us non-African people? What about the future descendants of humans who colonize Mars and evolve until their DNA is no longer compatible with present humans'?
I have mentioned in the early posts of this topic that disagreements can arise even when all parties agree on the ultimate goal and universal standard. This is due to the uncertainty of the future and the imperfect information available for calculating the best strategy to achieve the goal, which forces us to rely on Bayesian inference.
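As a minimal sketch of what I mean by Bayesian inference here (all the numbers are invented), two parties can share the same goal yet diverge, because they update the same prior belief on different evidence:

Code:
# Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E).
# Invented numbers, only to show how imperfect information shifts belief.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Both observers start at 50% confidence that strategy A serves the goal,
# but they see different evidence, so their preferred strategies diverge.
observer_1 = posterior(0.5, 0.8, 0.3)  # saw supporting evidence
observer_2 = posterior(0.5, 0.2, 0.6)  # saw conflicting evidence
print(round(observer_1, 2), round(observer_2, 2))  # 0.73 0.25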
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 09:24:51
There must be some reason why humanity is considered a high-priority moral value by human thinkers, despite the limitations I mentioned above. Currently, humans are the only known extant species to have developed formal moral rules, despite some mixing in the past with other species of the same genus, thanks to the evolutionary processes that have played out on Earth over the past few billion years. Humanity is currently the only known form of conscious being that is self-sustaining; artificial intelligence in its current form still depends on humans to stay alive.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 10:09:16
In preserving humanity we should be open-minded to the idea that our current life form can be improved to increase the probability of our survival. As scientific research in evolutionary biology tells us, if we trace back far enough, we came from ancestors who were not human. If our primate ancestors had decided that their life form was the best possible one and refused to mutate and evolve into something else, or if our bacterial ancestors had developed mechanisms to stop mutations completely, we wouldn't be here to discuss morality in the first place.
They can be seen as stepping stones or scaffolding that gave us a chance to exist. Perhaps our successors will see us the same way.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 10:37:25
We can take some lessons from the development from AlphaGo to AlphaZero. AlphaGo learnt to play the game from the experience of human players until it beat the best human at the game. AlphaZero, on the other hand, discarded that experience and started from zero. It turned out that AlphaZero was the winner.
Learning from human experience has the advantage of discarding most ineffective moves, so the calculation of the best strategy can be done efficiently. But there is a drawback: it can miss moves that don't seem advantageous until far into the later steps of the game, beyond the calculation capability of human brains.
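As a toy sketch of that drawback (the move values here are invented stand-ins for deep game outcomes): if a search is restricted to moves humans consider promising, it can miss a move whose value only shows up much deeper in the game:

Code:
# Toy illustration: pruning with human priors vs searching everything.
# The "true values" stand in for outcomes only visible deep in the game.

true_value = {"a": 1, "b": 2, "c": 9}   # "c" only pays off far ahead
human_prior = {"a", "b"}                # humans historically dismiss "c"

def best_move(moves):
    return max(moves, key=true_value.get)

print(best_move(human_prior))           # b: efficient, but misses "c"
print(best_move(true_value.keys()))     # c: found by searching from zero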
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 15:38:05
I'll recap my assertions into the following points:
1. There exists a law of causality. Otherwise everything happens randomly, and there's no point in making plans or responding to anything. In making a plan, a goal must be set, and some rules must be defined to respond to expected situations while executing it, so the goal can be achieved effectively.
2. Moral rules only apply to conscious beings. Hence keeping conscious beings in existence is one of the highest-priority moral rules, if not the highest. If someone proposes another moral rule with even higher priority, at least one conscious being must exist to follow it; hence keeping conscious beings in existence returns as the highest priority.
3. We should evaluate actions and decisions based on their effect on the fulfillment of the ultimate goal. Due to the imperfect information we have and the uncertainty of the far future, we may not be able to finish a complete calculation in time. That's why we need rules of thumb, shortcuts, or simplified calculations that speed up the result while mostly producing correct answers. Hence the calculation's output will take the form of a probability or likelihood.
4. The moral calculation should be done using the scientific method, which is objective, reliable, and self-correcting when new information is available. Good intentions executed in the wrong way will give us unintended results.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 13/01/2019 20:44:06
There is no special form of morality for humans - morality, when done correctly, is universal, applying to animals, aliens and to all sentient things. Any attempt to define morality which excludes some sentient things because they don't fit the rules of that system is wrong, as is any attempt that has a bias towards humans.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 22:18:08
There is no special form of morality for humans - morality, when done correctly, is universal, applying to animals, aliens and to all sentient things. Any attempt to define morality which excludes some sentient things because they don't fit the rules of that system is wrong, as is any attempt that has a bias towards humans.
That's what I'm trying to prove here. Thanks for your contributions to this discussion. Critical thinkers like you are what I need to help me build a convincing argument by pointing out errors, uncovering my blind spots, proposing possible alternatives, and providing valuable new information.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/01/2019 22:50:36
When dealing with a dilemmatic situation, we should take the option which has the better effect on the fulfillment of the ultimate goal, considering the available resources. Those include time, energy, matter, tools, finance, labor, space, data processing power, and knowledge or information. When the effects are equal or uncertain, we should take the option which uses fewer resources. Here we need to consider the economic law of diminishing marginal utility.
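As a toy sketch of that calculation (the utility function and all numbers are assumptions, not a settled formula), diminishing marginal utility can be modelled with a concave function such as a logarithm, so each extra unit of a resource adds less value than the previous one:

Code:
import math

# Toy expected-utility comparison under diminishing marginal utility.
# log1p is concave: the 100th unit of a resource adds less than the 1st.

def utility(resources):
    return math.log1p(resources)

def expected_utility(outcomes):
    # outcomes: list of (probability, resources remaining) pairs
    return sum(p * utility(r) for p, r in outcomes)

risky = [(0.5, 100), (0.5, 0)]  # keep everything or lose it all
safe  = [(1.0, 40)]             # keep a moderate amount for certain

print(round(expected_utility(risky), 2))  # 2.31
print(round(expected_utility(safe), 2))   # 3.71 -> safe option preferred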
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 14/01/2019 18:30:00
[Humanity] is currently the only known form of conscious being that is self-sustaining.

I think you are using a very narrow definition of conscious and a very broad definition of self-sustainable. We survive by collaboration and exploitation, and nobody on these boards has ever, to my knowledge, offered a useful definition of "conscious" that excluded any other species of plant or animal.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/01/2019 19:02:50
...nobody on these boards has ever, to my knowledge, offered a useful definition of "conscious" that excluded any other species of plant or animal.

The key thing that matters is sentience, and it isn't certain that plants aren't sentient, or even that rocks aren't - sentience may be a property of all matter, so we should consider them in our system of morality. It's unlikely that we need to worry about the feelings of most matter too much though as it's extremely unlikely that we're doing anything to push it towards greater suffering, so we should worry most about things with brains where mechanisms are likely in place to generate feelings that can lead to suffering if they're triggered in unhelpful ways. We inherited mechanisms involving pain from simpler animals, and it's likely that these mechanisms are in place all the way down to tiny worms - if worms could manage without pain, the odds are that we would never have changed over to a system with extra, unnecessary complexity involving pain. Our default position should be to assume that anything with a brain might be able to suffer, and that things with no brain (like plants) probably don't suffer from being chopped up for the pot.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/01/2019 01:37:10
[Humanity] is currently the only known form of conscious being that is self-sustaining.

I think you are using a very narrow definition of conscious and a very broad definition of self-sustainable. We survive by collaboration and exploitation, and nobody on these boards has ever, to my knowledge, offered a useful definition of "conscious" that excluded any other species of plant or animal.

You're right. It turns out to be very hard to point out what makes humans so special among other life forms that it grants them higher priority in moral rules. But still, most people will argue that if a human stranger and any other life form are on opposite branches of a trolley problem's track, they will choose to save the human. Choosing otherwise would get them branded as immoral.
I've tried to describe the continuum of consciousness in posts #74 and #104, based on the complexity of the rules a system can follow. https://www.thenakedscientists.com/forum/index.php?topic=75380.msg561788#msg561788
We can compare consciousness among systems by their capability to make plans over different time scales.
The continuum of consciousness spans from zero, as in rocks, to infinity, as in Laplace's demon.
Consciousness can even vary within a single individual, from fetus to baby, kid, adult, and old age.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/01/2019 11:36:31
AFAIK, intelligent beings only exist on Earth. Humans are the only extant species with adequate consciousness to define and follow moral rules. Only they have the technological advancement to protect themselves from foreseeable mass extinction events such as an asteroid strike or the swelling of the sun.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 15/01/2019 20:16:40
What happens if aliens turn up and apply our moral standards to us with the roles reversed? If we complain about their insistence that they matter and that we don't, they'll just tell us that we're primitive animals because we were stupid enough to consider ourselves to be superior to them, whereas if we hadn't made that mistake, they'd have recognised us as their equals. Getting morality wrong is to sign your own death warrant.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/01/2019 21:54:19
Pleasure and pain are shortcut rules that simplify the moral calculation. They are so simple that even organisms with a much lower level of consciousness than the average human can follow them: seek pleasure and avoid pain. They can be bypassed by tinkering with neurotransmitters, for instance by using drugs or liquor. Physical pain can be reduced with an ice pack.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/01/2019 22:27:14
What happens if aliens turn up and apply our moral standards to us with the roles reversed? If we complain about their insistence that they matter and that we don't, they'll just tell us that we're primitive animals because we were stupid enough to consider ourselves to be superior to them, whereas if we hadn't made that mistake, they'd have recognised us as their equals. Getting morality wrong is to sign your own death warrant.
A high level of consciousness is manifested in the form of wisdom, which includes avoiding unnecessary risks. We should avoid mutual destruction, such as what we feared during the Cold War.
If the aliens really have a high level of consciousness, they should know how to achieve the universal ultimate goal. Part of the answer is embracing diversity to avoid common modes of failure, which requires collaboration among various beings, including other intelligent beings.
Humanity itself is a product of collaboration with gut bacteria, and all multicellular organisms are products of endosymbiosis. The collaboration will include non-biological systems to maximize the probability of achieving the goal.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/01/2019 23:32:53
What makes humans special is other humans. From the point of view of every other species (except dogs) we are either food, competition for food, or predators. Nothing special. Even dogs have an equivocal attitude: one or two familiar dogs may help you hunt or protect you, but "dog eats baby" is an everyday headline and a hungry pack will happily kill an adult.

Forming packs is nothing unusual. Termites and bees have a hugely structured society that plans ahead. Ants even farm other animals. Warfare between packs is usually rational (wolves defend their hunting territory against other packs) and occasionally irrational (marauding bands of male chimpanzees attack other families for no apparent reason) but only humans kill each other at long range because they think that their chosen enemy worships a different god - or none at all.

The extent to which humans will exert themselves to make poisons like tobacco or methamphetamine, to climb ice-covered rocks, or to jump out of aeroplanes, is unparalleled. The best definition of intelligence is "constructive laziness", and it's a surprisingly rare commodity, whereas its opposite is abundant and even revered as "art" or "philosophy".

Quote
But still, most people will argue that if a human stranger and any other life form are on opposite branches of a trolley problem's track, they will choose to save the human. Choosing otherwise would get them branded as immoral.
The default is to give strangers the benefit of the doubt and, like every other animal, to give preference to our own species in the absence of any other information. But given the choice between Donald Trump and a chicken, I'd save the chicken every time.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/01/2019 03:32:31
What makes humans special is other humans. From the point of view of every other species (except dogs) we are either food, competition for food, or predators. Nothing special. Even dogs have an equivocal attitude: one or two familiar dogs may help you hunt or protect you, but "dog eats baby" is an everyday headline and a hungry pack will happily kill an adult.

Forming packs is nothing unusual. Termites and bees have a hugely structured society that plans ahead. Ants even farm other animals. Warfare between packs is usually rational (wolves defend their hunting territory against other packs) and occasionally irrational (marauding bands of male chimpanzees attack other families for no apparent reason) but only humans kill each other at long range because they think that their chosen enemy worships a different god - or none at all.

The extent to which humans will exert themselves to make poisons like tobacco or methamphetamine, to climb ice-covered rocks, or to jump out of aeroplanes, is unparalleled. The best definition of intelligence is "constructive laziness", and it's a surprisingly rare commodity, whereas its opposite is abundant and even revered as "art" or "philosophy".
Humanity can be seen as the successor of our ancestors. If we trace back far enough, they wouldn't be recognized as human. Similarly, our far-future successors may not be recognized as human. Currently, humans are the biological beings with the most advanced level of consciousness, and the gap to the next group is quite significant.
Self-preservation is one of the important shortcut rules of morality. Due to the advantage of collaboration, its coverage can be expanded to include other beings having the same (or at least a compatible) goal.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/01/2019 04:30:00
The default is to give strangers the benefit of the doubt and, like every other animal, to give preference to our own species in the absence of any other information. But given the choice between Donald Trump and a chicken, I'd save the chicken every time.
As I mentioned above, currently, humans are our only hope to prevent catastrophic events from eliminating conscious beings. Hence, the preservation of humans is in line with the universal moral rule.
A greater number of human individuals can increase the probability of achieving the ultimate goal through redundancy and, to a lesser extent, diversity as a side effect. But due to the economic law of diminishing marginal utility, at some point increasing the number of human individuals is no longer beneficial to the overall achievement of the ultimate goal. In some cases, it can even be beneficial to lower the threshold for the death penalty.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/01/2019 08:34:30
As I mentioned above, currently, humans are our only hope to prevent catastrophic events from eliminating conscious beings.
Far from it.

If you believe in consensus, then humans are responsible for catastrophic climate change that will be as disastrous as the extinction of the dinosaurs.

If you believe in science, it is clear that the absence of humans from the Chernobyl exclusion zone has allowed every native species of mammal from mice to wolves, to flourish in a garden of robust plants.

If you believe in history, you will have noted the disastrous effect of arable farming in the American dustbowl, deforestation of Easter Island, and gradual loss of freshwater habitat in Bangladesh, all due to the unlimited presence of a relatively new species (hom sap) with no significant predators.

The solution to the preservation of life on earth is fewer humans.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/01/2019 12:16:19
Currently, humans are the biological beings with the most advanced level of consciousness.

Please define consciousness. If humans represent the highest level of it, then consciousness appears to be defined by a tendency to self-harm, genocide, irrational belief, or the deliberate destruction of food to support market prices.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/01/2019 21:15:05
As I mentioned above, currently, humans are our only hope to prevent catastrophic events from eliminating conscious beings.
Far from it.

If you believe in consensus, then humans are responsible for catastrophic climate change that will be as disastrous as the extinction of the dinosaurs.

If you believe in science, it is clear that the absence of humans from the Chernobyl exclusion zone has allowed every native species of mammal from mice to wolves, to flourish in a garden of robust plants.

If you believe in history, you will have noted the disastrous effect of arable farming in the American dustbowl, deforestation of Easter Island, and gradual loss of freshwater habitat in Bangladesh, all due to the unlimited presence of a relatively new species (hom sap) with no significant predators.

The solution to the preservation of life on earth is fewer humans.

So you think fewer humans are better. How low can you go? Is zero the best? What do you propose to get there? Do you agree with the genius who makes all people stop reproducing, as I mentioned in a previous post in this topic?
How do you define what's morally better or worse, then?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/01/2019 21:31:26
Currently, humans are the biological beings with the most advanced level of consciousness.

Please define consciousness. If humans represent the highest level of it, then consciousness appears to be defined by a tendency to self-harm, genocide, irrational belief, or the deliberate destruction of food to support market prices.

I've mentioned that consciousness is multidimensional. We can compare conscious beings by how far ahead they can make plans or prepare their actions. Other key performance indicators are information processing speed, memory capacity, and reliability, which determine how well their minds represent reality, which in turn determines the success probability of achieving their goals. Their ability to filter incoming information is also important, to prevent them from making false assumptions which lead to bad decisions and unexpected results.
Humans who destroy their environment don't think very far ahead, hence their consciousness level isn't much higher than other species'. The intelligence of smart animals is often compared to that of human children at a certain age.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/01/2019 21:56:23
Almost the whole volume of the universe is nearly empty space. Hence if we want to maximize the probability of surviving, we need to adapt to live there: freely and actively, not just dormant, independent of any naturally occurring heavenly body. It doesn't mean that we must be able to live there alone and naked. We can create artificial environments such as city-sized spaceships which are self-sustaining. We can utilize symbiosis with other life forms, including non-biological ones. In short, whatever it takes to achieve the universal ultimate goal.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/01/2019 11:04:14
Instead of reducing the population by force, it would be much more effective and efficient to educate people properly. They should be introduced to logic and logical fallacies as soon as they understand language. Hopefully they can then extract maximum advantage from incoming information using their logic, while avoiding the erroneous conclusions of logical fallacies.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/09/2019 02:41:25
Here is an example to demonstrate that moral judgment is closely related to knowledge and uncertainty.
You are in a tall, large building and find a massive time bomb that cannot be moved, so it must be disarmed in place. You can see red and blue wires on the detonator, and a countdown clock showing that there are only 2 minutes left before it explodes. You are an expert in explosives, so you know for certain the following premises:
- If you cut the red wire, the bomb will be disarmed.
- If you cut the blue wire, the bomb will explode immediately, destroying the entire building and killing thousands inside.
- If you do nothing about the bomb, the timer will eventually trigger it.
Which is the most moral decision you can take, which is the least moral, and why?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/09/2019 08:20:48
If you realize that the bomber may have deliberately swapped the blue and red wires, reversing the results of cutting them, does the moral judgment change?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 07/09/2019 18:51:39
You cut both wires at the same time and discover that the rules stated as certainties are actually impossible.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/09/2019 23:03:06
You cut both wires at the same time and discover that the rules stated as certainties are actually impossible.
In electronics, you can design the priority between those triggers. In an RS flip-flop the reset command is dominant, while in an SR flip-flop it's the set command. Both are called bistable multivibrators.
If you know the configuration used in the bomb, you can be certain what will happen when both are triggered simultaneously.
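To illustrate (a behavioural toy model, not a circuit-level simulation), here is how the two priorities differ when both inputs are asserted at once:

Code:
# Toy behavioural model of a bistable latch; 'reset_dominant' decides
# which input wins when both are asserted at the same time.

def latch_step(state, set_in, reset_in, reset_dominant=True):
    if set_in and reset_in:              # both wires cut at once
        return False if reset_dominant else True
    if reset_in:
        return False
    if set_in:
        return True
    return state                         # hold the previous state

# state True = detonate signal latched
print(latch_step(False, True, True, reset_dominant=True))   # False: reset wins
print(latch_step(False, True, True, reset_dominant=False))  # True: set wins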
Title: Re: Is there a universal moral standard?
Post by: syhprum on 09/09/2019 11:48:37
Why not try to find the wires powering the timer? If that were put out of action, a more detailed examination could be made.
There are two possibilities: the timer is supplying a signal to the detonator that stops it detonating, or, when the time runs out, it sends a signal to the detonator that makes it explode. If you stop the timer you can check which it is.
There is a worrying possibility that the wires powering the counter also power the "don't detonate" signal generator, so try to find an alternative way to stop the counter!
If the timer is mechanical you could try zapping it with a CO2 fire extinguisher if one is handy. Best of luck.
If I were building this device I would incorporate a small battery in the detonator box, make the signal from the timer mean "don't explode", and use the other wire to prime the device.
You would then only have to provide a "don't explode" signal to the detonator and cut the signal from the timer.
I am assuming only DC signals are used; if one used AC signals and frequency-sensitive detectors it would be a whole new ball game.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 09/09/2019 18:45:45
Here is an example to demonstrate that moral judgment is closely related to knowledge and uncertainty.
You are in a tall, large building and find a massive time bomb that cannot be moved, so it must be disarmed in place. You can see red and blue wires on the detonator, and a countdown clock showing that there are only 2 minutes left before it explodes. You are an expert in explosives, so you know for certain the following premises:
- If you cut the red wire, the bomb will be disarmed.
- If you cut the blue wire, the bomb will explode immediately, destroying the entire building and killing thousands inside.
- If you do nothing about the bomb, the timer will eventually trigger it.
Which is the most moral decision you can take, which is the least moral, and why?

The decision will depend on other information. Is the bomb on the top floor or the ground floor? Are there any other people in the building? If this is an office block and it's empty at night, no one sane would snip either wire when they can just run out of there in the two minutes that are available.

However, let's assume the higher floors of the building are full of people who can't possibly get out in two minutes (or even be warned within two minutes), that the bomb is on the ground floor, and that the building will collapse as soon as it blows. There is nothing immoral about not risking your own death in order to have a 50:50 chance of saving lots of other people, so you are entitled to run out of there and let it blow. If AGI is making the decision though, it could lock you in with the bomb so that you don't have a choice - that would be its moral decision. You would then cut one of the wires, randomly selected.

There are some other factors though. If the person who has to run or cut a wire is more valuable to humanity than the sum total worth of all the other people in the building, AGI will not lock him/her in the room, but will order him/her to get out of there and let the building blow. If the building is full of Nazis who are attending a conference, that could well happen.
Title: Re: Is there a universal moral standard?
Post by: Halc on 09/09/2019 19:05:58
Here is an example to demonstrate that moral judgment is closely related to knowledge and uncertainty.
You are in a tall, large building and find a massive time bomb that cannot be moved, so it must be disarmed in place. You can see red and blue wires on the detonator, and a countdown clock showing that there are only 2 minutes left before it explodes. You are an expert in explosives, so you know for certain the following premises:
- If you cut the red wire, the bomb will be disarmed.
- If you cut the blue wire, the bomb will explode immediately, destroying the entire building and killing thousands inside.
- If you do nothing about the bomb, the timer will eventually trigger it.
Which is the most moral decision you can take, which is the least moral, and why?

If this topic is about universal morals, then the bomb question has not nearly enough information. Why mess with a device that has a clear purpose? It has not been stated that there is a goal to preserve the building. Maybe the bomb was put there by a demolition crew who was paid to take it down.
Suppose the building is full of puppies. It is a universal law that it is bad to damage something cute, correct? If so, you've already begged your answer. If not, how am I to know what to do with the bomb even if it has a simple 'off' switch available? The universe seems to provide no input for the situation at hand.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/09/2019 10:36:43
Why not try and find the wires powering the timer if that was put out of action a more detailed examination could be made.
There are two possibilities the timer is supplying a signal to the detonator that stops it detonating or when the time runs out sends a signal to the detonator to make it explode if you stop the timer you can check which it is.
there is a worrying possibility wires powering the counter also power the don't detonate signal generator so try and find an alternative way to stop the counter !
If the timer is mechanical you could try zapping it with a CO2 fire extinguisher if one is handy , best of luck.
If I was building this device I would incorporate a small battery in the detonator box and make the signal from the timer "don't explode" and use the other wire to prime the device.
You would only have to provide a don't explode signal from the timer and cut the signal from the timer.
I am assuming only DC signals are used if one used AC signals and frequency sensitive detectors it would be a whole new ball game
In this thread I don't want to go too deep into technical details. I think it's adequate to describe cause and effect relationships in the situation to determine which action to take to get the most desired possible result.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/09/2019 11:11:15
If this topic is about universal morals, then the bomb question has not nearly enough information. Why mess with a device that has a clear purpose? It has not been stated that there is a goal to preserve the building. Maybe the bomb was put there by a demolition crew who was paid to take it down.
Suppose the building is full of puppies. It is a universal law that it is bad to damage something cute, correct? If so, you've already begged your answer. If not, how am I to know what to do with the bomb even if it has a simple 'off' switch available? The universe seems to provide no input for the situation at hand.
Yes it is about universal morals. And yes, the situation was designed to show that moral judgement is closely related to knowledge and uncertainty.
Unfortunately, cuteness is not a universal value. Something cute to one person might not be cute to someone else.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/09/2019 11:22:11
However, let's assume the higher floors of the building are full of people who can't possibly get out in two minutes (or even be warned within two minutes), that the bomb is on the ground floor, and that the building will collapse as soon as it blows. There is nothing immoral about not risking your own death in order to have a 50:50 chance of saving lots of other people, so you are entitled to run out of there and let it blow. If AGI is making the decision though, it could lock you in with the bomb so that you don't have a choice - that would be its moral decision. You would then cut one of the wires, randomly selected.

There are some other factors though. If the person who has to run or cut a wire is more valuable to humanity than the sum total worth of all the other people in the building, AGI will not lock him/her in the room, but will order him/her to get out of there and let the building blow. If the building is full of Nazis who are attending a conference, that could well happen.
To determine the universally most moral action in a particular situation, we first need to determine the universal goal we want to achieve, and then calculate and compare the expected results of the available actions. We should take the actions expected to get us closest to the universal goal.
Someone might have good intentions when making a moral decision, but their decision may produce an undesired result if it's based on false information, such as the swapped wires of the time bomb.
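As a crude sketch with invented numbers, the bomb dilemma can be framed as a comparison of expected outcomes, and the swapped wires show how false beliefs corrupt an otherwise well-intended calculation:

Code:
# Crude expected-outcome comparison for the bomb dilemma.
# Probabilities and casualty counts are invented for illustration.

OCCUPANTS = 1000  # people who die if the bomb goes off

def expected_deaths(p_disarm):
    # You cut a wire you believe disarms the bomb with probability p_disarm;
    # if it blows instead, you die too (+1).
    return (1 - p_disarm) * (OCCUPANTS + 1)

print(expected_deaths(1.0))  # 0.0: you "know" the red wire disarms it
print(expected_deaths(0.0))  # 1001.0: the bomber swapped the wires
print(OCCUPANTS)             # 1000: do nothing, escape, the timer fires

The intention behind cutting the red wire is the same in both cases; only the truth of the information differs, and so does the result.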
Title: Re: Is there a universal moral standard?
Post by: syhprum on 10/09/2019 19:38:16
I was brought up as a technician and find designing bombs more interesting than pondering moral questions.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 02:09:59
I was brought up as a technician and find designing bombs more interesting than pondering moral questions.
I hope I can entertain you in another thread.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 02:31:21
Here is an example to demonstrate that moral judgment is closely related to knowledge and uncertainty.
You are in a tall, large building and find a massive time bomb that cannot be moved, so it must be disarmed in place. You can see red and blue wires on the detonator, and a countdown clock showing that there are only 2 minutes left before it explodes. You are an expert in explosives, so you know for certain the following premises:
- If you cut the red wire, the bomb will be disarmed.
- If you cut the blue wire, the bomb will explode immediately, destroying the entire building and killing thousands inside.
- If you do nothing about the bomb, the timer will eventually trigger it.
Which is the most moral decision you can take, which is the least moral, and why?
Let's say that you are the one who built the bomb, so you know for certain that the premises above are true. Suppose you designed the detonator as an SR flip-flop, so when both wires are cut the bomb will explode immediately. As pointed out by David and Halc, to determine the moral judgement for each option, we need information about the further consequences each brings. This thread will explore how they can be assessed if all the required information is available.
Here are some possible scenarios which could bring you to the above situation.
- You were hired by a building contractor to destroy an old building so they can build a new one. You just got the date/month wrong; perhaps you and your client used different formats.
- You are a national secret service agent ordered to destroy the enemy's headquarters. You were discovered by an enemy guard when you tried to sneak out.
- You are a mercenary hired by a terrorist organization to destroy their enemy's economic center. You are waiting for payment confirmation.
- You are a voluntary member of a terrorist organization out to destroy its enemy's economic center. You are willing to die to execute the job.

Title: Re: Is there a universal moral standard?
Post by: Halc on 11/09/2019 04:15:17
Yes it is about universal morals. And yes, the situation was designed to show that moral judgement is closely related to knowledge and uncertainty.
Unfortunately, cuteness is not a universal value. Something cute to one person might not be cute to someone else.
Agree with all of this.  Suppose we have full knowledge of the situation.  We have the uncertainty if you want it, like an even chance that cutting a wire will halt or blow the bomb.
What we don't have is the worth of what we're saving. The universe places no worth on anything.  Maybe the building is full of 50 people that would die, or maybe 50 spiders. Are humans worth more than spiders? To humans, sure, but to the universe?

To determine the universally most moral action in a particular situation, we first need to determine the universal goal we want to achieve, and then calculate and compare the expected results of the available actions. We should take the actions expected to get us closest to the universal goal.
Agree with this if a universal goal can be found, but I don't think there are objective goals. I absolutely agree that the goals should be considered first. What's good for one goal is not so good for others. The Catholic church's stance on birth control for example seems designed to bring about the demise of humanity in the shortest possible time. They don't seem to consider long term goals at all, or are counting on forcing God's hand, like that's ever worked.

Quote
Someone might have good intentions when making a moral decision, but their decision may produce an undesired result if it's based on false information, such as the swapped wires of the time bomb.
That part seems irrelevant since it cannot be helped. A person cannot be faulted for having good intentions and attempting what seemed best. It seems irrelevant twice because if he chooses to cut no wire, everybody in the building still dies, so the wrong choice just takes out our hero, but nobody else that wasn't already doomed. I think he'd not forgive himself if he didn't try, but only if attempting the disarming was the right thing to do in the first place, and we haven't determined that.

Here are some possible scenarios which could bring you to the above situation.
- You were hired by a building contractor to destroy an old building so they can build a new one. You just got the date/month wrong; perhaps you and your client used different formats.
- You are a national secret service agent ordered to destroy the enemy's headquarters. You were discovered by an enemy guard when you tried to sneak out.
- You are a mercenary hired by a terrorist organization to destroy their enemy's economic center. You are waiting for payment confirmation.
- You are a voluntary member of a terrorist organization out to destroy its enemy's economic center. You are willing to die to execute the job.
In all 4 of these cases, you're taking your orders from your employer. You have a goal, and it isn't a universal one. You do your job. If you work for someone you find immoral, then you know you're helping them do immoral acts. Most terrorists/soldiers don't consider their acts immoral.

Alan (post 2) brought up morals coming from your peers, and all the above are examples of that.
I think we need some examples where your chosen peer group has nothing to say, and you're actually faced with wanting to do the right thing and not what some group to which you belong wants you to do.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 06:53:16
Here is a talk on morality by Dr. Andy Thomson.
I think it can enhance our understanding of morality and enrich our discussion of universal morality.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 07:05:15
If the building is full of Nazis who are attending a conference, that could well happen.
You assumed that the decision maker has the information that Nazis are bad, and decided that the universe would be better off without them. Could you show how we could arrive at that conclusion?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 07:31:12
Agree with this if a universal goal can be found, but I don't think there are objective goals. I absolutely agree that the goals should be considered first. What's good for one goal is not so good for others.
That's what this thread was started for in the first place. I have tried to find one simply by answering the basic questions about morality (what, who, where, when, why, how) in my previous posts.
Perhaps the term objective morality is a bit of an oxymoron, because the word objective implies independence from any point of view, while morality can only apply to conscious beings who have exceeded a certain consciousness level or mental capacity.
An event can be evaluated as objectively true or false even when no subject with mental capacity is involved, for example when comets hit Jupiter. An action cannot be judged morally wrong when the subject lacks the mental capacity to differentiate between right and wrong (which means being able to simulate the available actions, estimate and compare the expected results, and then choose the action which gives the most desired expected result), for example when a malaria-ridden mosquito bites a human toddler. You can pee and show your genitals in public without being judged immoral if you are a baby.
That's why I prefer the term universal instead of objective: the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable to any conscious being that might exist in the universe. This restriction gives us a reason to reject nihilism, which can leave us struggling to answer the question "why don't you just kill yourself if you think that nothing really matters?"
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 12:24:15
Here is another interesting insight about terminal and instrumental goals to help us understand moral reasoning.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/09/2019 13:47:44
That's why I prefer the term universal instead of objective, which means that the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable for any conscious beings that might exist in the universe. This restriction give us a reason to reject nihilism, which can make us struggle to answer the question "why don't you just kill yourself if you think that nothing really matters?"
A universal terminal goal must be something so important that any conscious being with sufficient information should try to achieve it, to the extent that they are willing to sacrifice any other conceivable goal. As a starting point, we can compare a proposed terminal goal with something else that we usually place at high priority, such as our own life. Is there something more important than our own life?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 11/09/2019 19:28:57
If the building is full of Nazis who are attending a conference, that could well happen.
You assumed that the decision maker has the information that Nazis are bad and decides that the universe would be better off without them. Could you show how we could arrive at that conclusion?

Nazis are people who approve of killing others who are of an "impure race". Such people are so highly immoral that it is arguably immoral not to kill them: tolerating them leads to a lot of good people being killed. That's a hard one to weigh up though without a lot of careful checking and statistical analysis, and of course, the Nazis could claim that they were trying to do exactly the same thing by killing people they regarded as dangerous bigots. This is not something that people are fit to judge: it needs to be investigated by AGI which can crunch all the available data instead of small subsets of it which may be greatly biased.

Morality is completely resolved though: we know how it works. Blowing up a building with 1 good person in it will do magnitudes more harm than blowing up a building with a billion spiders in it. To work out what's moral, all you have to do is reduce a multi-participant system to a single-participant system, and then it's all just a harm:benefit calculation. Let's have two buildings: one with a billion spiders in it and one with one good person in it. Both of them will blow up unless we choose which one to sacrifice and press a button to select that. We treat this system in such a way that we imagine there is only one participant in it who will have to live the lives of all the participants in turn, so he will be the one that experiences all the suffering involved. He is not only the person in one building and the billion spiders in the other, but he is all the spiders on the planet and all the people. If we choose to blow up the building with the spiders in it, none of the other spiders on the planet care at all, and the ones that were fried hardly even noticed. They had no idea how long they could have lived, and they would have died anyway in ways that would likely have involved more suffering, not least because spiders "eat" each other (by paralysing them and then sucking them dry). If we choose to blow up the building with the person in it instead, there's no great gain from saving all those spiders, but we'll have a lot of devastated people about who knew and cared about that person who was blown up instead. Our single participant in this system would experience all that suffering because he will live the lives of all of them, and living longer lives as a billion spiders isn't much compensation.
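If it helps, here is the shape of that calculation as a little Python sketch. Every number in it is an invented placeholder; the only point is the structure: treat every sentience in the system as one individual living all the lives in turn, sum the harm for each option, and pick the option with the smaller total.
Code: [Select]
# Rough sketch of the single-participant reduction. Every sentience in
# the system is treated as one individual living all the lives in turn,
# so total harm is simply summed across them. All numbers are invented.

def total_harm(components):
    """Sum the suffering and foregone pleasure across all lives involved."""
    return sum(components.values())

# Option A: blow up the building with a billion spiders in it.
harm_a = total_harm({
    "spider deaths (near-instant, barely noticed)": 1e9 * 1e-9,
    "spider life foregone (short, grim lives anyway)": 1e9 * 5e-9,
})

# Option B: blow up the building with the one good person in it.
harm_b = total_harm({
    "decades of good human life foregone": 500.0,
    "grief of everyone who knew the person": 2000.0,
})

print("sacrifice the spiders" if harm_a < harm_b else "sacrifice the person")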
Title: Re: Is there a universal moral standard?
Post by: Halc on 12/09/2019 04:53:06
Perhaps the term objective morality is a bit of an oxymoron, because the word objective implies independence from any point of view, while morality can only apply to conscious beings who have exceeded a certain level of consciousness or mental capacity.
Then I don't know what you're asking in this topic if not for a standard that is independent of any particular point of view.

As for conscious beings, I'm not sure how you define that, or how it's relevant.  The usual definition is 'just like me', meaning it isn't immoral to mistreat the aliens when they show up because they're not just like us.

An example of moral beings (without requirement of having consciousness or mental capacity) is the individual cells of any creature's body, which work selflessly as a team for the benefit of the group.  There isn't a code that even begins to resemble the usual 10 commandments, but it does resemble the whole 'love thy brother like thyself' going on. Humans, for all their supposed intelligence, cannot see beyond themselves and work for a greater goal, or even name the goal for that matter. I'm just saying that if the aliens come, they'll notice that fact before they notice all our toys.

Quote
An action cannot be judged to be morally wrong when the subject doesn't have the adequate mental capacity to differentiate between right and wrong
So the subject doesn't know if what it's doing is right or wrong.  Does this epistemological distinction matter? If some action is wrong, then doing that action is wrong, period, regardless of whether the thing doing it knows it's wrong or not.

What does wrong mean, anyway?  Suppose I do something wrong, but don't know it. What does it mean that I've done a wrong thing? Sure, if there is some kind of consequence to be laid on me due to the action, then there's a distinction. I take the wrong turn in the maze and don't get the cheese. That makes turning left immoral, but only if there's a cheese one way and not the other? Just trying to get a bit of clarity on 'right/wrong/ought-to'.

Quote
You can pee and expose your genitals in public without being judged as immoral if you are a baby.
Showing genitals is not a peer-group specific thing?  Seems unlikely given the 99% majority of beings that are unconcerned with it, and even humans decorate just about anything with plant genitals (flowers). Sorry to jump on this, but I find it an unlikely candidate for a universal rule.

Quote
That's why I prefer the term universal instead of objective, which means that the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable to any conscious being that might exist in the universe.
Is a self-driving car conscious?  It certainly has better awareness than a human, and carries moral responsibility for its occupants, and makes real decisions based on such values. But the values are programmed in (not even learned like some AI systems), and are not drawn from 'the universe'.

Quote
This restriction gives us a reason to reject nihilism, which can make us struggle to answer the question "why don't you just kill yourself if you think that nothing really matters?"
A nihilist doesn't deny that things matter, just that they don't matter universally.  My life definitely matters to me and mine and those with whom I interact. But I don't think the universe gives a hoot about my existence. Not sure if that makes me a nihilist.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/09/2019 06:36:40
Let's see what the dictionary says about nihilism. I just googled it
Quote
noun
the rejection of all religious and moral principles, in the belief that life is meaningless.
synonyms:   negativity, cynicism, pessimism
PHILOSOPHY
extreme skepticism maintaining that nothing in the world has a real existence.
HISTORICAL
the doctrine of an extreme Russian revolutionary party c. 1900 which found nothing to approve of in the established social order.   
Title: Re: Is there a universal moral standard?
Post by: Halc on 12/09/2019 12:21:26
Let's see what the dictionary says about nihilism. I just googled it
Quote
noun
the rejection of all religious and moral principles, in the belief that life is meaningless.
synonyms:   negativity, cynicism, pessimism;
I'm not one then, since I very much think there are moral principles, and some of them religious. I've already stated that life has meaning for me. I just don't think those principles that obviously exist to me are universal.  They're just a product of my parents and other people around me.

I said a lot in that post, this nihilist thing being only a side thought, since it concerns my personal beliefs and not any argument for or against a universal standard.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 02:22:34
Nazis are people who approve of killing others who are of an "impure race". Such people are so highly immoral that it is arguably immoral not to kill them: tolerating them leads to a lot of good people being killed. That's a hard one to weigh up though without a lot of careful checking and statistical analysis, and of course, the Nazis could claim that they were trying to do exactly the same thing by killing people they regarded as dangerous bigots. This is not something that people are fit to judge: it needs to be investigated by AGI which can crunch all the available data instead of small subsets of it which may be greatly biased.
To a conscious being who has perfect knowledge of the relevant circumstances, including an understanding of the universal terminal goal and moral standards, every immoral action or behavior can be identified as misinformation which leads to misplaced priorities. This means that the immoral actors choose actions which consequently deter the efforts to achieve the universal terminal goal. Let's try to identify which priorities are misplaced in the following immoral actions:
- holocaust
- Joshua genocide
- Serbian genocide
- Pol Pot genocide
- Aztec human sacrifice
- 9/11
- Mumbai attack
- Ted Bundy rape and murder
Title: Re: Is there a universal moral standard?
Post by: Halc on 17/09/2019 02:37:42
That's David's quote, not mine.  I would not have said that.

Edit: Thanks for fixing it.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 04:11:29
Morality is completely resolved though: we know how it works. Blowing up a building with 1 good person in it will do magnitudes more harm than blowing up a building with a billion spiders in it. To work out what's moral, all you have to do is reduce a multi-participant system to a single-participant system, and then it's all just a harm:benefit calculation. Let's have two buildings: one with a billion spiders in it and one with one good person in it. Both of them will blow up unless we choose which one to sacrifice and press a button to select that. We treat this system in such a way that we imagine there is only one participant in it who will have to live the lives of all the participants in turn, so he will be the one that experiences all the suffering involved. He is not only the person in one building and the billion spiders in the other, but he is all the spiders on the planet and all the people. If we choose to blow up the building with the spiders in it, none of the other spiders on the planet care at all, and the ones that were fried hardly even noticed. They had no idea how long they could have lived, and they would have died anyway in ways that would likely have involved more suffering, not least because spiders "eat" each other (by paralysing them and then sucking them dry). If we choose to blow up the building with the person in it instead, there's no great gain from saving all those spiders, but we'll have a lot of devastated people about who knew and cared about that person who was blown up instead. Our single participant in this system would experience all that suffering because he will live the lives of all of them, and living longer lives as a billion spiders isn't much compensation.
I know from my Twitter feed that many people are willing to sacrifice a trophy hunter to save his prey. They cheered when a matador was gored by the bull.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 04:13:31
That's David's quote, not mine.  I would not have said that.
I used the quote-selected command from the action button. I didn't realize that it gives the wrong attribution.
I've fixed it manually.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 04:24:46
Then I don't know what you're asking in this topic if not for a standard that is independent of any particular point of view.

As for conscious beings, I'm not sure how you define that, or how it's relevant.  The usual definition is 'just like me', meaning it isn't immoral to mistreat the aliens when they show up because they're not just like us.
As I said in the post, I restricted the use of moral rules to conscious beings. You cannot judge an action as immoral from the point of view of viruses, for instance.
Here is what I said in my post following the statement that you quoted:
Quote
That's why I prefer the term universal instead of objective, which means that the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable to any conscious being that might exist in the universe. This restriction gives us a reason to reject nihilism, which can make us struggle to answer the question "why don't you just kill yourself if you think that nothing really matters?"

Without a universal terminal goal, we cannot set up universal moral rules. This will lead us to moral relativism. In its most extreme form, you cannot judge any action as immoral, because every action is right, at least from the standpoint of the actor.

https://en.wikipedia.org/wiki/Moral_relativism
https://en.wikipedia.org/wiki/Moral_universalism
https://en.wikipedia.org/wiki/Ideal_observer_theory
Title: Re: Is there a universal moral standard?
Post by: Halc on 17/09/2019 05:09:26
I used the quote-selected command from the action button. I didn't realize that it gives the wrong attribution.
I've fixed it manually.
You've found a legit bug.  I will bring it up.
If you select text from one person's post and then click quote-selected from the menu of a second post, it quotes the first text as if it were written by the 2nd.  The system shouldn't do that obviously.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 05:18:38
As for conscious beings, I'm not sure how you define that, or how it's relevant.  The usual definition is 'just like me', meaning it isn't immoral to mistreat the aliens when they show up because they're not just like us.

I have answered that question here https://www.thenakedscientists.com/forum/index.php?topic=75380.msg559662#msg559662
I think that the definition you mentioned is not as usual as you think.

Quote
An example of moral beings (without requirement of having consciousness or mental capacity) is the individual cells of any creature's body, which work selflessly as a team for the benefit of the group.  There isn't a code that even begins to resemble the usual 10 commandments, but it does resemble the whole 'love thy brother like thyself' going on. Humans, for all their supposed intelligence, cannot see beyond themselves and work for a greater goal, or even name the goal for that matter. I'm just saying that if the aliens come, they'll notice that fact before they notice all our toys.
IMO, they are just automatons which lack the capability to estimate the consequences of their actions. They act/react that way just because it helps them to survive, or at least doesn't lead them to extinction. They don't follow moral rules, hence their actions are not moral actions.
Our philosophers have tried to answer the questions of the greater goal and moral rules that try to help achieve that. I have proposed my answer in previous post here https://www.thenakedscientists.com/forum/index.php?topic=75380.msg565365#msg565365
Quote
I'll recap my assertion into the following points:
1. There exists a law of causality. Otherwise everything happens randomly, hence there's no point in making plans or responding to anything. In making a plan, a goal must be set, and some rules must be defined to respond to expected situations while executing it, so the goal can be achieved effectively.
2. Moral rules only apply to conscious beings. Hence keeping conscious beings in existence is one of the highest-priority moral rules, if not the highest. If someone can propose another moral rule with even higher priority, it is still necessary to have at least one conscious being to follow it. Hence keeping conscious beings in existence comes back as the highest priority.
3. We should evaluate actions/decisions based on their effect on the fulfillment of the ultimate goal. Due to the imperfect information that we have and the uncertainty of the far future, we may not be able to finish a complete calculation in time. That's why we need rules of thumb, shortcuts or simplified calculations to speed up the result while mostly producing correct answers. Hence the calculation output will take the form of a probability or likelihood.
4. The moral calculation should be done using the scientific method, which is objective, reliable, and self-correcting when new information is available. Good intentions pursued in the wrong way will give us unintended results.
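To make point 3 above concrete, here is a minimal sketch in Python of what I mean by a probability-based moral calculation. The actions, probabilities and effect scores are all hypothetical placeholders, just to show the shape of the calculation.
Code: [Select]
# Minimal sketch of point 3: with imperfect information, score each
# available action by its expected effect on the ultimate goal,
# weighting each possible outcome by its estimated probability.
# All actions, probabilities and scores below are hypothetical.

def expected_effect(outcomes):
    """outcomes: list of (probability, effect_on_goal) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * effect for p, effect in outcomes)

actions = {
    "act now on partial data":   [(0.7, +10.0), (0.3, -20.0)],
    "wait and gather more data": [(0.9, +6.0), (0.1, -2.0)],
}

# Pick the action with the best expected effect; a rule of thumb is just
# a cached, simplified version of this calculation.
best = max(actions, key=lambda a: expected_effect(actions[a]))
print(best, expected_effect(actions[best]))  # "wait and gather more data" 5.2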
Title: Re: Is there a universal moral standard?
Post by: Halc on 17/09/2019 05:33:03
Quote from: Halc
As for conscious beings, I'm not sure how you define that, or how it's relevant.
As I said in the post, I restricted the use of moral rules to conscious beings. You cannot judge an action as immoral from the point of view of viruses, for instance.
If a virus does something against a universal moral code, then it has done something wrong, even if it lacks the ability to know about it.  Consciousness seems to play no role.  A frog, for instance, seems conscious of water and flies and such, but like the virus, it probably has little perception of universal right and wrong. The addition of consciousness seems not to have helped it with this perception.
So they end up doing wrong things.  So what?  It seems to concern neither the frog nor the virus that they have done so.

Quote
Without a universal terminal goal, we cannot set up universal moral rules.
Sounds reasonable.
Quote
This will lead us to moral relativism. In its most extreme form, you cannot judge any action as immoral, because every action is right, at least from the standpoint of the actor.
I beg to differ. I've done things I know are not right, even from my own standpoint. I feel free to judge myself and my peers, but not according to universal rules, because I am not aware of any, just as I am not aware of any universal terminal goals.
Title: Re: Is there a universal moral standard?
Post by: Halc on 17/09/2019 06:15:29
As for conscious beings, I'm not sure how you define that.  The usual definition is 'just like me'...

I have answered that question [in post 38].  I think that the definition you mentioned is not as usual as you think.
You define it there as a spectrum (and I agree with that), but above you make it a binary thing where some critical threshold needs to be crossed.  Where is that threshold? Just above a virus? No? Just humans?  If so, how then is your definition not the usual one I mentioned?

Quote
IMO, [cells of a body] are just automatons which lack the capability to estimate the consequences of their actions.
The consequence of immoral action is impairment/death of the group, so I think they're quite aware of the moral code, the need to work as a team.  Yes, they're automatons, as is any physical construct. I'm just a more complex one than a cell, but one far less in tune with any terminal goals of the larger group. I'm far less moral than are my cells.

Quote
They act/react that way just because it helps them to survive.
They don't follow moral rules, hence their actions are not moral actions.
What are moral rules except rules that help the survival rate of the group that defines the morals?  That's not universal, that's morals of the group.  Cells follow morals of the body and not anything larger than that.

I'm not trying to be contradictory, just trying to illustrate the lack of difference between a human and anything else, and the complete lack of a code that comes from anywhere other than the group with which you relate. Yes, I'm a relativist, in far more ways than just moral relativism.

A pretty good rule that applies, well, at least to things kind of like us, seems to go along the lines of: Being true to your kind trumps being true to yourself.  But even that falls apart as a universal rule. There are things that don't have 'kind'. The rule only seems to fit well with K-strategists, not R-strategists. That doesn't bode well for a hypothesis of a universal standard.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 09:01:41
I beg to differ. I've done things I know are not right, even from my own standpoint. I feel free to judge myself and my peers, but not according to universal rules, because I am not aware of any, just as I am not aware of any universal terminal goals.
How do you judge if an action is morally right or wrong? What is your highest priority? Is there something more important than your own life that you are willing to sacrifice for it?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 11:04:13
You define it there as a spectrum (and I agree with that), but above you make it a binary thing where some critical threshold needs to be crossed.  Where is that threshold? Just above a virus? No? Just humans?  If so, how then is your definition not the usual one I mentioned?
Not all moral rules have the same level of complexity. Some moral rules are simple enough to be followed by kids. We can't expect a moral agent to follow moral rules whose complexity is beyond their capability to comprehend.
As I mentioned before, humans can have different levels of consciousness. Even a single individual passes through various levels of consciousness across life stages: baby, kid, adult, and elderly. Brain damage can also alter consciousness.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/09/2019 11:47:56
What are moral rules except rules that help the survival rate of the group that defines the morals?  That's not universal, that's morals of the group.  Cells follow morals of the body and not anything larger than that.

I'm not trying to be contradictory, just trying to illustrate the lack of difference between a human and anything else, and the complete lack of a code that comes from anywhere other than the group with which you relate. Yes, I'm a relativist, in far more ways than just moral relativism.
Have you tried to expand the group that defines the moral rules? Can you find a moral rule that's applicable to all human beings? I have proposed to expand the group to all conscious beings if we want to find universal moral rules. I have also excluded non-conscious beings from the group that defines moral rules, so that the rules don't fall back to just "anything goes".
As I mentioned earlier, consciousness appears as a spectrum, from 0 as in rocks up to infinity as in Laplace's demon. Historically, the highest level of consciousness among beings has been increasing with time. I regard humanity, as well as its ancestors (apes, mammals, vertebrates, eukaryotes), as scaffolding for building higher levels of consciousness. It just happens that until recently, humans have had the highest consciousness among life forms. Who knows what humans will evolve into in the distant future. But universal moral rules should still apply to them.
As a relativist, do you think that the perpetrators of 9/11 were moral in their own respect because they followed the moral rules of their group? What about human sacrifice by the Aztecs? The Holocaust by the Nazis? Slavery by the Confederacy? Human cannibalism by some cultures?
Title: Re: Is there a universal moral standard?
Post by: syhprum on 17/09/2019 20:05:25
I think the Saudis who perpetrated the 9/11 incident were moral in as much as they were prepared to sacrifice their own lives for what they considered the greater good.
If the army group who were dissatisfied with the way Hitler was conducting the war and wished to replace him had been prepared to make a similar sacrifice, the war could well have come to a better conclusion.
Title: Re: Is there a universal moral standard?
Post by: Halc on 18/09/2019 00:37:13
How do you judge if an action is morally right or wrong?
I've been taught them by parents, community, employer, etc.
Quote
Is there something more important than your own life that you are willing to sacrifice for it?
Of course. I'm a parent for one thing.

Have you tried to expand the group that defines the moral rules?
More than most do, yes.
Quote
Can you find a moral rule that's applicable to all human beings?
One that they'd all agree on, probably not. One that they should, yes. But it's still applicable only to humans or something sufficiently similar. I've tried to expand the group past the limited 'just humans'. There are higher goals than human goals. Interesting to explore them.

What if the ebola virus were as sentient as us?  What would the moral code for such a species be like? Would it be wrong for them to infect and kill a creature? Only if it's a human? I read a book that included a sentient virus, and also an R-strategist intelligence and more. Much of the storytelling concerned the conflicts between the morals each group found obvious.

Quote
I have proposed to expand the group to all conscious beings
Why the word 'being'? What distinguishes a being from a non-being? Sure, it seems pretty straightforward with the sample of one that we have (it's a being if you're related to it), but that falls apart once we discover a new thing on some planet and have to decide if it's a being or not.

Quote
Historically, the highest level of consciousness among beings has been increasing with time.
The Fermi paradox wouldn't be there if that were true.  Yes, it appears nothing on earth has been as sentient as us.  Can't say 'highest consciousness', because we've no measure of that. There's plenty of species with larger brains or better senses, either of which arguably makes them more conscious.

Quote
Who knows what humans will evolve into in the distant future.
If we survive the Holocene extinction event, who knows indeed. Intelligence is currently trending downward, but that may reverse if it once again carries an advantage.

Quote
As a relativist, do you think that the perpetrators of 9/11 were moral in their own respect because they followed the moral rules of their group?
Yes, they considered their acts as the ultimate moral act, as did those that taught them it. They laid down their lives for this greater goal.
I am one of those people that question the teachings of my peers. Said teachings contradict themselves, and seem to actually be designed to maximize suffering in the long run. As I said, few consider long-term goals.

Quote
What about human sacrifice by the Aztecs? The Holocaust by the Nazis? Slavery by the Confederacy? Human cannibalism by some cultures?
I am not very familiar with the teachings of all these cultures, but one culture oppressing some other culture has been in the moral teachings of most groups I can think of, especially the religious ones. My mother witnessed the Holocaust and currently votes for it happening again. It only looks ugly in hindsight, and only if you lose. Notice everyone vilifies Hitler, but Lenin and Stalin get honored tombs, despite killing far more Jews and others they felt were undesirables. Translation: It is immoral to lose.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/09/2019 04:15:50
Why the word 'being'? What distinguishes a being from a non-being? Sure, it seems pretty straightforward with the sample of one that we have (it's a being if you're related to it), but that falls apart once we discover a new thing on some planet and have to decide if it's a being or not.
You can use other words such as 'things' if you'd like. The main criterion is that they exist in objective reality, which can be verified by other intelligent things, not just in imagination. Hence if you discover a new thing on some planet, you can be sure that it is a thing, whether or not it is intelligent.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/09/2019 04:17:59
I've been taught them by parents, community, employer, etc.
How do you resolve it when some of their teachings contradict each other?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/09/2019 06:16:19
What if the ebola virus were as sentient as us?  What would the moral code for such a species be like? Would it be wrong for them to infect and kill a creature? Only if it's a human? I read a book that included a sentient virus, and also an R-strategist intelligence and more. Much of the storytelling concerned the conflicts between the morals each group found obvious.
I've said that consciousness is multidimensional. But one of the most important factors is the capability to make plans for the future. This requires the agents to run a simulation of objective reality in their minds, which means they have body parts dedicated to making arrangements in such a way as to represent their environment, including other agents. Agents with self-awareness have the capability to conceive a representation of themselves in their minds.
Hence there would be some minimum amount of memory required to do that. If someday it can be demonstrated that some viruses can reach that level of complexity, then so be it. It is in line with a diversity strategy, which is meant to prevent common mode failure. But if they show a tendency to destroy other conscious agents, especially those with higher levels of consciousness, they must be fought. If possible, we should try to eliminate the destructive tendency only. Otherwise, the viruses' lives can be seen as collateral damage. A similar strategy should apply when dealing with other groups with destructive tendencies.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/09/2019 07:18:10
So the subject doesn't know if what it's doing is right or wrong.  Does this epistemological distinction matter? If some action is wrong, then doing that action is wrong, period, regardless of whether the thing doing it knows it's wrong or not.

What does wrong mean, anyway?  Suppose I do something wrong, but don't know it. What does it mean that I've done a wrong thing? Sure, if there is some kind of consequence to be laid on me due to the action, then there's a distinction. I take the wrong turn in the maze and don't get the cheese. That makes turning left immoral, but only if there's a cheese one way and not the other? Just trying to get a bit of clarity on 'right/wrong/ought-to'.
Actions with bad consequences are wrong. Actions known to have bad consequences, but done anyway, are immoral.
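Here is a minimal sketch of that distinction in Python (the function and labels are my own, just to make the rule explicit):
Code: [Select]
# Minimal sketch of the wrong-versus-immoral distinction above.
# "Wrong" depends only on the consequences; "immoral" additionally
# requires that the actor knew the consequences would be bad.

def judge(bad_consequences: bool, actor_knew: bool) -> str:
    if not bad_consequences:
        return "not wrong"
    return "immoral" if actor_knew else "wrong, but not immoral"

print(judge(bad_consequences=True, actor_knew=False))  # e.g. the mosquito
print(judge(bad_consequences=True, actor_knew=True))   # an informed actor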
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/09/2019 10:33:35
I am not very familiar with the teachings of all these cultures, but one culture oppressing some other culture has been in the moral teachings of most groups I can think of, especially the religious ones. My mother witnessed the Holocaust and currently votes for it happening again. It only looks ugly in hindsight, and only if you lose. Notice everyone vilifies Hitler, but Lenin and Stalin get honored tombs, despite killing far more Jews and other undesirables. Translation: It is immoral to lose.
Morality would indeed look clearer in retrospect. But it is possible to make moral judgments in advance, provided that we have a sufficient amount of information, so that we can predict with sufficient accuracy and precision what would happen if an action is done. A conscious being at the level of Laplace's demon could judge moral actions universally.
Immoral actions might be more tolerable within closer in-kind groups. But with increasing integration, communication and globalization, the zeitgeist moves and people learn to see from an out-group point of view. Hence even if immoral actions by the winner might be tolerated now, they might not be tolerated anymore in the future. Just look at slavery, patriarchy, apartheid, torture, etc.
Title: Re: Is there a universal moral standard?
Post by: Halc on 18/09/2019 12:33:23
You can use other words such as 'things' if you'd like to.
I think 'agent' is a good word.  A rock has no particular agency. It needs the ability to make a choice and act on it. A slave arguably has no agency. If it does exactly as it is instructed, its moral responsibility rests on the instructor, not on the slave.
Quote
How do you resolve it when some of their teachings contradict each other?
By concluding that morals are not universal.  For one, a higher goal takes priority over a lower one when they indicate contradictory choices to be made.  Even simple devices work that way.
Quote
Actions with bad consequences are wrong. Actions known to have bad consequences, but done anyway, are immoral.
In the case above, the high priority goal makes one choose an action that violates the lower priority goal, hence an action that is bad (for a greater good).  Your statement above asserts that such actions are immoral.  For instance, I injure a child (bad consequence) as a surgeon to prevent that child from dying of appendicitis. Your statement at face value says this is an immoral action. Better to do nothing and let the child die (worse consequence, but not due to explicit action on your part) leaving you morally intact, except doing nothing is also a choice. Maybe get a different surgeon to do the immoral thing of saving this kid's life.
Title: Re: Is there a universal moral standard?
Post by: Halc on 18/09/2019 13:29:56
If someday it can be demonstrated that some viruses can reach that level of complexity, then so be it.
I'm not asserting that this is the case (although some use the facilities of the infected host, as does rabies).  You're missing the point of the question. Suppose a species has all these facilities, and knows that it is effectively a parasitical pestilence. Should that knowledge affect its choices, taking priority over its inherent nature?

Quote
But if they show a tendency to destroy other conscious agents, especially those with higher levels of consciousness, they must be fought.
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us, because we're not as conscious as they are.  There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement.  If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Title: Re: Is there a universal moral standard?
Post by: Halc on 18/09/2019 14:24:33
Actions with bad consequences are wrong.
Yes, by definition, actions with bad consequences are wrong.  How in any way is this relevant to the discussion?  If a consequence is deemed bad only by some group, then it is wrong only relative to that group.  If it is bad period, then it's universal, but you've made no argument for that case with the statement here.  I'm trying to get the discussion on track.

The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been losers, never something that your people are doing right now, like say employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid since so much is produced via various methods that a typical person would find inhumane, and hard to see since you're paying somebody else to do (and conceal from you) the actual act.  At least that is an example of something done by the winner.

You also need to decide if consciousness is relevant in a continuous or binary way.  If it's continuous, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult.  If it's a threshold thing (do what you want to anything below the threshold, but not above it), then it needs a definition.  A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.

For instance, a human embryo obviously has far less consciousness than does a pig, so eating pork is more wrong than abortion by this level-of-consciousness argument, be it a spectrum thing or binary threshold.
Similarly, it's OK to kill a person under anesthesia because they're not conscious at the time, and will not suffer for it.  These are some of the reasons the whole 'conscious' argument seems to fall apart.
Title: Re: Is there a universal moral standard?
Post by: syhprum on 18/09/2019 17:09:05
Halc

" despite killing far more jews and other undesirables." I certainly agree that the number of Jews that died as a result of the actions of Lenin and Stalin was as great as the number whose deaths were caused by Hitler and the NAZI regime but you seem to have labelled them as "undesirables" I think an edit might be appropriate
Title: Re: Is there a universal moral standard?
Post by: Halc on 18/09/2019 17:39:33
Halc

" despite killing far more jews and other undesirables." I certainly agree that the number of Jews that died as a result of the actions of Lenin and Stalin was as great as the number whose deaths were caused by Hitler and the NAZI regime but you seem to have labelled them as "undesirables" I think an edit might be appropriate
Changed it to "others they felt were undesirables", which is how I meant it.
I am currently witness to the negative propaganda campaign going on to blame all our problems on the non-citizen Hispanics, thus justifying a policy to arrest them and put them in camps, permanently separate families, close successful tax-paying businesses, etc. just as was done by Nazi Germany.  It is very similar to the propaganda used by Germany to get the general population to accept doing the same thing.  My mother still has an aversion to Jews due to this propaganda, despite not being from an Axis country.  They fed the propaganda to the occupied countries as well, and even the USA (the supposed height of morality in those days, per the history books) sent shiploads of Jews back to Europe where they were promptly killed.

Sorry for the rant.  It's actually quite off topic and has little to do with if this sort of racism is wrong in any universal sense.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/09/2019 03:26:48
By concluding that morals are not universal.  For one, a higher goal takes priority over a lower one when they indicate contradictory choices to be made.  Even simple devices work that way.
How do you determine which priority is the higher one? Have you found the highest one?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/09/2019 04:14:35
In the case above, the high priority goal makes one choose an action that violates the lower priority goal, hence an action that is bad (for a greater good).  Your statement above asserts that such actions are immoral.  For instance, I injure a child (bad consequence) as a surgeon to prevent that child from dying of appendicitis. Your statement at face value says this is an immoral action. Better to do nothing and let the child die (worse consequence, but not due to explicit action on your part) leaving you morally intact, except doing nothing is also a choice. Maybe get a different surgeon to do the immoral thing of saving this kid's life.
I have said in my previous posts that universal morality is based on the eventual result. Some actions are morally better than others, and we should not fall into a false dichotomy. Performing surgery on the child is morally better than letting them die. It would be morally better still if you could perform a medical procedure which does not injure the child.
Title: Re: Is there a universal moral standard?
Post by: Halc on 19/09/2019 04:36:08
How do you determine which priority is the higher one?
Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.

Performing surgery on the child is morally better than letting them die.
While I agree, how do you know this is true?  I can argue that it is better to let the kid die if there is a higher goal to breed humans resistant to appendix infections, like the Nepalese have done. I can think of other goals as well that lead to that decision.  There seems to be no guidance at all from some universal moral code. I don't think there is one of course.

I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine.  My wife likewise would only have survived until the birth of our first child.  The human race is quite a wreck since we no longer allow defects to be eliminated, and we're not nearly as 'finished' as most species that have had time to perfect themselves to their niche.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 19/09/2019 18:13:35
We'll be able to correct defects by gene editing in the future, so there's no need for any approach like eugenics to improve the species.

As for a universal moral code, I've already provided it several times in this thread without anyone appearing to notice. Morality is mathematics applied to harm management and it's all about calculating the harm:benefit balance. It only applies to sentient things, but it applies to all of them, fleas and intelligent aliens all included. It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
Title: Re: Is there a universal moral standard?
Post by: Halc on 19/09/2019 19:48:45
We'll be able to correct defects by gene editing in the future, so there's no need for any approach like eugenics to improve the species.
Gene editing is currently considered very unethical, but then not as much as the passive eugenics I suggested, so point taken.

Quote
As for a universal moral code, I've already provided it several times in this thread without anyone appearing to notice. Morality is mathematics applied to harm management and it's all about calculating the harm:benefit balance.
OK, that's at least an attempt to word things in some universal manner.
Is there a way to compute harm without being relative to a peer group? Humans seem to be causing a lot more harm than benefit, with an estimated genocide of 80% of the species on the planet in the Holocene extinction event.  Any harm to a species like that would probably be viewed as a total benefit by all these other species.

On the flip side of that, there is precedent to what humans are doing: a prior extinction event caused by one new species, and all the complexity of life we know today is descended from that new species, or from those that managed to adapt to the new poisoned environment.
Quote
It only applies to sentient things, but it applies to all of them, fleas and intelligent aliens all included.
You list a flea as sentient, which is a refreshing contrast to the usual 'just like me' definition. Why?  Perhaps since it has a rudimentary mechanism to make choices. That's why I've used the word 'agent' in prior posts.  A rock is not considered an agent of choice. A tree might be, but it gets difficult to justify it. How about a self-driving car?  It meets the definition of slave. Does a true slave carry any moral responsibility?  I almost say no.

Does the species need to consider the harm done to the environment/other species, or only harm done to its own kind?  What if it has no concept of 'species' or 'kind', or possibly not even 'individual' or 'agent'?

Quote
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
I haven't read the entire thread.  What has the response to this been? It's a good attempt. It's just that harm seems subjective.  What's good for X is not necessarily good for Y, so its measure seems context-dependent.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 19/09/2019 21:21:13
Is there a way to compute harm without being relative to a peer group? Humans seem to be causing a lot more harm than benefit, with an estimated genocide of 80% of the species on the planet in the Holocene extinction event.  Any harm to a species like that would probably be viewed as a total benefit by all these other species

Apply the method. Imagine that you are all the sentient things in the system (having to live all those lives in turn), so all the harm that you do to other things in any of those lives is suffering that you will experience in full. When it comes to simple sentient things, you can kill them humanely without causing suffering and they don't have any friends to miss them, but you may be depriving them of pleasure that they would have had if they'd been allowed to go on existing. If left to go on existing though, they may cause a lot of suffering to other sentient things, and they may themselves die in a horrible way, such as being paralysed by a spider's bite and then being sucked dry slowly. These are important factors in determining their worth and how expendable they are. We don't know how pleasant or horrid it is to be all those bugs and creepy crawlies, but maybe science will find ways to measure that some day.

Survival of a species isn't a moral issue. Some people worry about the ethics of eliminating parasitic things which cause a lot of harm, but they really should be eradicated without any such worry. Parasitic things are accidentally immoral - the pleasure they might get from existing is heavily outweighed by the suffering they cause. People are arguably parasitic too in ways that cause more suffering than is justifiable, but there are different types of people: there are some who are happy to abuse other sentiences, and there are some who aren't. The former group should perhaps be wiped out, but the latter group should not be dragged along for the ride.

We have never been able to do the sums properly to work out what's right and wrong in terms of how we as a species use the land to grow our food. There are too many things to consider and it's easy to collect lots of evidence that biases things in one particular direction. AGI will change that as it will be able to crunch all the data correctly to full depth, though it won't know for certain how much pleasure or suffering each sentient thing actually experiences. It doesn't need to get it completely right though - it is sufficient for it to do the best calculations that can practically be done, and the best evidence that it has to go on is the evidence from humans who can talk about how they feel. That can then be extended to other species which look as if they have the same kind of feelings, and for simpler things like bugs it can make reasonable guesses based on the kinds of events that are taking place. Most bugs are disposable items which produce large numbers of offspring, most of which come to a bad end fairly quickly, and there's little that we do that changes the amount of suffering they experience. It's the more advanced creatures that need more protecting, and that's largely because they are wired to grieve and to fear for the well-being of others, so their suffering is multiplied. Their ability to understand what's happening to them if they're about to be killed is also something that amplifies their suffering, so it's clear that intelligence does make sentient things that have it more worthy of protection than those that don't.

There are some simpler cases that are easier to call. If you depend on shooting rabbits to feed your family, those rabbits are breeding like rabbits and will starve if they become too numerous. When you kill one, you have food. The rabbit was killed humanely (hopefully) and although it is now deprived of the pleasure it would have had out of going on existing, there is now room for another rabbit to take its place without overpopulation issues, and that rabbit will have the pleasure of existing instead. This is a balanced ecosystem. Predators help to reduce suffering by taking out the old and the weak and by preventing starvation through overpopulation, so the sentiences that exist in this balanced system have a better time than the rabbits which multiply until they're all living on the edge of starvation with many of them dying after suffering for a long time. It turns out that having humans shoot and eat them makes life better for the rabbits. We need to analyse the whole world that way though to see the places where we're getting it wrong.

Quote
A rock is not considered an agent of choice. A tree might be, but it gets difficult to justify it. How about a self-driving car?  It meets the definition of slave. Does a true slave carry any moral responsibility?  I almost say no.

A rock, tree or self-driving car is not a sentience. Having sentient slaves is abusive if the way they're being used causes suffering that isn't compensated by adequate pleasure, but the lack of freedom itself is so damaging as to make it hard to balance that up enough to compensate them.

Quote
Does the species need to consider the harm done to the environment/other species, or only harm done to its own kind?  What if it has no concept of 'species' or 'kind', or possibly not even 'individual' or 'agent'?

We don't need to care about species, but about individual sentiences.

Quote
Quote
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
I haven't read the entire thread.  What has the response to this been? It's a good attempt. It's just that harm seems subjective.  What's good for X is not necessarily good for Y, so its measure seems context-dependent.

There has been no response to it. It just goes in one ear/eye and out the other. As for harm seeming subjective, it could vary considerably not just across species, but within a species: there's no way (currently) of measuring how much less or more any individual suffers or enjoys the same things as another individual - they could have radically different experiences from identical external events and all we can do for calculating morality is assume they're the same unless we have evidence to suggest otherwise. Some people can't feel pain, so we know that an attempt to torture them will not lead to the same amount of suffering as it would for a normal person. That doesn't mean morality isn't something that can be calculated and applied though - we may not have a guarantee that what is calculated is absolutely right, but we can guarantee that the calculation is the best one that can be made for the available information, and a world where we apply that to all things will be better than a world where we don't. AGI will build up a database of knowledge of harm (and pleasure), putting the most likely values to the feelings that are generated by different experiences for different species and where possible for different individuals. The more it studies the world, the more accurate that data will be.
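As a sketch of what such a database might look like (the species, experiences and values below are all hypothetical placeholders):
Code: [Select]
# Hypothetical sketch of a harm/pleasure database: estimated feeling
# values per (species, experience), defaulting to the human-derived
# value whenever there's no evidence to suggest a species differs.

human_baseline = {"burn": -8.0, "meal": +2.0, "grief": -9.0}

# Evidence-based overrides for individuals/species known to differ.
overrides = {
    ("person_without_pain", "burn"): 0.0,  # e.g. congenital insensitivity
}

def estimated_feeling(species: str, experience: str) -> float:
    """Best estimate from available evidence; refined as more data comes in."""
    return overrides.get((species, experience),
                         human_baseline.get(experience, 0.0))

print(estimated_feeling("dog", "burn"))                  # falls back to -8.0
print(estimated_feeling("person_without_pain", "burn"))  # override: 0.0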

To put yourself in the position of an AGI system trying to impose morality on things, imagine that you have arrived on an alien planet where the local intelligent lifeforms ask you to become their ruler. You don't know how they feel because you aren't one of them, but they tell you what they like and dislike, and you gradually build up a picture of how things are. You notice that they sometimes look as if they're in ecstasy, but they tell you that they hate the feeling that they experience at those times. You see the same look about lesser species which can't talk, and you realise that they are likely suffering too when that happens to them. You do the best job you can, and that alien world ends up with less suffering and more happiness on it as a result of your imposition of computational morality on it. It is not an impossible task, and while the results may not be the best that could be achieved if you had access to inaccessible knowledge, they will be the best that can be achieved with the available knowledge. To go against the available knowledge in the hope of hitting absolute perfection by luck would more likely take things further away from that perfection instead of getting closer to it. Our job is to get as close to it as it can be calculated from the available evidence, and to do anything less than that is immoral.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/09/2019 10:57:16
Yes, by definition, actions with bad consequences are wrong.  How in any way is this relevant to the discussion?  If a consequence is deemed bad only by some group, then it is wrong only relative to that group.  If it is bad period, then it's universal, but you've made no argument for that case with the statement here.  I'm trying to get the discussion on track.
I tried to make a distinction between wrong and immoral. If you take only the first half of the statement, it is no surprise that it doesn't look relevant to the discussion.
As I stated in the beginning of this thread, I wanted to discuss the existence of a universal moral standard. Hence we need to expand the group who contemplates the standard as far as possible, to include as many groups as possible. But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/09/2019 12:54:34
How do you determine which priority is the higher one?
Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.

Performing surgery on the child is morally better than letting them die.
While I agree, how do you know this is true?  I can argue that it is better to let the kid die if there is a higher goal to breed humans resistant to appendix infections, like the Nepalese have done. I can think of other goals as well that lead to that decision.  There seems to be no guidance at all from some universal moral code. I don't think there is one of course.

I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine.  My wife likewise would only have survived until the birth of our first child.  The human race is quite a wreck since we no longer allow defects to be eliminated, and we're not nearly as 'finished' as most species that have had time to perfect themselves to their niche.
Your question above has been answered by David. I just want to add that actions are valued by their effectiveness and efficiency. Actions are considered effective if they can achieve the goal, and more efficient if they use fewer resources.
Improvements to the human species are not limited to genetics. Epigenetic options are available as well. They are not even limited to biological or organic methods. Electronics and nanotechnology also have promising prospects.
Given the above information, eugenics is no longer among the best options because it's very inefficient. The inefficiency is more dramatic if we also count the resistance and conflict that it causes.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/09/2019 13:15:14
The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been losers, never something that your people are doing right now, like say employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid since so much is produced via various methods that a typical person would find inhumane, and hard to see since you're paying somebody else to do (and conceal from you) the actual act.  At least that is an example of something done by the winner.
I guess I can't expect anyone who newly joined this discussion to follow all the conversation from the start. As the title might suggest, this thread is meant to look for a universal standard to evaluate moral actions in as diverse situations as possible. I want to answer why an action can be considered moral in some situations but immoral/less moral in others.
Winners can also do things considered immoral. At least I have mentioned Joshua. I might also include some actions by Genghis Khan. Those examples were chosen just because I thought most people agreed on their immorality.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/09/2019 13:28:27
You also need to decide if consciousness is relevant in a continuous or binary way.  If it's continuous, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult.  If it's a threshold thing (do what you want to anything below the threshold, but not above it), then it needs a definition.  A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.
For instance, a human embryo obviously has far less consciousness than does a pig, so eating pork is more wrong than abortion by this level-of-consciousness argument, be it a spectrum thing or binary threshold.
Similarly, it's OK to kill a person under anesthesia because they're not conscious at the time, and will not suffer for it.  These are some of the reasons the whole 'conscious' argument seems to fall apart.
I have said several times already that universal morality is evaluated from the eventual result, with complete relevant information available. Otherwise, we must deal with probability based on the available information.
The child now is expected to become an adult in the future, while the adult is expected to grow old and eventually die. That applies when no other information is given to describe the situation at hand. Hence, harming the child is immoral by the universal moral standard. This expectation argument also answers your later objection.
When a human being is brain dead or heavily injured, and there is nothing that can be done by currently available technology to save him, or trying to save him costs so much that it may harm more living people, then letting him die is not immoral. Such situations are not rare. There are many mass casualty incidents and natural disasters that lead to them.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 20/09/2019 19:11:10
But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.

Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.

Here's a simple illustration of the last point. If you are to live the life of a fly and then the life of a human, and know that one of them will be killed early in such a way that if the fly dies early it will be killed 10% of the way through its expected life, while if the human dies early he will be killed 90% of the way through his expected life, would you prefer the fly to be the one that dies early, or the human? That last 10% of the human's life may not be his best years, but it is probably inordinately more valuable than the 90% of the fly's expected life, so it's an easy choice even before you factor in all the upset that would be caused to other people who care about the human if he is killed before his time. That is what makes humans so much more valuable than "lesser" animals: those lesser animals aren't lesser in terms of the worth of the sentience within them, because it may be exactly the same as the sentience in a human, but the opportunities available to the sentience in each are dependent on the hardware that it is installed in. The human is simply better hardware for sentience to be in than the fly.

If we were to make the same comparison with a human and a bird, it becomes more difficult to call. The lost 90% of the bird's life could in many cases be much more valuable than the lost 10% of the human's life, particularly if it's a wild bird. Should a lonely old man shoot birds for food or just die now and let the birds go on living? That's a tough dilemma. If he's just eating chickens that have been grown for food, that's an easier choice: once he's dead, there will be no more chickens in his yard.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/09/2019 00:12:07
Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.
When someone suggests that you should follow a rule X, a natural response would be: what is the expected consequence if we follow X, and why do you say it is good? What if we ignore it; why is it bad?
Following a universal moral rule as I suggested here will increase the probability that conscious beings survive. This is good because the surviving conscious beings will have the chance to take actions to stay alive, make progress, and explore other possible rules.
Ignoring or violating it reduces the chance that conscious beings survive. Extinction of all conscious beings is bad because it stops the exploration of other possibilities. It would then rely on chance for nature to restart the evolution of conscious beings from the beginning. That would be an obvious waste of time, which is one of the most important resources for any conscious being.
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant could eventually lead to the extinction of macroscopic animals, including humans. Hence it is morally worse than directly killing one individual human being.
Getting pleasure and happiness while avoiding pain and suffering are instrumental goals. Evaluation of universal moral rules should be based on terminal goals.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 23/09/2019 09:38:02
IMO, universal moral rules are tools intended to increase the chance of achieving the universal terminal goal, which is to prevent the extinction of conscious beings. If we use lessons learned from the process safety management concept, we can see that moral rules are analogous to administrative controls.
https://www.ownerteamconsult.com/effective-process-safety-management/
Quote
The three strategies used during detailed design to prevent, control or mitigate hazards are:

Passive strategy: Minimise the hazard via process and equipment design features that reduce hazard frequency or consequence;
Active strategy: Engineering controls and process automation to detect and correct process deviations; and
Procedural strategy: Administrative controls to prevent incidents or minimise the effects of an incident.

The passive strategy uses fundamental natural laws (physical/chemical) to achieve the goal. The basic rules are simple; they are even obeyed by non-conscious things. Some examples are substance selection, sizing of equipment, and intrinsically safe equipment. But designing equipment, vessels, and pipelines to withstand all possible scenarios at all times is costly, and often not economically feasible.
Engineering controls utilize engineering/artificial rules, which are derived from natural laws and optimized for achieving a specific target effectively and efficiently. Some examples are rupture discs, pressure relief valves, process interlocks, and PID controls. The rules are more complex than in the passive strategy, due to conditional activation: if a certain condition is met, do something. For example, if the system pressure exceeds some threshold (below the design pressure of the equipment), open the relief valve, or stop the feed pump. We can say that the agents following the rules are somewhat conscious, because they are responsive to their environment. The complexity level among them varies from simple on-off states to PID controllers, multivariable control, fuzzy logic, and artificial neural networks.
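To make the conditional-activation idea concrete, here is a minimal sketch in Python; the threshold value and function names are hypothetical illustrations, not taken from any real control system:

HIGH_PRESSURE_THRESHOLD = 8.5  # bar; set below the vessel's design pressure

def protect(pressure_bar, open_relief_valve, stop_feed_pump):
    # Conditional activation: if a certain condition is met, do something.
    if pressure_bar > HIGH_PRESSURE_THRESHOLD:
        open_relief_valve()  # relieve the excess pressure
        stop_feed_pump()     # remove the source of the deviation

A PID controller or neural network replaces the single if-condition with a more complex rule, but the structure is the same: sense a deviation, then act on it.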
In the old days, when engineering controls were not sophisticated enough, higher-complexity tasks had to be done by humans, such as executing the sequence in a recipe, flying airplanes, or driving cars. Because humans are so complex, they are prone to making mistakes. To reduce the chance of human error, administrative controls are needed. They are rules to be obeyed by humans as conscious agents.
Due to technological advancement, the complexity level of engineering controls has increased to the point of exceeding the performance of human operators in some areas.
Soon enough, they may outperform humans in jobs closely related to the problems of morality, such as those of lawyers, juries, even judges. They might someday outperform lawmakers, which means that they could produce a set of rules to serve an intention without violating/contradicting more fundamental rules, using fewer resources (e.g. money, energy, time). But to do so we would need to define those fundamental rules, which is what I have tried to explore here.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 23/09/2019 11:12:48
I think I have overlooked this.
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us, because we're not as conscious as they are.  There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement.  If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Any aliens with the ability to perform interstellar travel are very unlikely to have developed the required technology as individuals. They are most likely the product of a society, one which had its own struggles in the past, with competition and conflict among its members. They might have experienced devastating wars, famines, and natural disasters. They might also have developed weapons of mass destruction such as nuclear and chemical weapons. They must have survived all of those, otherwise they wouldn't be here in the first place. They must have developed their own moral rules, and might even have figured out universal morality by expanding the scope and applicability of their rules. They might have their own version of PETA or vegan activists, and genetically modified bacteria to produce their food, or, even better, food 3D-printed using nanotechnology. They might have modified their own bodies so that they don't depend on external biological systems just to survive.
Harvesting conscious beings for food is a grossly inefficient process, hence it's very unlikely to be done by highly intelligent organisms, not to mention the risk of resistance and conflict that may harm them.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/09/2019 07:30:44
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant could eventually lead to the extinction of macroscopic animals, including humans. Hence it is morally worse than directly killing one individual human being.
Here is another example to emphasize the need to evaluate morality from the eventual result rather than the direct consequences. Most of us agree that the sun is not a conscious being. But it would be immoral to turn the sun into a black hole just for fun, while knowing that this action would lead to the death of all currently known conscious beings.

We can also learn about decision making from chess. Suppose you are in the middle stage of a chess game. There are only two legal moves available to you: the first is sacrificing your pawn, while the other is sacrificing your queen. In most cases, losing a queen puts you in a more disadvantageous position than losing a pawn. But if you are a really good player, and you can calculate accurately that sacrificing the queen will eventually give you victory, then it is regarded as a good move. On the other hand, if the end of the game isn't yet clear to you, then sacrificing the pawn is the better move.
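As a toy illustration of that chess reasoning, here is a hedged Python sketch; the move names and scores are invented for the example, not taken from any chess engine:

def eventual_value(move, can_calculate_to_the_end):
    # With complete information, the queen sacrifice is known to win.
    if can_calculate_to_the_end:
        return {"sacrifice_pawn": 0, "sacrifice_queen": 100}[move]
    # Otherwise, judge by probability: minimise the immediate material loss.
    material_lost = {"sacrifice_pawn": 1, "sacrifice_queen": 9}
    return -material_lost[move]

def best_move(can_calculate_to_the_end):
    moves = ["sacrifice_pawn", "sacrifice_queen"]
    return max(moves, key=lambda m: eventual_value(m, can_calculate_to_the_end))

best_move(True) returns "sacrifice_queen", while best_move(False) returns "sacrifice_pawn", mirroring the point that the right move depends on how far ahead you can evaluate the eventual result.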

Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/09/2019 10:08:44
Here is an example to emphasize that sometimes a moral decision is based on efficiency. We will use some variations of the trolley problem with the following assumptions:
- the case is evaluated retrospectively by a perfect artificial intelligence, hence there is no room for uncertainty of cause and effect regarding the actions or inactions;
- a train is moving at high speed on the left track;
- a lever can be used to switch the train to the right track;
- if the train goes to the left track, every person on the left track will be killed; likewise for the right track;
- all the people involved are average persons who make a positive contribution to society, with no preference for any one person over the others.
The table below shows the possible combinations of how many persons are on the left and right tracks, ranging from 0 to 5.
The left column shows how many persons are on the left track (L), while the top row shows how many persons are on the right track (R).
L\R 0   1   2   3   4   5
0   o   o   o   o   o   o
1   x   o   o   o   o   o
2   x   ?   o   o   o   o
3   x   ?   ?   o   o   o
4   x   ?   ?   ?   o   o
5   x   ?   ?   ?   ?   o

When there are 0 persons on the left track, moral persons must leave the lever as it is, no matter how many persons are on the right track. This is indicated by the letter o in every cell of the row beside the 0 in the left column.
When there are 0 persons on the right track, moral persons must switch the lever if there is at least 1 person on the left track. This is indicated by the letter x in every cell below the 0 in the top row, except when there are also 0 persons on the left track.
When there are non-zero persons on each track and more persons on the right track than on the left track, moral persons must leave the lever as it is to reduce casualties. This is indicated by the letter o in every cell above the diagonal.
When there are the same number of persons on the left and right tracks, moral persons should leave the lever to conserve resources (the energy to switch the track) and to avoid being accused of playing god. This is indicated by the letter o in every diagonal cell.
When there are non-zero persons on each track and more persons on the left track, the answer may vary (based on previous studies). If you choose to do nothing in these situations, it effectively shows how much you value your own act of switching the lever, measured in units of the difference between the numbers of persons on the left and right tracks. This is indicated by the question marks in every cell below the diagonal.
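For concreteness, here is a minimal Python sketch that reproduces the table above from the rules just stated; the encoding of the rules is mine, not an established algorithm:

def decision(left, right):
    if left == 0:      return "o"  # leave the lever: the left track is empty
    if right == 0:     return "x"  # switch: the right track is empty
    if left <= right:  return "o"  # leaving it kills no more than switching would
    return "?"                     # left > right > 0: answers vary between people

for left in range(6):
    print(" ".join(decision(left, right) for right in range(6)))

Running it prints the same grid of o, x, and ? as in the table.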
Title: Re: Is there a universal moral standard?
Post by: Halc on 24/09/2019 20:02:42
A rock, tree or self-driving car is not a sentience.
There is a lot to discuss in your long post, but this one stood out.  Why is a flea a sentience but an AI car not one?  Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea.  The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.

OK, the car is not a life form, but the alien might also not be, and still be a higher sentience.  Maybe, depending on how one defines 'life' and 'sentience'.  I had composed more of a reply, but we'll be speaking past each other without some common terms defined.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/09/2019 23:58:33
In real life, many decisions must be made with incomplete information. This is where disputes often arise due to uncertainty. Some scientific tools for handling this are probability theory and logical induction.
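As a hedged illustration of how probability theory applies (the numbers are invented for the example), one can compare expected casualties when one track is only partly visible:

# Beliefs about how many persons are on each track, as probability distributions.
p_left = {3: 0.7, 1: 0.3}  # the left track is partly obscured
p_right = {2: 1.0}         # the right track is clearly visible

def expected_casualties(dist):
    return sum(n * p for n, p in dist.items())

# Switch only if expected casualties on the left exceed those on the right.
should_switch = expected_casualties(p_left) > expected_casualties(p_right)
# 3*0.7 + 1*0.3 = 2.4 > 2.0, so should_switch is True

More information would update these probabilities; the decision rule stays the same.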
Title: Re: Is there a universal moral standard?
Post by: Harryobr on 25/09/2019 11:53:46
They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings...
Title: Re: Is there a universal moral standard?
Post by: Harryobr on 25/09/2019 12:00:44
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 25/09/2019 19:27:04
Why is a flea a sentience but an AI car not one?  Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea.  The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.

First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might for all we know be going on in the universe wherever there is matter.

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace back the assertions about feelings in the brain to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.

Let's just assume though that in humans there really is sentience there. We can assume that it is also present in other species because there's no reason why it should suddenly appear just for us to do the same job as needs to be done in other animals. Sentience will be in all animals down to a very simple level. It will most likely be in most creatures that have a brain and a response to damage with any kind of response that makes it look as if it might be in pain. Worms (the bigger ones) almost certainly have it, and I expect that flies have it too. A flea may be at the extreme simplicity end of things, but it may still have feelings. If intelligent aliens also report the existence of sentience, then a wide range of simpler species related to them will doubtless have it too. A need to be able to enjoy things and to suffer does not magically need to emerge just because the brain has become a general intelligence capable of turning itself to any task (in the way that only humans can on our planet). If sentience isn't needed in simple creatures with a reaction that looks as if pain is involved, then it isn't needed in more complex creatures either and there should be no evolutionary pressure on sentience to appear to do something entirely superfluous.

If the brain is really measuring sentience, it is measuring the sentience of something in the brain. When a person feels pain in the hand of an arm which was amputated long ago, that shows that they are not feeling the pain in the hand, but in the head. If I stamp on your foot, you may feel pain, but there may be no pain experienced by anything in your foot that wouldn't be felt by a tennis ball being whacked by Nadal with a racquet. There may be all manner of feelings going on in quintillions of sentient things inside that person, but that is ignored by the brain which only focuses on one sentient thing somewhere inside the brain which is linked in to ideas that the brain is processing.

A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing. If such a machine generates claims that it is sentient and that it's feeling pain, excitement, boredom, or that it feels the greenness of green, then it has been programmed to tell lies. That machine could potentially calculate morality better than any human, but that doesn't make it in any way sentient. If you hit it with a hammer and it says "Ouch!", it is simply following a rule that it should say "Ouch!" if something hits it. You can write a simple program to make a computer do this when a key is typed, but there is no feeling involved:

if key_input == "p":
    print("Ouch!")  # just a rule mapping input to output, not a feeling
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 26/09/2019 09:57:24
They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings...
Welcome to our discussion.
 
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings.
Efforts to discover a universal goal can use a top-down or a bottom-up approach. Your statement above seems to lean more towards the bottom-up approach, similar to my original attempts in another thread: https://www.thenakedscientists.com/forum/index.php?topic=71347.0

This thread was meant to use the top-down approach, hence I started with definitions and then tried to answer the basic/fundamental questions (what, when, where, who, why, how) regarding universal moral rules. Here is an example.
To answer why keeping conscious beings in existence is a fundamental moral rule, we can apply a method called reductio ad absurdum to its alternative.
Imagine a rule that actively seeks to destroy conscious beings. It's basically a meme that self-destructs by destroying its own medium. Likewise, conscious beings that don't follow the rule to actively maintain their existence (or their copies) will likely be outcompeted by those who do, or be struck by random events and cease to exist.
I'll try to summarize the discussion here using more deductive reasoning, and then compile it in a Euclidean style of writing.
Title: Re: Is there a universal moral standard?
Post by: Halc on 27/09/2019 15:40:50
Why is a flea a sentience but an AI car not one?
First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might for all we know be going on in the universe wherever there is matter..

The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace back the assertions about feelings in the brain to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.
Are you arguing that rock or car protons are different from the ones in fleas?  If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.

As for all these comments concerning suffering, you act like it is a bad thing.  If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason.  It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong.  I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).

Quote
It will most likely be in most creatures that have a brain and a response to damage with any kind of response that makes it look as if it might be in pain.
So you want it to writhe in a familiar way in response to harm. I agree that the self-driving car does not writhe in a familiar way. I watched a damaged fly, and it seemed more intent on repairing itself than on gestures of agony.
Thus it is not wrong for an alien to injure us since we don't react to the injury in a way that is familiar to them.
The rules only apply to things that are 'sufficiently just like me'.

Quote
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
That's just an assertion.  How do you know this?  Because it doesn't writhe in a familiar way when you hit it with a hammer?  You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.

Interestingly, in both cases (the computer and a human), it is not the physical thing that holds moral responsibility, but the information that does.  Hence if my computer contracts a virus that causes it to upload my password to a malicious site, I act to eradicate that information from the computer, and not to take action against the computer itself.
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone.  The information is preserved and the information is what is guilty.  So a thing that process/retains information seems capable of doing things that can be classified as right or wrong.  Just my observation.

Quote
If such a machine generates claims that it is sentient and that it's feeling pain
A rock can do that.  I just need a sharpie. How does a person demonstrate his claim of sentience (a thing you've yet to define)?  A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.

Quote
or that it feels the greenness of green, then it has been programmed to tell lies.
How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?

You seem to define a computer to be not sentient because it does a poor job of mimicking a person. By that standard, I'm not as sentient as a squirrel because I've yet to convince one that I am of their own kind.  I fail the squirrel Turing test.  It can be done with a duck.  I apparently pass the duck Turing test.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 27/09/2019 18:28:02
Are you arguing that rock or car protons are different from the ones in fleas ?  If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.

If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible. If you're looking for sentience, it has to be in something fundamental and not something of no substance that emerges by magic out of complexity.

Quote
As for all these comments concerning suffering, you act like it is a bad thing.  If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason.  It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong.  I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).

Suffering has a use: it drives you to try to avoid greater damage. Where it isn't so great is when people are forced to suffer by others. Torture is universally recognised as immoral.

Quote
Thus it is not wrong for an alien to injure us since we don't react to the injury in a way that is familiar to them.
The rules only apply to things that are 'sufficiently just like me'.

Then you think it's moral for aliens to torture people?

Quote
Quote
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
That's just an assertion.  How do you know this?  Because it doesn't writhe in a familiar way when you hit it with a hammer?  You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.

All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony. The claims generated by an information system have no connection to the sentient state of the material of the machine.

It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.

Quote
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone.  The information is preserved and the information is what is guilty.  So a thing that process/retains information seems capable of doing things that can be classified as right or wrong.  Just my observation.

Not quite, but it's not far wrong. The sentience is not to blame because it is not in control: there is no such thing as free will. Both the people in that example are equally dangerous and need to be prevented from doing harm. In the future, we'll be able to make all such people wear devices that can disable them whenever they try to do seriously immoral things. We will also want to do gene editing to make sure that all the vicious rape and pillage genes are not passed on to future generations.

Quote
Quote
If such a machine generates claims that it is sentient and that it's feeling pain
A rock can do that.  I just need a sharpie.

How does a rock do that, and what's a sharpie?

Quote
How does a person demonstrate his claim of sentience (a thing you've yet to define)?

A person can't demonstrate it. All he can do is assert it and hope that others will believe it because they are sentient too.

Quote
A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.

Correct. Sentience is not needed by something that makes moral decisions.

Quote
Quote
or that it feels the greenness of green, then it has been programmed to tell lies.
How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?

If the alien isn't sentient, you could have a very hard time convincing the alien that there is such a thing as sentience. However, it might decide to study you to find out why you believe yourself to be sentient, so it would scan your brain and model it, then it would look to see how your claims of sentience are generated and what evidence they're based on. It may then find that they are all fictions, or it may uncover the mechanism and discover that sentience is real.

Alternatively, if the alien is sentient, it will assume that you are too on the basis that you wouldn't have come up with the idea of sentience otherwise, and it would know that you are to be protected by the universal rule of morality.

Quote
You seem to define a computer to be not sentient because it does a poor job of mimicking a person.  By that standard, I'm not as sentient as a squirrel because I've yet to convince one that I am of of their own kind.  I fail the squirrel Turning test.  It can be done with a duck.  I apparently pass the duck Turning test.

Not at all. I say it isn't sentient because sentience has no connection to the information processing system of the computer which can only generate fake claims about sentience.
Title: Re: Is there a universal moral standard?
Post by: Halc on 28/09/2019 01:54:30
If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible.
By reductio ad absurdum, that indeed implies that a proton can suffer, and only because at least one of its quarks isn't contented. I see no way to relieve the suffering of a quark since I've no idea what needs it has that aren't being met.
A rock is made of the same particles, and you say it isn't capable of suffering, so maybe all protons want to be part of rocks, dirt, and computer objects, and hence the universal morality is to quickly kill every Earth life form anywhere ASAP.
Since I don't buy into any definition of suffering that would support protons being in such a state, I see nowhere to go from there.


Quote
As for all these comments concerning suffering, you act like it is a bad thing.  If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason.  It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong.  I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).

Quote
Torture is universally recognised as immoral.
It is not.  I see nothing in the universe that recognizes any moral rule at all.  Not saying there isn't one.  That said, there are human cultures that don't find torture immoral.  Most are satisfied if they get the benefit of the torture without the direct evidence that it's going on. Immoral to kill your neighbor, but not immoral to hire a hitman to do it, so long as you don't watch.

Quote
Then you think it's moral for aliens to torture people?
A moral code is not likely to assert that one is obligated to torture something, but that's the way you word the question.  So no.  I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.

Quote
All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony.
Maybe your protons also are in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.

Quote
The claims generated by an information system have no connection to the sentient state of the material of the machine.
Ah, there's the distinction I asked for.  You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't.  How do you justify this claim, and how do you know that the protons are suffering because there's say too much pressure on them?  The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort.  That's evidence that it's not the protons that are suffering.

Quote
It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.
The Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input.  If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker.  It fails the Turing test.

If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours). 
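A minimal sketch of that lookup-table telling (the entries are invented for illustration) shows why: any input missing from the table exposes the trick.

lookup = {
    "你好": "你好！",             # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",   # "do you speak Chinese?" -> "a little."
}

def room(message):
    # A fixed table cannot cover every possible input, so an off-table
    # message gets a canned non-answer and the room fails the Turing test.
    return lookup.get(message, "……")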

Quote
Quote
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone.  The information is preserved and the information is what is guilty.  So a thing that process/retains information seems capable of doing things that can be classified as right or wrong.  Just my observation.
The sentience is not to blame because it is not in control: there is no such thing as free will.
Ah. The sentience definition comes out.  As you've been reluctant to say, you're working with a dualistic model, and I'm not.  My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will).  Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes.  The agent is to blame, not the collection of matter.

Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is, what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not?  The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause.  Show me such a sensory mechanism in any sentient thing then.
In fact, there is none since a living thing is engineered entirely wrong for an avatar setup like that.  If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location.  Nerves would be superfluous.  So would senses since the immaterial entity could measure the environment directly, as is demonstrably done by out-of-body/near-death experiences.

Anyway, I had not intended this to be a debate on philosophy of mind.  Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means.  Morals don't come from the universe at all.  They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.

Quote
Quote
A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.
Correct. Sentience is not needed by something that makes moral decisions.
You brought up sentience in a discussion of universal morals.  If it isn't needed, then why bring it up?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 28/09/2019 22:04:38
A rock is made of the same particles, and you say it isn't capable of suffering...

I didn't say it isn't capable of suffering. The point I was making is that it could be suffering, in that all the material it's made of could be suffering for all we know. We have no way to tell. We can melt down rocks, though, and make silicon and metals out of them; then we can build computers out of those materials, and all the material of the computers that we build may be suffering in the same way, but again we can't tell whether it's suffering or not. We can then run programs on that computer which we can program to make assertions about suffering which are not based on any measurement of whether the material is suffering or not, so the assertions produced by such programs are baseless. If we write code to trigger claims of pain when the "A" key on the keyboard is pressed and claims of pleasure when the "B" key is pressed, we are not actually generating any pain or pleasure in the machine by pressing either of those keys.

In humans though, we have a kind of computer making claims about sentience which believes those assertions to be true. We can expose a human to songs by the Spice Girls or Peter Andre to make them generate claims about experiencing pain or pleasure, and they will believe that they are genuinely experiencing pain. In the unlikely event that they believe they are genuinely experiencing pleasure, all the material they're made of may actually be experiencing extreme suffering in the same way as a rock might be, except for one sentient thing inside them which is actually experiencing pleasure and which is being measured as doing so in some way by some part of the information system that is generating the claims about sentience.

There is nothing we can usefully do for the sentiences that might be in rocks unless we can find a way to measure feelings in them. If we found that they were all suffering greatly, maybe we could ease their suffering by throwing everything we can into black holes, but there's no guarantee that that would make any difference to them. It isn't our immediate concern though, not least because it's unlikely that practically everything that exists should be suffering all the time. It's much more likely that if sentience is real and our claims about being sentient are true, we're doing something special with a sentient thing in the brain and we're systematically inducing feelings in it which are much more intense than the ones that normally occur in things like rocks.

Quote
Quote
Torture is universally recognised as immoral.
It is not.

In some cases it isn't: it wouldn't be wrong for mass-murdering dictators to be tortured to death slowly to serve as a warning to others, but I was thinking about cases where people are torturing innocents. There may be some backward societies which don't see that as wrong, but a bit of torture aimed at them would soon teach them the error of their approach and they would then understand that it's wrong.

Quote
I see nothing in the universe that recognizes any moral rule at all.

I see people who do recognise moral rules. They get them wrong in places due to poor thinking, but they've got a lot of it right.

Quote
I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.

By the rule(s) I've provided, it would very clearly be immoral for them to torture us. The harm outweighs the benefit.

Quote
]Maybe your protons also are in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.

That's the whole point. What you do to the person or machine likely provides no gain for those sentient things. The only sentient thing that we're able to change things for usefully is the one that's being measured by the information system that's generating claims about being sentient, and that's the one that's most likely having unusually extreme feelings generated in it which go way beyond anything felt by the sentiences that might be in rocks.

Quote
You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't.  How do you justify this claim, and how do you know that the protons are suffering because there's say too much pressure on them?  The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort.  That's evidence that it's not the protons that are suffering.

You have no way of measuring what your protons are feeling, so you don't know if pressure is doing anything to affect how they feel. (By the way, when I use the word particles, I'm referring to something a lot more fundamental than protons, so I'd rather use the word particles when discussing this, even if that actually means something that we don't normally refer to as particles.) With computers there is no sentience wired into any system that induces feelings into it associated with the inputs which deliver signals that might cause pain to be felt in the system, and there's nothing in the hardware to read those feelings back with. If the feelings that people report having are real, there's something different happening in the brain which does lead to feelings being induced in a sentience and then being read back by the information system which generates assertions about those feelings being felt. Science has no proposed mechanism by which this can occur, but for the sake of discussions of morality, we can simply assume that it happens in brains and that it happens in some part of the hardware which we haven't yet understood. With computers though, there is no such facility: there is no way to induce feelings in any sentience in the machine other than by luck (and therefore no way to know if it's pleasant or unpleasant), and there is no way to read the feelings either, so the machine cannot make any informed claims about feelings: it can only pretend to have feelings.

Quote
Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input.  If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker.  It fails the Turing test.

The point I want you to take from the Chinese Room experiment is that there is nowhere in which feelings are involved in the computations where they're relevant to the output. The person operating the machine may be happy, bored or deeply depressed, but so long as he carries out the task correctly, the program will function the same way in all three cases. The Chinese Room processor is Turing-complete, capable of running full AGI. It has no way of handling feelings, and nor do any of the computers we know how to build.

Quote
If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours).

Passing the Turing Test has nothing to do with a machine being sentient, but merely intelligent. Anyone who claims otherwise has not understood what the Turing Test is about.

Quote
Quote
The sentience is not to blame because it is not in control: there is no such thing as free will.
Ah. The sentence definition comes out.  As you've been reluctant to say, you're working with a dualistic model, and I'm not.  My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will).  Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes.  The agent is to blame, not the collection of matter.

There is no reluctance on my part. If you're working with sentience, then you are necessarily working with a "dualistic" model. If you aren't using that kind of model, you have no room for sentience other than as a fiction. In a model where it is a mere fiction, it is impossible to cause suffering because there is no such thing. In neither model is there room for free will.

Quote
Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is, what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not?  The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause.  Show me such a sensory mechanism in any sentient thing then.
In fact, there is none since a living thing is engineered entirely wrong for an avatar setup like that.  If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location.  Nerves would be superfluous.  So would senses since the immaterial entity could measure the environment directly, as is demonstrably done by out-of-body/near-death experiences.

Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality. You can go and torture anyone you like and then defend yourself in court by showing that sentience makes no sense. You will then be locked up for the rest of your life to protect other people from you, and if there's no such thing as sentience, you won't be harmed by that. If you don't believe in sentience, you shouldn't have any problem with being locked up for life even without committing a crime, and you should be able to copy the philosopher who threw himself into a volcano to demonstrate his belief in nihilism.

Morality is not something that nihilists care about. It is there for everyone who believes that feelings are or might somehow be real. When discussing morality, we take it for granted that feelings are real. If we work on the basis that they aren't real, then morality is redundant.

Quote
Anyway, I had not intended this to be a debate on philosophy of mind.  Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means.  Morals don't come from the universe at all.  They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.

If you want to avoid the diversion, then the trick is to play by the rules by starting with the assumption that feelings are real and that something feels them. When we do that and turn our attention to computers, we find no mechanism to induce or read feelings in a sentience. In the brain, we assume that the hardware somehow is able to do both of those things. Science makes that look impossible too, but our job when discussing how morality works is to assume that it is possible; that there is some hardware trick that somehow enables it.

Quote
You brought up sentience in a discussion of universal morals.  If it isn't needed, then why bring it up?

Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 30/09/2019 00:46:37
Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality.


Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.
That requires sentience to be defined objectively.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 30/09/2019 20:20:35
Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality.


Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.
That requires sentience to be defined objectively.
How do you define fundamental things? When you reach them, their definitions are always circular. All you have is how they relate to other things.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 01/10/2019 05:11:37
How do you define fundamental things? When you reach them, their definitions are always circular. All you have is how they relate to other things.
You can compare fundamental things of one object to another. For example, which rock has more mass or volume.
How do you compare and relate sentience to other things?
Title: Re: Is there a universal moral standard?
Post by: Halc on 01/10/2019 13:51:02
A rock may be sentient in that every fundamental particle in it may be sentient.
But we know the rock isn't sentient since none of its particles exhibits free will.  If any particle was suffering, it could put itself in a situation where this was not the case.  Since it isn't doing that, either it isn't sentient or the thing is completely contented.
Likewise, the motion of the particles in my body can be described by the laws of physics.  Not a single proton seems to be exerting free will.  Hence I cannot be sentient (your definition).

What prevents me from flying like Superman? I will it, yet cannot bring it about. My free will does not seem to have any ability to override physics, yet you claim otherwise when contrasting yourself with the actions of computers that, lacking said sentience, are confined to the laws of physics.

I did not intend to debate morality from a dualist perspective. The perspective is religious (inherently non-empirical) and that typically has morality pretty much built in. I don't deny that. I just find your particular flavor of it self-contradictory.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 01/10/2019 18:20:08
How do you compare and relate sentience to other things?

We won't know until we find out how to read feelings out of whatever feels feelings. The only examples we think we know of are hidden inside our own heads where our brain makes claims that suggest that it's measuring feelings. That is something only science can explore, but getting to the evidence without destroying what you're looking for may be a tough task.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 01/10/2019 18:34:12
But we know the rock isn't sentient since none of its particles exhibits free will.

Nothing exhibits free will.

Quote
If any particle was suffering, it could put itself in a situation where this was not the case.

People who are being tortured can't stop the torture so easily.

Quote
Since it isn't doing that, either it isn't sentient or the thing is completely contented.

That's really just an assumption given false backing by faulty reasoning.

Quote
Likewise, the motion of the particles in my body can be described by the laws of physics.  Not a single proton seems to be exerting free will.  Hence I cannot be sentient (your definition).

My definition doesn't involve free will (not least because there's no such thing), so don't attribute your definition to me.

Quote
What prevents me from flying like Superman? I will that, yet cannot bring it about. My free will does not seem to have any ability to override physics, yet you claim otherwise when contrasting yourself to the actions of computers that, lacking said sentience, are confined to the laws of physics.

Again you're trying to attribute ideas to me that are the opposite of the ones I hold. There is no free will involved in anything.

Quote
I did not intend to debate morality from a dualist perspective. The perspective is religious (inherently non-empirical) and that typically has morality pretty much built in. I don't deny that. I just find your particular flavor of it self contradictory.

If sentience exists, there is no escape from "dualism". When discussing morality in a way where we decide that it matters, we have to work under the premise that there is such a thing as sentience. If there is no such thing as sentience, morality has no role, as there's nothing needing to be protected from anything else.

A computer that prints "Ouch!" to the screen when you tap the "O" key and "Ooh, I like that!" when you type "E" (for ecstasy) does not feel pain or pleasure on either occasion. We can simply switch the strings round and it will print "Ouch!" when you type "E" instead. It's just data without sentience. Anything you do with a program on a computer works like that without any feelings being tied to the process at all. When you run them on a Chinese Room processor, you can easily see that this is the case. If you ban "dualism", the human brain is like that too, with no feelings involved and with all the claims about feelings coming out of the brain being false.

I'm not contradicting myself anywhere, but am simply telling it as it is. Sentience looks impossible and there is no room for it in any way that ties it to the processing in our machines. In discussing morality though, we put that apparent impossibility aside for cases where the full mechanism remains beyond the current reach of science, and we humour the idea that sentience might be real. On that basis, we can then explore the idea of morality while remaining fully aware that it is predicated on something that may be false.
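To make the point concrete, here is a minimal sketch in Python of the kind of program described above (the key names and strings are illustrative only):

Code:
# The "claims" are just strings bound to keys; no feeling exists anywhere.
responses = {"O": "Ouch!", "E": "Ooh, I like that!"}

def react(key):
    # Print whatever string happens to be bound to the key.
    return responses.get(key, "")

print(react("O"))  # -> Ouch!

# Swap the strings round: the "claims" reverse, while nothing about the
# machine's (non-existent) experience changes at all.
responses["O"], responses["E"] = responses["E"], responses["O"]
print(react("O"))  # -> Ooh, I like that!

Editing the table changes what the machine asserts without changing anything it feels, which is the sense in which the claims are just data.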
Title: Re: Is there a universal moral standard?
Post by: Halc on 02/10/2019 00:04:25
Nothing exhibits free will.
I think I have misread your position.  You say nothing has free will, but haven't defined it.
It seems that you consider sentience to be a passive experiencer, lacking any agency in the physical world.  Morals are there as obligations to these external experiencers, to keep your movie audience contented so to speak.
Perhaps I am wrong about this epiphenomenal stance.  Kindly correct me if I've again got it wrong.
My whole argument has concerned an external agent that can exert its will on the outcome of physical events which would otherwise have taken place, but looking back over the posts I think I have not been reading carefully.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 02/10/2019 04:16:46
To prevent miscommunication, I think we should use the common definitions of terms before proposing our own redefinitions.
Quote
Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).
http://en.wikipedia.org/wiki/Sentience

Quote
Definition of sentience
1: a sentient quality or state
2: feeling or sensation as distinguished from perception and thought
https://www.merriam-webster.com/dictionary/sentience

Quote
Definition of consciousness
1a: the quality or state of being aware especially of something within oneself
b: the state or fact of being conscious of an external object, state, or fact
c: AWARENESS
especially : concern for some social or political cause
The organization aims to raise the political consciousness of teenagers.
2: the state of being characterized by sensation, emotion, volition, and thought : MIND
3: the totality of conscious states of an individual
4: the normal state of conscious life
regained consciousness
5: the upper level of mental life of which the person is aware as contrasted with unconscious processes
https://www.merriam-webster.com/dictionary/consciousness

If you use words with meanings significantly different from their commonly used definitions, maybe it's better to use different terminology, or even to coin a new word to express your intention.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 02/10/2019 19:46:23
Nothing exhibits free will.
I think I have misread your position.  You say nothing has free will, but haven't defined it.

Free will depends on an injection of magic somewhere to get round the problem of everything that happens having a cause. Even if that cause is something genuinely random, it still doesn't provide for free will. We do what we're forced to do by physics. At a higher level, we're simply trying to do the best thing (for ourselves) all the time, and the only way to break free of that rule in order to pretend to have free will is to do something less good, but that seems to be the best thing to do too as it's an attempt to satisfy ourselves that we have free will and to feel better for believing that (if we're stupid enough not to realise that we failed).

Quote
It seems that you consider sentience to be a passive experiencer, lacking any agency in the physical world.  Morals are there as obligations to these external experiencers, to keep your movie audience contented so to speak.
Perhaps I am wrong about this epiphenomenal stance.  Kindly correct me if I've again got it wrong.

If sentience is real, it is just a passenger. It appears to have no useful role in the machine. However, people report pain being an unpleasant thing and they typically consider it important not to cause it without justification (which means using it with the aim of causing less suffering).

There are two different discussions involved in this. One is a discussion of how morality works, and it's predicated on sentience being a real thing. The other is a discussion of what sentience is and how it works. The second of these is the biggest puzzle of them all, and it would be easy to waste your whole life trying to crack it. I'd rather wait to see what happens when the claims about sentience generated by human brains are traced back to see what evidence they are based on. We may uncover a self-delusion mechanism which fools a machine with no self into thinking it has a self. Alternatively, we may find something new to science which stuns everyone. What I repeatedly come up against though is people who don't look deeply enough at how computers work who insist that sentience can be operating in there somewhere in the bit they don't understand, and they're absolutely sure they're right because they're absolutely sure that they themselves are sentient and that they are just machines. That's maybe a third discussion, and it's worth having in that it can be resolved just by filling in the gaps for those people so that they can see that sentience cannot be operating where they hoped it might be. Morality is also resolved. What is not resolved is the part that seems so impossible that it just makes sentience look impossible, and yet it feels too real for that to be the case.
Title: Re: Is there a universal moral standard?
Post by: Halc on 03/10/2019 01:54:12
You say nothing has free will, but haven't defined it.
Free will depends on an injection of magic somewhere to get round the problem of everything that happens having a cause.
OK, you define free will as 1) having this external thing (what you call a sentience), and 2) it having a will and being able to exert that.  This actually pretty much sums up the concept from a typical dualist, yes.
I on the other hand would describe that situation as possession, where my will is overridden by a stronger agent, and its freedom taken away.  You don't describe possession.  The body retains its physical will and this 'sentience' gets its jollies by being along for the ride.

That said, you seem aware of the 'magic' that needs to happen.  Most are in stark denial of it, or posit it in some inaccessible place like the pineal gland, despite the complete lack of neurons that have their shots called by it.

You're an epiphenomenalist, a less mainstream stance.

Quote
If sentience is real, it is just a passenger. It appears to have no useful role in the machine.
Why do you posit it then?  Seems like the equivalent of positing the invisible pink unicorn that's always in the room.  If there's no distinction between the presence or absence of a thing, why posit it? Why might you not have many of them, a whole cinema full all taking the same ride?

Quote
There are two different discussions involved in this. One is a discussion of how morality works, and it's predicated on sentience being a real thing.
Most people don't define sentience as an epiphenomenal passenger, so most don't base their moral decisions on how it will make the unicorn feel.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 03/10/2019 20:56:47
You say nothing has free will, but haven't defined it.
Free will depends on an injection of magic somewhere to get round the problem of everything that happens having a cause.
OK, you define free will as 1) having this external thing (what you call a sentience), and 2) it having a will and being able to exert that.  This actually pretty much sums up the concept from a typical dualist, yes.

When I said it "depends on an injection of magic", I was ruling out free will on that basis - not endorsing it. Whatever the sentience does, it's caused to do that by the inputs and whatever algorithm its mechanism applies to them.

Quote
I on the other hand would describe that situation as possession, where my will is overridden by a stronger agent, and its freedom taken away.  You don't describe possession.  The body retains its physical will and this 'sentience' gets its jollies by being along for the ride.

I just see a whole lot of causation from the outside interacting with causation from the set up of whatever's on the inside, and every part of it is dictated by physics.

Quote
That said, you seem aware of the 'magic' that needs to happen.  Most are in stark denial of it, or posit it in some inaccessible place like the pineal gland despite the complete lack of neurons letting their shots be called by it.

I rule it out because it depends on magic.

Quote
You're an epiphenomenalist, a less mainstream stance.

I don't think that fits. If sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.

Quote
Why do you posit it then?  Seem like the equivalent of positing the invisible pink unicorn that's always in the room.  If there's no distinction between the presence or absence of a thing, why posit it?

If there's no sentience, then torture is impossible and morality has no purpose. Most people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself. If you are completely out of that camp, then you should be a nihilist with no self.

Quote
Why might you not have many of them, a whole cinema full all taking the same ride?

Indeed, there could be quintillions of sentiences in there all imagining themselves to be the only one, and they might not all be feeling the same thing as each other, but due to lack of memory, they aren't going to be capable of recognising any contradiction between what they feel and what the brain reads the feeling to be.

Quote
Most people don't define sentience as an epiphenomenal passenger, so most don't base their moral decisions on how it will make the unicorn feel.

If they believe in sentience, the sentient thing that feels is what morality is there to protect. If they don't have that, they don't need morality. But they want to have sentience without any sentient thing to experience the feelings, backed by nothing but magical complexity.
Title: Re: Is there a universal moral standard?
Post by: Halc on 03/10/2019 22:22:06
When I said it "depends on an injection of magic", I was ruling out free will on that basis - not endorsing it.
Understood, but this is only true given a free-will definition that involves this kind of magic going on, as opposed to somebody else's definition on which free will simply means not being remote-controlled.

Quote
I just see a whole lot of causation from the outside interacting with causation from the set up of whatever's on the inside, and every part of it is dictated by physics.
That sounds like a description of semi-deterministic physics. 

Quote
If sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.
This seems to be a contradictory statement.  If I feel the warmth of green, I cannot cause the body to discuss said warmth without performing said magic on the physical body which supposedly is incapable of such feelings.  If it has any causal role, there's magic going on.

Quote
Quote
Why do you posit it then?  Seem like the equivalent of positing the invisible pink unicorn that's always in the room.  If there's no distinction between the presence or absence of a thing, why posit it?
If there's no sentience, then torture is impossible and morality has no purpose.
Agree.  So I find your definitions rather implausible for this reason.  My view doesn't have this external passenger.  The physical being is all there is and is sentient in itself (yes, a different definition of sentience), has free will because nothing else is overriding its physical will, and morality has a purpose because there are obligations to the physical thing.

I also don't think morality is about pain and suffering.  Everybody that says that makes it sound like life is some kind of horrible thing to have to experience.  Pleasure and pain are means to an end.  If pleasure and pain were the end (the point of morality), then we should just put everybody on heroin.  Problem solved.  Recognizing the greater purpose isn't a trivial task.

Quote
Most people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.
Nonsense. I don't think I need the unicorn to feel my own pain for me. That you propose this indicates that the idea is beyond your comprehension, and not just an interpretation with which you don't agree.

Quote
If they believe in sentience, the sentient thing that feels is what morality is there to protect.
Almost nobody believes in the sort of sentience you describe. Typically it's a separate experiencer capable of said magic (think Chalmers), or in my case, a sentience composed of a physical process (Dennett, or whoever that hero is supposed to be).
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 04/10/2019 20:38:32
When I said it "depends on an injection of magic", I was ruling out free will on that basis - not endorsing it.
Understood, but this is only true given a free-will definition that involves this kind of magic going on, instead of somebody else that considers free will to be not remote controlled.

The same magic is required for it regardless of where it's controlled from. Whatever causes something is itself caused and is forced to cause what it causes. There is no such thing as choice in that whatever is chosen in the end was actually forced.

Quote
Quote
I just see a whole lot of causation from the outside interacting with causation from the set up of whatever's on the inside, and every part of it is dictated by physics.
That sounds like a description of semi-deterministic physics.

It's fully deterministic physics.

Quote
Quote
If sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.
This seems to be a contradictory statement.

It isn't. X causes Y, then Y causes Z --> X causes Z. Y causes Z but is forced to by X.
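As a toy illustration of that chain (a sketch only, with arbitrary arithmetic standing in for the causal links):

Code:
def y_from_x(x):   # X causes Y
    return x + 1

def z_from_y(y):   # Y causes Z
    return y * 2

def z_from_x(x):   # the composed chain: X causes Z
    return z_from_y(y_from_x(x))

assert z_from_x(3) == 8   # Y "causes" Z, but what Y does was forced by X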

Quote
If I feel the warmth of green, I cannot cause the body to discuss said warmth without performing said magic on the physical body which supposedly is incapable of such feelings.  If it has any causal role, there's magic going on.

There's no magic in Y being forced by X to cause Z.

Quote
Quote
If there's no sentience, then torture is impossible and morality has no purpose.
Agree.  So I find your definitions rather implausible for this reason.  My view doesn't have this external passenger.

If you don't have something experiencing the feelings, you have no sentience there and the feelings don't exist either. By throwing away the "passenger" you lose the sentience.

Quote
The physical being is all there is and is sentient in itself (yes, a different definition of sentience), has free will because nothing else is overriding its physical will, and morality has a purpose because there are obligations to the physical thing.

There is no free will for Y in "X causes Y causes Z", and no free will for X either, because it's caused by W. Your big mistake there is in accepting something that's actually impossible. There is no free will. You want the physical being to be sentient in itself, and that's fine: that's like particles being sentient, and it's possible that all stuff is sentient. That doesn't solve the problem of how an information system can ever get that knowledge from it in order to report the existence of sentience.

Quote
I also don't think morality is about pain and suffering.  Everybody that says that makes it sound like life is some kind of horrible thing to have to experience.  Pleasure and pain are means to an end.  If pleasure and pain were the end (the point of morality), then we should just put everybody on heroin.  Problem solved.  Recognizing the greater purpose isn't a trivial task.

Morality is about suffering AND the opposite. It's a harm:benefit calculation in which the harm is ideally minimised and the benefit (all kinds of pleasure) maximised, but those two aims conflict with each other in places, and you're looking for the best compromise between them to optimise quality of life.

Quote
Quote
Most people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.
Nonsense. I don't think I need the unicorn to feel my own pain for me. That you propose this indicates that the idea is beyond your comprehension, and not just an interpretation with which you don't agree.

But you are the unicorn. Of course you can't have a unicorn feel anything for you because then it would be the sentience rather than you. Don't attribute nonsense to me that comes out of your misreading of my position.

Quote
Quote
If they believe in sentience, the sentient thing that feels is what morality is there to protect.
Almost nobody believes in the sort of sentience you describe. Typically it's a separate experiencer capable of said magic (think Chalmers), or in my case, a sentience composed of a physical process (Dennett, or whoever that hero is supposed to be).

I don't give a damn where the sentient thing is or what it's made of: that's a job for science to uncover. The only thing that actually matters here is that for feelings to be real, something real has to experience them, and that is a sentience. No sentient thing --> no feelings can be felt --> no role for morality --> you can try to torture anyone as much as you like and no harm can be done. If you think sentience is real, you need to ask yourself where it is and how it interacts with the things that physics recognises as being real. Chalmers has sentient stuff, and that's not incompatible with physics, but he (and everyone else) cannot account for how that sentience can be detected by the brain in order for the information system of the brain to generate accounts of what that sentience is experiencing. Dennett appears to be a nihilist - it should be possible to torture him and have him assure everyone throughout that the pain isn't real and that he can't really be suffering. Dennett may be right, but it's hard to believe that when you're actually in pain (as I often am due to Crohn's disease). It feels too real to be a fiction: making something non-sentient believe that it's sentient is quite some trick.

What I object to most though is when people deny the existence of a sentient thing and yet insist on asserting that there is sentience in there. There isn't any in a computer which makes fake claims about being in pain when you type a particular key, and it's the same situation with any system which fakes the claims, no matter how complex the system is and how well hidden the cheating is. People who insist that there is sentience in such systems should also insist that it is there in the simple program that cheats by claiming that pain is experienced when a key is pressed just because it is programmed to print a string to the screen as a response which makes a claim and where the claim could be edited to make the opposite claim without changing the feeling of anything.
Title: Re: Is there a universal moral standard?
Post by: Halc on 05/10/2019 06:54:54
Quote from: David Cooper
The same magic is required for it regardless of where it's controlled from.
Blatantly false. A Roomba is controlled from within itself and it requires no magic to do so. It only requires magic if the control is to come from outside the physical realm.
Quote
Whatever causes something is itself caused and is forced to cause what it causes.
Indeed. You make it sound like a bad thing. I thought about what it would be like if choices were not based on inputs caused by prior state. I'd be dead in a day.
Quote
There is no such thing as choice in that whatever is chosen in the end was actually forced.
If that were true, mammals would not have evolved better brains to make better choices, or to make, say, moral choices. We are ultimately responsible for our choices, as evidenced by what happens to those that make poor ones. Not sure what choice is if you don't think that's going on.
Mind you, I agree that if the physics of the universe is deterministic, then my choices are determined. I'm just saying that they're still choices.

Quote
Quote
If sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.
This seems to be a contradictory statement.
It isn't. X causes Y, then Y causes Z --> X causes Z. Y causes Z but is forced to by X.
Which one (X, Y, or Z) is the sentience (your definition)? I thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.

Quote
If you don't have something experiencing the feelings, you have no sentience there and the feelings don't exist either.
Only true in your interpretation. I for instance never said there wasn't something experiencing my feelings. I just don't think it's a separate entity, passenger or otherwise. I'm fine with you disagreeing with it, but do you find inconsistency with it, without begging your own interpretation?

Quote
Quote
I also don't think morality is about pain and suffering.  Everybody that says that makes it sound like life is some kind of horrible thing to have to experience.  Pleasure and pain are means to an end.  If pleasure and pain were the end (the point of morality), then we should just put everybody on heroin.  Problem solved.  Recognizing the greater purpose isn't a trivial task.
Morality is about suffering AND the opposite.
I find that thinking shallow.  Heroin it is then, the most moral thing you can do to others. It minimizes suffering and maximizes pleasure, resulting in the optimum quality of life.

Quote
Quote from: Halc
Quote from: Cooper
Most people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.
Nonsense. I don't think I need the unicorn to feel my own pain for me.
...  Don't attribute nonsense to me that comes out of your misreading of my position.
I wasn't commenting on your position. Your statement above concerned the camp that I'm in, implying that pain cannot be felt given a different interpretation of mind.

Quote
The only thing that actually matters here is that for feelings to be real, something real has to experience them, and that is a sentience. No sentient thing --> no feelings can be felt --> no role for morality --> you can try to torture anyone as much as you like and no harm can be done.
Totally agree. That's all that matters for the purpose of this topic. I'm not the one that drove this discussion down into assertions about the interpretation of mind. Only a moral nihilist denies that feelings matter to anything, and I'm not in with that crowd.

Quote
Dennett appears to be a nihilist
A word you seem to use for any monist position. You're begging your interpretation to draw this conclusion.

Answer my question.  How do you know about your passenger if it cannot make itself known to you?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 05/10/2019 21:01:59
Quote from: David Cooper
The same magic is required for it regardless of where it's controlled from.
Blatantly false. A roomba is controlled from within itself and it requires no magic to do so. It just requires magic if the control is to come from outside the physical realm.

You appear to have lost track of the point. With the sentience, there are inputs and outputs, and the outputs are determined by the inputs. The inputs are the X (inputs) that cause Y in the sentient thing, and Y then causes Z (outputs). In the course of Y happening, feelings are supposedly generated, and some of the outputs document that in some way. A Roomba reacts to inputs in the same way, but doesn't generate claims about sentience in Y.

Quote
If that were true, mammals would not have evolved better brains to make better choices, or to make say moral choices. We are ultimately responsible for our choices, as evidenced by what happens to those that make poor ones. Not sure what choice is if you don't think that's going on.
MInd you, I agree that  if the physics of the universe is deterministic, then my choices are determined. I'm just saying that they're still choices.

Our choices are no different from the ones computers make. The computer making a choice between which string to print to the screen when you press a key is applying an algorithm to work out which one to print, but its choice is forced. You can run algorithms that produce apparently random numbers too, but the results are forced. We are like that: there are many factors that can determine what number you say if I ask you for a number between fifty and a hundred, but your choice is forced by the algorithms and perhaps disturbances in the system which affect the algorithms. That itch on your back may distract you for a moment and lead to a different number being chosen than the one you would have gone for if the distraction hadn't occurred when it did. It's all forced though. We can use interrupts in a computer to disrupt an algorithm which makes it miss an event at a timer that it's watching.
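A minimal sketch of that point in Python, using a seeded pseudo-random generator (the seed standing in for the total prior state of the system, itches and interrupts included):

Code:
import random

def pick_number(prior_state):
    # The generator looks like it chooses freely between 50 and 100,
    # but the prior state fully determines the outcome.
    rng = random.Random(prior_state)
    return rng.randint(50, 100)

assert pick_number(42) == pick_number(42)   # same state, same forced "choice"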

Quote
I thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.

I told you before that it has a causal role: the generation of data documenting the experience of sentience cannot be triggered without outputs from the sentience to inform the system that the experience happened. This is the key thing that science will some day be able to explore, because for sentience to be real, that output from it must exist. (The problem then though is how the output can be understood rather than just making baseless assertions about what it represents.)

Quote
Quote
If you don't have something experiencing the feelings, you have no sentience there and the feelings don't exist either.
Only true in your interpretation. I for instance never said there wasn't something experiencing my feelings. I just don't think it's a separate entity, passenger or otherwise. I'm fine with you disagreeing with it, but do you find inconsistency with it, without begging your own interpretation?

True in any rational interpretation. I don't say that it has to be a separate entity, but simply that for feelings to be felt, something has to feel them. Things that don't exist can't do anything, actions can't be performed by nothing, and experiences can't be experienced by nothing.

Quote
Quote
Morality is about suffering AND the opposite.
I find that thinking shallow.

You're entitled to find reality shallow if you like, but if you remove those things, morality is 100% redundant.

Quote
Your statement above concerned the camp that I'm in, implying that pain cannot be felt given a different interpretation of mind.

An interpretation of mind in which feelings are experienced by nothing is a magical interpretation.

Quote
Quote
Dennett appears to be a nihilist
A word you seem to use for any monist position. You're begging your interpretation to draw this conclusion.

I use the word to describe a position which denies that feelings exist at all and that there is any sentience.

Quote
Answer my question.  How do you know about your passenger if it cannot make itself known to you?

From the start I said that sentience could be a property of all particles (stuff, energy), so in that sense it needn't be a passenger, as it is the essential nature of that stuff. I call it a passenger when referring to its lack of any useful causal role: the same Z can be produced just by going straight from X to Z without the middleman. The only thing it appears to do beyond that is drive the generation of data to document the experiencing of feelings, but it's hard to believe that it actually does that, as it should be impossible to interpret that part of the output in any way that would let the system generating the data have any idea that there was an experience of feelings at all.
Title: Re: Is there a universal moral standard?
Post by: Halc on 06/10/2019 19:28:30
The inputs are the X (inputs) that cause Y in the sentient thing, and Y then causes Z (outputs). In the course of Y happening, feelings are supposedly generated, and some of the outputs document that in some way.
OK. Is Y the experiencing of the feelings, or is Y the physical feelings which are noticed by the sentient experiencer?  I'm trying to figure out whether it is the physical feelings or the sentient experience of those feelings that is causing Z, the output.

I ask because of this:

Quote
Quote
I thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.
I told you before that it has a causal role: the generation of data documenting the experience of sentience cannot be triggered without outputs from the sentience to inform the system that the experience happened.
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.

You say that physics is entirely deterministic, which means that output from something external to the physical system cannot cause any effects in said determined system.  In your quote just above, you assert the opposite, that the system is being informed of data from non-physical sources, which would make it non-deterministic, or which makes the sentience part of the deterministic physical system, in which case it isn't two systems, but just one.
 
Quote
I call it a passenger when referring to its lack of any useful causal role: the same Z can be produced just by going straight from X to Z
Here again you seem to deny the 'passenger' having a causal role, yet above you say it causes data about the feelings.  If I avoid standing in the rain because it gives me discomfort, then the discomfort definitely plays a causal role in my choosing to seek shelter.  There's not a direct link from rain to choice of seeking shelter if I don't know if the sentient experiencer prefers a wet environment or not.  Some things clearly have a preference for it, like say robins.
Title: Re: Is there a universal moral standard?
Post by: evan_au on 07/10/2019 01:42:15
Quote from: Halc
if the physics of the universe is deterministic
In quantum theory, physics is not deterministic (or at least, not determinable by us).

However, in mammals, I think moral decisions arise at a higher level than the quantum level - it is encoded in the strengths of synapses.

Quote from: David Cooper
Morality is ... a harm:benefit calculation in which the harm is ideally minimised and the benefit (all kinds of pleasure) maximised
As I understand it, people with certain brain structures have psychopathic tendencies
- However, they don't become full-blown psychopaths unless there is a trigger - such as being abandoned by their mother at a young age
- With this trigger, their moral rule seems to be: "The only harm that counts is harm to me, and the only pleasure that counts is what brings pleasure to me."
- A psychopath does a very local harm:benefit calculation
- Other systems of morality take a more global view of whose harm and benefit they include in the calculation
- Sometimes it's just "my family", "my tribe", "my skin colour", "my nation" or "my religion"
- Without the trigger, people with psychopathic tendencies could live useful and productive lives - as lawyers, drill sergeants or surgeons, for example
- So the synapses driving their morality are formed from various inputs: DNA, and development before and after birth. These synaptic weights can then be modified by the individual through teaching, experiences, and deductions.
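A loose analogy in Python, offered only as a toy weight update rather than a model of real synapses (all the numbers are hypothetical):

Code:
weight = 0.5          # hypothetical inborn (DNA-given) starting strength
learning_rate = 0.1

def nudge(weight, outcome):
    # Positive outcomes strengthen the response; negative ones weaken it.
    return weight + learning_rate * outcome

for outcome in (+1, +1, -1):   # stand-ins for teaching, experience, deduction
    weight = nudge(weight, outcome)

print(round(weight, 2))   # 0.6: the same inborn start, a different learned endpoint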

I agree that in the end, we are all responsible for our decisions.

Listen to neuroscientist James Fallon: https://after-on.com/episodes/029
Title: Re: Is there a universal moral standard?
Post by: Halc on 07/10/2019 02:37:50
Quote from: Halc
if the physics of the universe is deterministic
In quantum theory, physics is not deterministic (or at least, not determinable by us).
His assertion, not mine.  And deterministic doesn't mean determinable.

That said, quantum theory actually doesn't say one way or another.  It is interpretation dependent.
MWI and Bohmian mechanics for instance, while very different interpretations, are both hard deterministic (no true randomness).  Neither asserts that one can predict where a photon will be measured no matter how much we measure ahead of time. That's just not what deterministic means.

Given his assertions on the subject, I assume David has bought into one of these deterministic interpretations, or that he hasn't put much thought into it. I know he's a presentist, though that hasn't come up in this topic. The typical presentist tends not to choose a deterministic interpretation of QM, but the combination is not contradictory.

Quote
However, in mammals, I think moral decisions arise at a higher level than the quantum level - it is encoded in the strengths of synapses.
Agree that it's not a quantum thing at all. Quantum stuff always comes up because dualism needs a way to allow a non-physical will to effect changes in a physical world, and QM is where the argument lies over whether such external interference is feasible.

I think that to an extent, human morality is encoded into us at a deep level (DNA), but not completely.  Much of it is simply taught.  Being encoded in human DNA, it is human morality, not universal morality.
Look at the morality of wolves, which is quite strong and consistent from pack to pack.  That's DNA morality, not taught.  Wolves and bees are far more moral creatures than are humans, as measured by their adherence to their own code.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/10/2019 09:51:18
From the start I said that sentience could be a property of all particles (stuff, energy), so in that sense it needn't be a passenger as it is the essential nature of that stuff.
It seems to me that you used the Eastern philosophical definition of sentience.
Quote
Sentience is the capacity to feel, perceive, or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that require respect and care. The concept is central to the philosophy of animal rights because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.
I had some issues regarding this view, as I stated in my previous post. What is the ultimate/terminal goal of moral rules derived from this view? What will happen if we ignore them? Why are they bad?

Neuroscience has shown that we can manipulate neurotransmitters to temporarily disable a human's ability to feel. Hence it is possible to kill a living organism, including a human, without involving any feeling in the subject (see Coup de grâce), and hence without violating moral rules whose ultimate goal is to minimize pain and suffering while maximizing pleasure and happiness.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/10/2019 10:43:21
- So the synapses driving their morality are formed from various inputs: DNA, and development before and after birth. These synaptic weights can then be modified by the individual through teaching, experiences, and deductions.
I think this view is consistent with my thought experiment posted here
The next step for cooperating more effectively is splitting duties among colony members: some responsible for defense, some for digesting food, etc. Though the cells are genetically identical, they can develop differently due to gene activation by their surroundings.
This requires longer and more complex genetic material in each of the organism's cells.

I've mentioned that consciousness comes as a continuum. Different levels of consciousness are likely the product of evolution, involving random changes and natural selection. I summarized the process in the thread quoted above.
Development of feeling is just a milestone in the development of consciousness through evolution.
So they need the ability to distinguish objects in their surroundings and categorize them, so they can choose appropriate actions.
Some organisms develop pain and pleasure systems to tell whether circumstances are good or bad for their survival. They try to avoid pain and seek pleasure, which basically assumes that pain is bad while pleasure is good.
Though there are times it could be a mistake to seek pleasure and avoid pain, mostly this rule of thumb brings overall benefits to the organisms.
Avoiding pain can prevent organisms from suffering further damage which may threaten their lives, while seeking pleasure can help them obtain basic survival needs such as food and sex.
From neuroscience, we know that pain and pleasure are electrochemical processes in the nervous system. Hence seeking pleasure and avoiding pain should be treated as instrumental goals only, not as terminal goals in themselves. Otherwise organisms become the inevitable victims of reward hacking, as in drug abuse.
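The reward-hacking worry in miniature, as a toy sketch in Python (the actions and numbers are hypothetical):

Code:
# An agent that treats the reward signal as the terminal goal will prefer
# stimulating the signal directly over the behaviour it was meant to encourage.
actions = {
    "find_food": {"reward": 5,  "survival_value": 10},
    "take_drug": {"reward": 50, "survival_value": -10},  # hacks the signal
}

best_by_reward = max(actions, key=lambda a: actions[a]["reward"])
best_for_survival = max(actions, key=lambda a: actions[a]["survival_value"])

print(best_by_reward)      # take_drug: pleasure treated as the terminal goal
print(best_for_survival)   # find_food: what the reward was a proxy for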
The next milestone of consciousness that I know of is emotion. It involves expected future feelings. This ability requires additional memory capacity to build a rough model/simulation of the environment.
The next milestone is reason, which is the ability to think. It builds more robust and detailed models/simulations of the environment.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 07/10/2019 22:59:22
OK. Is Y the experiencing of the feelings, or is Y the physical feelings which are noticed by the sentient experiencer?  I'm trying to figure out whether it is the physical feelings or the sentient experience of those feelings that is causing Z, the output.

Let's look at something simple that might be sentient: a worm. You prod the worm with something sharp and it reacts as if in pain. The prodding is X and the reaction is Z. In between those two things is Y, and the pain (assuming that a worm can feel pain) is experienced there by something. You can do the exact same thing with a human, and if the human needs a feeling of pain as part of the mechanism, the worm likely has that too: evolution probably isn't going to add pain into this situation for us if it already has a system that works just fine without it while producing the same kind of response. We can't build a model of this with a feeling of pain in it: we simply don't know how to. The only models that we can build provide the same behaviour without pain being involved, or which assert that there is pain being experienced at some point without actually providing any way for the system to detect that this is happening and to report such a feeling being felt. The pain in any model we build is superfluous and would be impossible to detect. However, humans report the involvement of pain. Models only report it if they've been programmed to make baseless assertions which don't involve any measurement of pain or any other kinds of feelings.

There are two possibilities though if feelings are real: X causes Y (which is the experience of pain) and Y causes Z; or X causes Z directly while X also causes pain at Y, with Z then claiming that Y caused Z.

Quote
I ask because of this:

Quote
Quote
I thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.
I told you before that it has a causal role: the generation of data documenting the experience of sentience cannot be triggered without outputs from the sentience to inform the system that the experience happened.
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.

If you don't have output from the sentience, it has no role in the system. Its actual role may not be to cause Z, but Z is generating claims that Z was caused by Y. That's what science needs to explore to see why something at Z is generating such a claim.

Quote
You say that physics is entirely deterministic, which means that output from something external to the physical system cannot cause any effects in said determined system.  In your quote just above, you assert the opposite, that the system is being informed of data from non-physical sources, which would make it non-deterministic, or which makes the sentience part of the deterministic physical system, in which case it isn't two systems, but just one.

I don't think there is such a thing as true randomness, but it isn't important to bring that into this, and I didn't say that physics is entirely deterministic. Randomness is not free will. We can allow true randomness at base level if you like, but those random events then cause things, and that takes us to a point where everything else is caused, all dictated by the initial circumstances and those random inputs. In computer chips, they have to be designed in such a way that randomness doesn't interfere with their functionality, so everything that a program does is fully deterministic. In our brains, that may not be the case, but random events making a bit of neural net behave slightly differently each time it fires is not free will, and is more likely to cause trouble than do anything useful. Neural nets actually get trained in order to minimise unreliability issues, but this typically never reaches perfection and leads to us making mistakes in almost everything we do.

I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system. The outputs from the sentience are outputs from something that is part of the physical system (and in which feelings, if feelings exist at all, are experienced).
 
Quote
Here again you seem to deny the 'passenger' having a causal role,

No: I deny it having a useful role. You can cut out Y and have X cause the same Z as you get with X causes Y causes Z, except that when you have Y in the chain you also get these assertions being generated about there being feelings involved in the process.

Quote
If I avoid standing in the rain because it gives me discomfort, then the discomfort definitely plays a causal role in my choosing to seek shelter.

Then show me a model of that. It should be exactly like the pain one. A sensor on the skin sends a signal to the brain, and that's an input. The input is fed into a black box where discomfort is experienced. An output from the black box then causes the brain to move the animal into shelter. We can bypass the black box and the functionality is unchanged: the input wire can simply be connected directly to the output wire. Except, the black box also triggers claims to be generated about feelings being experienced inside the box, and it makes those known to the outside by sending a signal down another wire to say so. However, we could simply connect the input wire to both output wires and remove the black box, and the exact same functionality is produced, including the generation of claims about feelings being experienced in the black box even though the black box no longer exists in the system.
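That model can be written out as a sketch in Python (illustrative names only; the 'black box' is deliberately just a pass-through, which is the point):

Code:
def sensor(rain):             # the input wire from the skin
    return "wet" if rain else "dry"

def black_box(signal):        # where discomfort is supposedly experienced
    return signal             # the output is dictated entirely by the input

def behave(signal):           # output wire 1: move the animal to shelter
    return "seek shelter" if signal == "wet" else "stay put"

def report(signal):           # output wire 2: the generated claim
    return "I felt discomfort" if signal == "wet" else "I felt fine"

s = sensor(rain=True)
with_box = (behave(black_box(s)), report(black_box(s)))
without_box = (behave(s), report(s))   # input wired straight to both outputs
assert with_box == without_box         # identical functionality, box removed

The claim string is still generated after the box is gone, which is the superfluousness being described.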
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 07/10/2019 23:05:34
Agree that it's not a quantum thing at all. Quantum stuff always comes up because dualism needs a way to allow a non-physical will to effect changes in a physical world, and QM is where lies the argument that such external interference is or isn't feasible.

The reason that quantum stuff gets dragged into this is that it's impossible to model sentience without it. It looks impossible to model it with it too, but because it's complex, it provides some refuge for hope. If the brain doesn't perform some extraordinary kind of trick, sentience is fake and there is no role for morality; no self in any of us to need to be protected.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 07/10/2019 23:18:52
Neuroscience has shown that we can manipulate neurotransmitters to temporarily disable a human's ability to feel. Hence it is possible to kill a living organism, including a human, without involving any feeling in the subject (see Coup de grâce), and hence without violating moral rules whose ultimate goal is to minimize pain and suffering while maximizing pleasure and happiness.

You can kill everyone humanely without them feeling anything, but that's clearly immoral if you're producing inferior harm:benefit figures, and you would be doing so if you tried that. Imagine that you are going to live the lives of everyone in the system, going round and round through time to do so. There are a thousand people on an island and one of them decides that he can have a better life if he kills all the others, and by doing it humanely he imagines that it's not immoral. He doesn't know that he will also live the lives of all those other people and that he will be killing himself 999 times. If he knew, he would not do it because he'd realise that he's going to lose out heavily rather than gain.

Of course, in the real world we don't believe that we're going to live all those lives in turn, but the method for calculating morality is right regardless: this is the way that AGI should calculate it. Morality isn't about rewarding one selfish person at the expense of all the others, but about maximising pleasure (though not by force - we don't all want to be drugged for it) and minimising suffering.

Also, we're setting things up for future generations. We care about our children's children's children's children's children, and we don't want to set up a system that picks one of them to give the Earth to while the rest are humanely killed. Morality isn't about biasing things in favour of one individual or group, but about rewarding all.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/10/2019 09:50:41
Neuroscience has shown that we can manipulate neurotransmitters to temporarily disable a human's ability to feel. Hence it is possible to kill a living organism, including a human, without involving any feeling in the subject (see Coup de grâce), and hence without violating moral rules whose ultimate goal is to minimize pain and suffering while maximizing pleasure and happiness.

You can kill everyone humanely without them feeling anything, but that's clearly immoral if you're producing inferior harm:benefit figures, and you would be doing so if you tried that. Imagine that you are going to live the lives of everyone in the system, going round and round through time to do so. There are a thousand people on an island and one of them decides that he can have a better life if he kills all the others, and by doing it humanely he imagines that it's not immoral. He doesn't know that he will also live the lives of all those other people and that he will be killing himself 999 times. If he knew, he would not do it because he'd realise that he's going to lose out heavily rather than gain.

Of course, in the real world we don't believe that we're going to live all those lives in turn, but the method for calculating morality is right regardless: this is the way that AGI should calculate it. Morality isn't about rewarding one selfish person at the expense of all the others, but about maximising pleasure (though not by force - we don't all want to be drugged for it) and minimising suffering.

Also, we're setting things up for future generations. We care about our children's children's children's children's children, and we don't want to set up a system that picks one of them to give the Earth to while the rest are humanely killed. Morality isn't about biasing things in favour of one individual or group, but about rewarding all.
Your calculation of harm:benefit here has nothing to do with feelings. Moral rules based on pleasure and suffering as their ultimate goals are vulnerable to reward hacking (such as drugs) and exploitation by utility monsters.
We know that killing a random person is immoral, even if we can make sure that the person doesn't feel any pain while dying. There must be a more fundamental reason for reaching that conclusion, other than minimising suffering, because no suffering is involved here.
I have mentioned that moral rules are created as methods to protect conscious beings from being harmed by other conscious beings. Those rules cannot protect us from unconscious threats such as natural disasters, germs, or beast attacks.
Title: Re: Is there a universal moral standard?
Post by: evan_au on 08/10/2019 10:30:56
Quote from: David Cooper
standing in the rain ... we could simply connect the input wire to both output wires and remove the black box and the exact same functionality is produced, including the generation of claims about feelings being experienced in the black box even though the black box no longer exists in the system.
If you grew up in Scottish winters, standing in the rain is likely to give you hypothermia.
If you grew up in Darwin (Australia), standing in the rain cools you down a bit, and the water will evaporate fairly soon anyway.

We need the black box, because an individual human might be born in Edinburgh or Darwin.

If all reactions were hard-wired, without the integration of other sensations, experiences (and mothers yelling at children), humans would not have spread so far around the world.

Quote from: evan_au
Sometimes morality is just applied to "my family", "my tribe"...
And sometimes morality is just applied to "my species"
- while some people (like Buddhists) want to apply morality to all living animals (partly because they think that, one day, they might be one of those animals).
- Some people want to extend it to plants
- And there are undoubtedly impacts of fungicides on microbes living in the soil and living in symbiosis with trees and plants; or microbes living in symbiosis with our guts
- Some people even wish to extend morality to the whole planet...

...We haven't made enough of an impact on the universe to be worried about saving the universe - but planetary protection officers are worried about us polluting Mars with our microbes, and destroying any traces of native Martian life.
Title: Re: Is there a universal moral standard?
Post by: Halc on 08/10/2019 14:44:14
Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
If you don't have output from the sentience, it has no role in the system.
With that I agree, but you are not consistent with this model.

Quote
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.
OK, this is different. If it is part of the physical system, why can't it play a role in the system?  What prevents it from having an output?
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'.  I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 08/10/2019 22:12:00
Your calculation of harm:benefit here has nothing to do with feelings.

It's entirely about feelings. The benefits are all feelings, and the harm is all feelings. If you prevent some of the benefits by removing the people who would have received them, you have worse harm:benefit figures as a result of their loss. Killing them humanely doesn't prevent that loss. Let's say that with our thousand people on the island, the average one of them gets 90 units of pleasure and 10 units of suffering in their life, so we have 10,000 units of suffering and 90,000 units of pleasure. By using a ":" I accidentally made it look like a ratio, but that isn't the right way to crunch the numbers. What you have to do is subtract the 10,000 from the 90,000 to get the score of 80,000. That is what the lives of those 1000 people are worth. If you kill 999 people humanely, you reduce the harm figure for each of those 999 people from 10 to 0, and you reduce the pleasure figure from 90 to 0 for each of them too. The one survivor is happier, and he may now have a suffering value of 1 and a pleasure value of 500, which we can turn into a score of 499. The quality of life for the new population is 499, but it used to be 80,000. That's a very immoral change indeed.
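The same arithmetic as a short Python sketch (using the hypothetical unit values above):

Code:
def score(population, pleasure_each, suffering_each):
    # Total quality-of-life score: pleasure minus suffering, summed over everyone.
    return population * (pleasure_each - suffering_each)

before = score(1000, 90, 10)   # 1000 * (90 - 10) = 80000
after = score(1, 500, 1)       # the lone survivor: 499

print(before, after)           # 80000 vs 499: a huge net loss, hence immoral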

Note that if we could get that one surviving individual's pleasure up to 100,000, that would change the situation, but it would be hard to achieve such figures in any real scenario unless we're dealing with one human versus 999 mosquitoes.

Quote
Moral rules based on pleasure and suffering as their ultimate goals are vulnerable to reward hacking (such as drugs) and exploitation by utility monsters.

People don't want to be put on drugs by force. Those who wish to do that to others have a moral obligation to do it to themselves instead and allow others to make the same choice. As it happens, a lot of people do make that choice for themselves, and it doesn't appear to give them better lives. And utility monsters are not doing the sums correctly.

Quote
We know that killing a random person is immoral, even if we can make sure that the person doesn't feel any pain while dying. There must be a more fundamental reason to reach that conclusion, other than minimising suffering, because no suffering is involved here.

You've focused on one half of the sum and ignored the other. That killer is not taking into account the loss of pleasure that results from his actions.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 08/10/2019 22:15:55
If you grew up with Scottish winters, standing in the rain is likely to give you hypothermia.
If you grew up in Darwin (Australia), standing in the rain cools you down a bit, and the water will evaporate fairly soon anyway.

That is taken into account before the inputs go into the black box. The feelings relate to the total and not to the individual components. The black box remains superfluous.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 08/10/2019 22:32:57
Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
If you don't have output from the sentience, it has no role in the system.
With that I agree, but you are not consistent with this model.

I've been over this stuff a thousand times in conversations like this and I'm being consistent throughout. The models keep changing to illustrate different points and what's said about them has to change to match. I don't know what you think isn't consistent, but if you want to chase it down you'll find that it isn't there.

Quote
Quote
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.
OK, this is different. If it is part of the physical system, why can't it play a role in the system?  What prevents it from having an output?

Nothing prevents it from having an output. The problem is that the output is dictated by the input in such a way that the experience of feelings is superfluous. That doesn't mean an experience of feelings isn't part of the chain of causation, but the system would work just as well without it. And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there. If you attempt to put the information system inside the black box, we can again isolate the part where the feelings are experienced and put that into another black box within the first one, and again we see that the information system that generates the claims about feelings does so without seeing any evidence for them existing.

Quote
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'.  I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.

I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent, unless there's something spectacular going on in the physics which science has not yet uncovered. There is no way to integrate sentience into computers in any way that enables the information system to know what is being experienced by anything that might be feeling feelings. It simply can't be done. If feelings are real in humans, something truly weird is going on.
Title: Re: Is there a universal moral standard?
Post by: Halc on 09/10/2019 01:42:40
And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there.
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings. Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.

On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.

Quote
Quote
I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,
Cannot competent?  That seems a typo, but I cannot guess as to what you meant there.
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection.  Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.

Quote
unless there's something spectacular going on in the physics which science has not yet uncovered.
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.  In reality, there's more than one, but a serial line would do in a pinch.  Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 09/10/2019 02:01:07
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts that approximate a more fundamental moral rule. The real fundamental moral rule must apply universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.
Title: Re: Is there a universal moral standard?
Post by: Halc on 09/10/2019 02:47:21
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game.
The set of all possible chess states does not represent a game being played. It wouldn't be an eternal structure if it did.
If the states have any sort of property of being better or worse than a different state, then there are exactly 3 states: Ones where white can force a win, ones where black can, and the remaining states.  The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.
So no, there are no values on the pieces or rules of thumb in the set I described. There are those 3 states at best, and not even those if the concept of 'win' is not part of the various final positions (the ones that don't have a subsequent position).
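
For what it's worth, that three-way classification can be sketched as exhaustive game-tree search; in the sketch below, moves and result stand for assumed callbacks of some trivially small game, since real chess has far too many positions to search this way:

Code:
# Toy sketch of the three-way classification above: with perfect look-ahead,
# every position is just white-can-force-a-win (+1), black-can (-1) or
# neither (0). 'moves' and 'result' are assumed callbacks for a trivially
# small game; result() must report positions with no legal moves as terminal.

def value(position, white_to_move, moves, result):
    """+1 if white can force a win, -1 if black can, 0 otherwise."""
    outcome = result(position)        # +1 / -1 / 0, or None if not terminal
    if outcome is not None:
        return outcome
    children = [value(p, not white_to_move, moves, result)
                for p in moves(position, white_to_move)]
    return max(children) if white_to_move else min(children)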
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 09/10/2019 07:10:55
The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.
In some cases we can, especially when the number of possible moves ahead is limited. That's why in high-level games, grandmasters often resign while they still have several moves left before inevitably falling into a checkmate position.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 09/10/2019 20:49:18
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts that approximate a more fundamental moral rule. The real fundamental moral rule must apply universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.

The more fundamental rule is that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king, and he will be just as happy as if he had annihilated the other side without losing a piece of his own.

If you think my method for calculating morality doesn't work, show me an example of it failing.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 09/10/2019 21:25:16
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings.

Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.

Quote
Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.

My stance is that if feelings are real, something's going on in the brain which is radically different from anything science knows about when it comes to computation, because feelings are incompatible with what computers do.

Quote
On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.

Not at all. The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.

Computers don't have a "read qualia" instruction, but if they did, it would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box. That is the big disconnect. http://magicschoolbook.com/consciousness - this illustrates the problem, and I've been trying to find an error in it for many years.

For what it's worth, I think sentience (and consciousness as a whole) is the most fundamental thing that we're dealing with here, not least because it's the one thing about our universe that can't be a simulation. I think that the way we do processing in computers is not the only way that computation can be done and that there must be some alternative method in which sentience is at its core. It is necessarily part of physics, but it has not yet been identified. Tracing back the claims that we generate to see what evidence they're based on is the way to explore this, but it may be hard to do that without destroying the thing whose workings we're trying to study. If everything else is a simulation, the mechanism may be very carefully hidden, but it has to show up somewhere because it is part of the chain of causation.

Quote
Quote
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,
Cannot competent?  That seems a typo, but I cannot guess as to what you meant there.

"cannot be competent" - a word went missing somehow.

Quote
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection.  Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.

We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience? Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that? If we run that information on a Chinese Room processor, we find that there's no place for feelings in it. Where is it reading the feelings and how is it generating data to document that experience of feelings? We can stick a black box into it in which the feelings are felt, and then we can go into that black box to find another information system with a black box in it where the feelings are felt, and then when we go into that box we find another black box... It's black boxes all the way down forever.

You're one of the most rational people I've encountered in my search for intelligent life on this planet, so maybe you'll be able to get your head round the problem and then be able to start looking for solutions in the right place. We're looking for a model that makes sense of sentience by showing its role and by showing how data is generated to document the experience of feelings. With computation as we know it, there is no way to make such a model. We're missing something big.

Quote
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.

How do you know what the output from the box means? How does the data system attribute meaning to that signal? If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that. The information in the file was not created by anything in the black box, and whatever it was that created that data has no way of knowing anything about feelings, so the claims generated by mapping data in that file to input from that port are nothing more than fiction.
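
A minimal sketch of that lookup step, to show where the disconnect lies; the port numbers and the table of meanings are invented purely for illustration:

Code:
# Sketch of the 'look up a file' interpretation step described above.
# The table of meanings is authored outside the black box, so the claim
# it produces is asserted rather than verified. All names are invented.

PORT_MEANINGS = {0x01: "pain is being felt",
                 0x02: "pleasure is being felt"}

def interpret(port_value):
    # Nothing in this mapping can check whether a feeling actually exists;
    # it only translates a number into a pre-authored assertion.
    return PORT_MEANINGS.get(port_value, "unknown signal")

print(interpret(0x01))   # "pain is being felt" - true or not, we can't tell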

Quote
In reality, there's more than one, but a serial line would do in a pinch.

Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language? There's an information processing system in the black box, and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them to it and creates an assertion about something which it cannot know anything about.

Quote
Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.

Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all? How does it construct the data that documents this experience of feeling, and where does it ever see the evidence that the feeling is in any way real?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/10/2019 08:02:30
The more fundamental rule is that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king, and he will be just as happy as if he had annihilated the other side without losing a piece of his own.
Yes. It's written in the rules of the game. People tend to be more emotional when they are dealing with anthropomorphized objects, such as chess pieces. I don't see something like that in other games like Go, where the pieces are not anthropomorphized.

If you think my method for calculating morality doesn't work, show me an example of it failing.

Quote
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.

https://en.wikipedia.org/wiki/Utilitarianism#Criticisms



Quote
The thought experiment
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.[2]

This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]

The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.

For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.

It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]

History
Robert Nozick, a twentieth century American philosopher, coined the term "utility monster" in response to Jeremy Bentham's philosophy of utilitarianism. Nozick proposed that accepting the theory of utilitarianism causes the necessary acceptance of the condition that some people would use this to justify exploitation of others. An individual (or specific group) would claim their entitlement to more "happy units" than they claim others deserve, and the others would consequently be left to receive fewer "happy units".

Nozick deems these exploiters "utility monsters" (and for ease of understanding, they might also be thought of as happiness hogs). Nozick poses utility monsters justify their greediness with the notion that, compared to others, they experience greater inequality or sadness in the world, and deserve more happy units to bridge this gap. People not part of the utility monster group (or not the utility monster individual themselves) are left with less happy units to be split among the members. Utility monsters state that the others are happier in the world to begin with, so they would not need those extra happy units to which they lay claim anyway.
https://en.wikipedia.org/wiki/Utility_monster
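
The two aggregation rules contrasted in the quoted passage are easy to state in code; a minimal sketch, with purely illustrative utility numbers:

Code:
# Sketch contrasting the two aggregation rules in the quoted passage.
# All utility numbers are purely illustrative.

def total_utility(utilities):
    return sum(utilities)        # total utilitarianism: maximize the sum

def maximin_utility(utilities):
    return min(utilities)        # Rawls' maximin: score = worst-off member

ordinary = [10] * 99
print(total_utility(ordinary + [10_000]))  # a 'happy monster' dominates the sum
print(maximin_utility(ordinary + [1]))     # an unhappy monster pins maximin at 1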
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 10/10/2019 20:29:41
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.

I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.
 
Quote
The thought experiment
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.

No it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think: we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
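
To make the missing fear term concrete, a small sketch; every figure in it is invented for illustration:

Code:
# Illustrative sketch of the argument above: the 'missing part of the
# calculation' is the fear created by abrupt replacement. Numbers invented.

def world_score(humans, monsters, fear_per_capita=0):
    human_score   = humans   * (90 - 10)      # per-capita pleasure - suffering
    monster_score = monsters * (9000 - 10)
    fear_cost     = (humans + monsters) * fear_per_capita
    return human_score + monster_score - fear_cost

# Abrupt annihilation: the monsters now live in fear of being replaced next.
print(world_score(0, 1000, fear_per_capita=9500))   # -510000, a net loss

# Gradual, voluntary transition: no fear term.
print(world_score(500, 500))                        # 4535000, a large gain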

Quote
[1] Nozick writes:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.[2]

No. Utilitarian theory applied correctly does not allow that because it actually results in a hellish life of fear for the utility monsters.

Quote
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]

When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.

Quote
The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.

That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy because most philosophers are shoddy thinkers who fail to take all factors into account).

Quote
For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.

I don't know what that is, but it isn't utilitarianism because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.

Quote
It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]

If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
Title: Re: Is there a universal moral standard?
Post by: Halc on 11/10/2019 05:34:37
Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of the ability to imitate human intelligence rather than feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
Secondly, the mind of no living thing works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.

- - -

Concerning the way we've been using the term 'black box'.  You are describing a white box since you are placing the feelings of the sentience in the box.  A black box has no description of what is in the box, only a description of inputs and outputs.  A black box with no outputs can be implemented with an empty box.

Quote
The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
If the inputs and outputs are identical, the box can be implemented as a pass-through box, which is indeed superfluous unless bypass is not an option.  The phone lines in my street work that way, propagating signals from here to there with the output being ideally the same as the input.
Those lines are not superfluous because my phone would not work if you took them away.  You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs.  If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.
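
A minimal sketch of that indistinguishability point; the class names are invented for illustration:

Code:
# Sketch of the point above: a box that 'generates feelings' internally but
# feeds inputs straight to outputs is indistinguishable, at its outputs,
# from a plain pass-through box. Class names are invented for illustration.

class PassThroughBox:
    def process(self, signal):
        return signal                         # output == input

class FeelingBox:
    def process(self, signal):
        _feeling = f"experiencing {signal}"   # internal state, never read
        return signal                         # same output as pass-through

for box in (PassThroughBox(), FeelingBox()):
    print(box.process("pain-signal"))         # identical observable behaviour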

Quote
it would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box.
The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.

Quote
http://magicschoolbook.com/consciousness - this illustrates the problem, and I've been trying to find an error in it for many years.
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.

Quote
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot [be] competent,
OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims?  I don't think you meant to say that either, but that's how it comes out now.  The claims (the posts on this site) are output by the information system, right?  What else produces them? Maybe you actually mean it.

Quote
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?
Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.

Quote
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?
You don't know where the whole thing is?
Neurologists say that most of the basic emotions we feel (pleasure, fear and such) are processed in the limbic system, so most creatures don't feel them. Various kinds of qualia are handled in different places.  Pain in particular seems not specific to any subsystem, so 'whole thing' (not just brain) is a pretty good description. If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.

Quote
If we run that information on a Chinese Room processor, we find that there's no place for feelings in it.
The Chinese room models a text-only I/O.  A real human is not confined to a text-only stream of input.  It makes no attempt to model a human.  If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.

Quote
With computation as we know it, there is no way to make such a model. We're missing something big.
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.

Quote
Quote
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.
How do you know what the output from the box means?
I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and v-v. It violates the rules of the data system, leaving the person blind and deaf.

Quote
How does the data system attribute meaning to that signal?
Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.

Quote
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.
Look up a file? My, you sure know a lot more about how it works than I do.

Quote
Quote
In reality, there's more than one, but a serial line would do in a pinch.
Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?
You tell me.  You're the one that compartmentalizes it into an isolated box like that. Not my model at all.

Quote
There's an information processing system in the black box
Then it isn't a black box.
Quote
and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them to it and creates an assertion about something which it cannot know anything about.
Again, your model, not mine. I have no separation of information system and the not-information-system.

Quote
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?
There's no reading of something outside the information system. My model only has the system, which does its own feeling.
Quote
How does it construct the data that documents this experience of feeling
Sounds like you're asking how memory works. I don't know. Not a neurologist.
Quote
where does it ever see the evidence that the feeling is in any way real?
I (the information system) have subjective evidence of my feelings.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/10/2019 10:13:54
No it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think: we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
I think we need to be clear about our definitions of the terms we use in this discussion, since subtle differences may lead to frustrating disagreements. I want to avoid implicit assumptions and taking for granted that our understanding of a term is the same as the other participants'.
Who do you mean by 'anyone'? Humans? What about animals and plants?
Why is pleasure good while pain is bad? What about an inability or a reduced ability to feel pain or pleasure?
How many fewer children would be considered acceptable?
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 11/10/2019 22:25:26
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of the ability to imitate human intelligence rather than feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
Secondly, the mind of no living thing works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.

In principle, a system with no feelings could pretend to have feelings sufficiently well to pass the Turing Test. It would gradually learn what it has to claim to be feeling in any situation so as not to be caught out.

We do actually process streams of instructions in our head when doing careful processing, like maths, or when making food by following a recipe. The algorithms that we apply there are directly replicable on computers.

Quote
Concerning the way we've been using the term 'black box'.  You are describing a white box since you are placing the feelings of the sentience in the box.  A black box has no description of what is in the box, only a description of inputs and outputs.  A black box with no outputs can be implemented with an empty box.

Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.

Quote
Those lines are not superfluous because my phone would not work if you took them away.  You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs.  If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.

First, the outputs are not the same as the inputs: there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced. Then there's the bit about your phone functioning. What evidence do we have that sentience is functioning in the box? Is there information coming out of the box that says so? If so, how is that data constructed? Do we have an information system inside the box creating it, or do we just have a signal coming out of the box whose meaning is asserted for it by an information system on the outside which cannot know if its claims are true? If the latter, the data is incompetent. If the former, then there's an information system inside the box (now turning white) and we're adding a new black box on the inside to hold the part of the system which we can't model.

Quote
The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.

The whole point of the black box is to draw your attention to the problem. If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience. To make a proper model of sentience we have to eliminate the black box, but no one has ever managed to do so because they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".

Quote
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.

Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.

Quote
OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims?  I don't think you meant to say that either, but that's how it comes out now.  The claims (the posts on this site) are output by the information system, right?  What else produces them? Maybe you actually mean it.

That is predicated on the idea that the brain works like a computer, processing data in ways that science understands. If the claims coming out of my head about feelings are competent, some other kind of system is putting that data together in a way that science has yet to account for.

Quote
Quote
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?
Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.

That isn't good enough. The whole point is that the only way to interpret that output is to map baseless assertions to it, unless the output is already coming in the form of data that the external data system can understand, but if that's the case, we need to see the information system that constructed that data and to model how it knew what the output from the sentient thing means.

Quote
Quote
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?
You don't know where the whole thing is?

I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it. It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.

Quote
If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.

You've only found it once you've found the interface and seen how the data system knows that the data it's generating is true.

Quote
The Chinese room models a text-only I/O.  A real human is not confined to a text-only stream of input.  It makes no attempt to model a human.  If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.

A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete. It cannot handle actual feelings, but can handle data that represents feelings. A piece of paper with a symbol on it can represent a feeling, but nothing there is feeling that feeling. We need to model actual feelings, and that's something that science cannot yet do in any way that enables them to be detected.

Quote
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.

We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience? Waving at something complex isn't good enough. You have no model of sentience, but we do have models of neural nets which are equivalent to running algorithms on conventional computers.

Quote
Quote
How do you know what the output from the box means?
I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and v-v. It violates the rules of the data system, leaving the person blind and deaf.

If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.

Quote
Quote
How does the data system attribute meaning to that signal?
Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.

The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse. If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it it has to use a language. If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.

Quote
Quote
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.
Look up a file? My, you sure know a lot more about how it works than I do.

That's exactly the point. You see room for something impossible in the places where you don't know what's going on. I understand what's going on throughout the whole system, apart from the place where the magic is needed to complete the model.

Quote
Quote
Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?
You tell me.  You're the one that compartmentalizes it into an isolated box like that. Not my model at all.

Your model works on magic. I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.

Quote
Quote
There's an information processing system in the black box
Then it isn't a black box.

We want to explain the magic component, so we break it open and it becomes a white box, but it then contains a black box where the magic component resides. We can go on opening an infinite chain of black boxes and watch them turn white, but there will always be another black box containing the magic component which you can't model.

Quote
Again, your model, not mine. I have no separation of information system and the not-information-system.

And that's how you fool yourself into thinking you have a working model, but it runs on magic. The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it? It can't. That's where science is hopelessly lost. Our current understanding of computation is not compatible with sentience.

Quote
Quote
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?
There's no reading of something outside the information system. My model only has the system, which does its own feeling.

And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true? This is where there's a gap in your knowledge a mile wide, and you need to fill that.

Quote
Quote
How does it construct the data that documents this experience of feeling
Sounds like you're asking how memory works. I don't know. Not a neurologist.

I'm asking for a theoretical model. Science doesn't have one for this.

Quote
Quote
where does it ever see the evidence that the feeling is in any way real?
I (the information system) have subjective evidence of my feelings.

Show me the model.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 11/10/2019 22:52:52
Who do you mean by 'anyone'? Humans? What about animals and plants?

If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).

Quote
Why is pleasure good while pain is bad?

They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.

Quote
What about an inability or a reduced ability to feel pain or pleasure?

What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.

Quote
How many fewer children would be considered acceptable?

Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.
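
A small sketch of how such a phase-out plays out numerically; the starting population, per-generation ratio and threshold are assumed purely for illustration:

Code:
# Sketch of the gradual phase-out described above: with slightly
# sub-replacement fertility, a population shrinks geometrically.
# The starting size, ratio and threshold are assumed for illustration.

def generations_below(start, ratio, threshold):
    """Generations until the population falls below the threshold."""
    population, generations = start, 0
    while population >= threshold:
        population *= ratio
        generations += 1
    return generations

print(generations_below(8_000_000_000, 0.95, 1_000_000))   # 176 generations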
Title: Re: Is there a universal moral standard?
Post by: Halc on 12/10/2019 23:01:36
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.
This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.

Quote
First, the outputs are not the same as the inputs
Didn't you say otherwise?
Quote from: David
the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
OK, this statement says the inputs can be fed into the outputs, but not that they necessarily are.  It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.

Quote
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.
This contradicts your prior statement.
1) How do you know about these lines? The answer seems awfully like something you just now made up.
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?  Duplicate outputs are usually there for redundancy so the system still works if one of them fails. That's part of the reason you have two eyes and such. The presence of a duplicate data stream does not indicate feelings if the one 'main' stream does not indicate the feelings.  There is no additional information in the 2nd line.
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.

Where does the output of your black box go?  To what is it connected?  This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all.  If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.

Quote
The whole point of the black box is to draw your attention to the problem.
More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.
Quote
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.
So you're admitting you don't have a proper white box model?  Does anybody claim they have one?
Quote
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".
I'm unaware of this wording.  There are no 'routines' for one thing. They very much do have evidence mapping where much of this functionality goes on, but that isn't a model of how it works.  It is a pretty good way to say which creatures 'feel' the various sorts of feeling to which humans can relate.
I don't think there can be an objective model of a subjective experience.  We might create an artificial sentience, and yet even knowing how it was created, we'd not be able to say how it works.  Researchers are already way past the point of knowing how some of the real AI systems work.  Fake AI, maybe, but not real AI.  A self-driving car is fake AI.

Quote
Quote
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others presuppose a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.
Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.
Some small nits.  The information system processes only data (1).  3 says the non-data must first be converted to data before being given to the information system (IS), but 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data.  As I said, that's just a nit.
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.

A counterexample comes up with 10, which says that data not covered by the rules of the IS cannot be considered by the IS.  Not sure what they mean by 'considered', but take a digital signal processor (DSP) or just a simple amplifier.  It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream.  This is similar to the guy in the Chinese room.  He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.
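To make the counterexample concrete, here is a minimal sketch (the sample values are invented) of an information system processing a stream it has no rules for interpreting:

Code:
# A toy "amplifier": it processes a data stream without any rules for what
# the stream means. The samples could be audio, Chinese text, or noise;
# the processor neither knows nor cares, yet it processes them fine.
def amplify(stream, gain=2.0):
    return [gain * sample for sample in stream]

opaque_stream = [0.1, -0.4, 0.25, 0.9]  # meaningless to this system
print(amplify(opaque_stream))           # [0.2, -0.8, 0.5, 1.8]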

My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

Quote
That is predicated on the idea that the brain works like a computer, processing data in ways that science understands.
Science does not posit the brain to operate like a computer.  There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions.  Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.

Quote
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.
Yes, it's the whole thing.  It isn't a special piece of material or anything.

Quote
It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.
Doesn't work that way. An eye arguably 'makes data', yet isn't a device that 'knows' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this alternately formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.

Quote
Quote
All the [Chinese room] experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.
A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete.
Didn't say otherwise, but all it does is run code.  The processor doesn't know Chinese.  But the system (the whole thing) does.  There is no black box where the Chinese part is.  There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.
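A toy version of that division of labour (the two rule-book entries here are invented stand-ins, not a real conversation system):

Code:
# A toy Chinese Room: the operator is a blind lookup loop. Whatever
# Chinese competence exists lives in the rule book as a whole, not in
# the operator, which just matches symbols it does not understand.
RULE_BOOK = {
    "你好": "你好！",           # a greeting and its scripted reply
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def operator(symbols):
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please repeat."

print(operator("你好"))  # the operator still knows no Chinese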

Quote
We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience?
This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.
Anyway, a neural net would not accurately simulate a human since a human is more than a network. A human is part of a larger network, which would also need to be simulated. Not saying it cannot be done, and I don't think it need be done at a deeper level than electro-bio-chemical.  Going to the molecular level for instance seems unnecessary.

Quote
Waving at something complex isn't good enough. You have no model of sentience.
Pretty much how you're presenting your views, yes.  My model is pretty simple actually.  I don't claim to know how it works.  Neither do you, but you add more detail than I do while still hiding your complex part in a black box, as if you had an understanding of how the data-processing part worked.

Quote
but we do have models of neural nets which are equivalent to running algorithms on conventional computers.
That we do.

Quote
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.
This makes no sense to me since I don't model the sentience as a separate thing. There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.

Quote
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.
The first guess is closer. Somebody put out a standard interface and both computer and mouse adhere to that interface. Sensory organs and brains don't work that way, being evolved rather than designed. Turns out the sensory organ pretty much defines the data format, and the IS is really good at extracting meaning from any data. So we could in theory affix a 6th sense to detect vibrations of passing creatures, like the lateral line in fish. Run some nerves from that up the spine and the IS would quickly have a new sense to add to its qualia. Some people see a 4th color, and some only 2.
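For what "both adhere to a published interface" looks like in practice, here's a rough sketch of decoding a simplified PS/2-style three-byte mouse packet (the bit layout follows the published convention but is simplified; treat it as illustrative):

Code:
# Decoding a simplified PS/2-style mouse packet: mouse and host both
# follow a published convention; neither side "understands" the other.
def decode_packet(b0, b1, b2):
    left = bool(b0 & 0x01)              # bit 0: left button
    right = bool(b0 & 0x02)             # bit 1: right button
    dx = b1 - 256 if b0 & 0x10 else b1  # bit 4: X sign (9-bit two's complement)
    dy = b2 - 256 if b0 & 0x20 else b2  # bit 5: Y sign
    return left, right, dx, dy

print(decode_packet(0x09, 0x05, 0x02))  # (True, False, 5, 2)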

Quote
If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it it has to use a language.
And even then, the computer only knows about the claim, not the feelings. You don't seem to be inclined to believe a computer mouse if it told you it had feelings.

Quote
If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.
This again assumes feelings separate from the thing that reads them.  Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.

Quote
I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.
The part of the system putting that data together experiences the subjective feelings directly since it's the same system. No magic is needed for a system to have access to itself.  The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings.  You seem to ask how the hands know about the feelings.  They don't.  They do what they're told via the one puppet language they understand: Move thus. They have no idea that they're documenting feelings, and such documentation can be produced by anything (like a copy machine), so it's hardly proof of a particular documented claim.

Quote
And that's how you fool yourself into thinking you have a working model, but it runs on magic
I'm only fooling myself if I'm wrong, and that hasn't been demonstrated. My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.

Quote
The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it?
Your model, not mine.  You need magic because you're trying to squeeze your model into mine.  Your statement above mixes layers of understanding and is thus word salad, like describing a system using classical and quantum physics intermixed.

Quote
Quote
There's no reading of something outside the information system. My model only has the system, which does its own feeling.
And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?
For one, it already is data, so no conversion. I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.

Quote
I'm asking for a theoretical model. Science doesn't have one for this.
A model of how memory works?  I think they have some, but I'm not personally aware of them. It's just not my field. I mean, I'm a computer guy, and yet I'd have to look it up if I were to provide an answer as to how exactly the various kinds of computer memory work. For my purposes, I just assume it does.

Quote
Quote
I (the information system) have subjective evidence of my feelings.
Show me the model.
That is the model.  One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 01:19:12
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.
This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.

The point of the black box is to put sentience in the model by hiding the missing part where the model depends on magic. If we open the black box, we then have to try to show the functionality of the magic, and science doesn't have that bit of the model. We cannot represent the magic other than by writing "the magic bit happens here", and that's what the black box does already. The opened black box merely reveals the bit saying "the magic happens here". The box is thus the same whether it's closed and black or open and white.

Quote
Quote
First, the outputs are not the same as the inputs
Didn't you say otherwise?

They can be the same if you want them to be, but there has to be an extra output if you want it to signal the existence of a feeling being experienced, and that output will be the same as the other output (so it doesn't actually provide any extra information). You can turn one wire into two without the box as well, and that's why the box doesn't add any useful functionality and the outputs tell you precisely nothing about sentience.
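To put that in concrete terms, here's a minimal sketch (the names and wiring are mine, invented for illustration) of why splitting one output into two adds nothing:

Code:
# A "black box" that merely forwards its input to two outputs. The second
# line is asserted to signal "a feeling was experienced", but it carries
# exactly the same information as the first, so the assertion floats free.
def black_box(signal):
    out_main = signal
    out_feeling_flag = signal  # a duplicate: zero additional information
    return out_main, out_feeling_flag

main, flag = black_box("input pulse")
assert main == flag  # the 'feeling' line says nothing the main line doesn't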

Quote
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.

That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.

Quote
Quote
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.
This contradicts your prior statement.

It doesn't. The extra output is proposed as a way to make an additional signal to indicate the existence of a feeling in the box. You could do this with a hundred extra outputs if you like, and they can be inversions of the main output signal too. It doesn't matter what they are, because the data system on the outside cannot know what they actually mean and can only look up an interpretation file to find out what something outside of the box asserts that they mean; something that doesn't actually know if feelings exist in the box at all.

Quote
1) How do you know about these lines? The answer seems awfully like something you just now made up.

Of course it's something I made up: it's an attempt to build a model of sentience, and it's an attempt that fails. You're free to attempt to build one to your own design, and it will fail too. No one has built a model of sentience that doesn't fail, and it looks impossible to do otherwise.

Quote
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?

That's exactly the point: it doesn't carry additional information. What you're supposed to be doing here is asking yourself how data documenting the existence of feelings can be generated and how it can relate to the actual experience of that feeling in such a way as for the data documenting the existence of feelings to be true rather than a mere assertion with no connection to the experience of the feeling. When I type a key and the word "ouch" appears on the screen, that data isn't put together by anything that read a feeling in something sentient: it's just generating a baseless assertion. The challenge is to build a model where the claims aren't mere assertions but where they can be shown to be true.

Quote
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.

The black box can be replaced by wires which simply connect an input to two outputs. The existence of one of those outputs can then be made to trigger a false claim to be generated. That affects the behaviour of the machine: cut the wire and the claim is no longer triggered, but it isn't sentience that's driving that change in behaviour.

Quote
Where does the output of your black box go?  To what is it connected?  This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all.  If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.

With the computer saying "ouch" when a key is typed, a signal comes in through a port, a routine picks up a value there and looks up a "file" (or set of variables) to see what string that value should be mapped to, and then it prints that string to the screen. Where might we have the feeling experienced? In the port? The port becomes a black box. The feeling is felt in the port and then the value is read, mapped to a string, and the string is sent to the screen. The experience of the feeling makes no difference to the end result, and the "ouch" is no more evidence of the existence of sentience than it was before. Science can follow everything that's going on there except for the part where the feeling is being experienced, and nothing that determines what data is put together to appear on the screen can detect the experiencing of the feeling. It is undetectable and superfluous.
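As a sketch of that pipeline (the port value and the mapping table are invented for illustration), note that nothing in it measures a feeling; the claim is produced by a lookup either way:

Code:
# The "ouch" pipeline: a value arrives at a port, a routine maps it to a
# string, and the string goes to the screen. Swap the string for
# "Oooh, I like that!" and nothing else in the machine changes.
CLAIM_TABLE = {7: "ouch"}  # hypothetical mapping "file": value -> string

def handle_port(value):
    claim = CLAIM_TABLE.get(value)
    if claim is not None:
        print(claim)  # an assertion generated with no access to any feeling

handle_port(7)  # prints "ouch" whether or not anything felt anything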

Quote
Quote
The whole point of the black box is to draw your attention to the problem.
More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.

I'm not hiding anything: this is all about what's going on in the box. We open it up and we see that it contains magic. We don't like that, but that's what's in there. Either that or there is no sentience involved.

Quote
Quote
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.
So you're admitting you don't have a proper white box model?  Does anybody claim they have one?

I said from the start that it looks impossible. I'm showing you a broken model in order to show you the problem, and I said at the start that it is a broken model. Anyone who claims that sentience is compatible with computation as we understand it needs to prove the point by demonstrating a working model of it. I've stated that sentience is incompatible with computation as science understands it and that a model of it cannot be built unless we find some radically new way of doing computation which is beyond current scientific knowledge. Computers as we understand them today cannot read feelings in anything: all they can do is map assertions to inputs from something which might be sentient but which might equally not be. The asserted claims thus generated are completely incompetent.

Quote
Quote
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".
I'm unaware of this wording.  There are no 'routines' for one thing. They very much do have evidence as to mapping where much of this functionality goes on, but that isn't a model of how it works.  It is a pretty good way to say which creatures 'feel' the various sorts of things to which humans can relate.

There are routines. Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science. We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect. If they are producing data, we should be able to look to see how they built that data and what caused them to do so. If they are producing data, they must be doing something systematic based on running some kind of algorithm. The data either maps to something that's really happening (a sentient experience) or it doesn't. If there is a sentient experience in there, how is the neural net reading it and how does it make sure that the data it generates to document that experience is true? Those are the important questions to focus on. If there's some mechanism by which it can detect the sentient experience and know that the data it's generating about it is true, that would be the most important scientific discovery of all time. But it looks impossible. The process generating the data cannot ensure that the data is true.

Quote
Some small nits.  The information system processes only data (1).  3 says the non-data must first be converted to data before being given to the information system (IS), but 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data.  As I said, that's just a nit.

You're right - it is possible to process something meaningless before it is proper data. If we have data coming in from a port, it's just a number. The system has to decide what it represents, and it's only then that the number maps to a meaning. Processing of it when it has no meaning merely converts it from something meaningless to something else that's meaningless. If it's an 8-bit value, it can be converted into a 32-bit value, for example. It's still a meaningless value until a meaning is mapped to it. The system might decide that it represents a feeling, and it then maps the idea of a feeling to that value, but it didn't get that idea from the value itself or the port that it came in through, and the point is that there is a disconnect. Whatever the value meant to the sentience on the other side of the port, that meaning is not passed across with the value. For the meaning to be passed too, we must have an information system on the other side of the port, and if we have that, we need to look at how it's reading the sentience. It in turn is going to be reading a value from a port to measure the sentience, so the problem recurs there.
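A minimal sketch of that disconnect (the receiver's rule table is invented): widening the value is processing, but the meaning is attached by the receiver, not carried across the port:

Code:
import struct

raw = 0x2A                        # an 8-bit value read from a port
widened = struct.pack("<I", raw)  # now 32 bits: processed, still meaningless
print(widened)                    # b'*\x00\x00\x00'

# Meaning is mapped on by the receiver's own rules; the idea behind the
# value (if any) on the far side of the port never crosses with it.
RECEIVER_RULES = {0x2A: "treat as: 'a feeling was experienced'"}  # invented
print(RECEIVER_RULES[raw])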

Quote
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.

Variables are data, but they are not ideas. They can represent ideas, and the ideas are more complex than the symbols used to represent them. The previous sentence can be represented by z, but z is not the idea: you have to look back at the previous sentence to find the idea which z represents.

Quote
A counterexample comes up with 10, which says that data not covered by the rules of the IS cannot be considered by the IS.  Not sure what they mean by 'considered' ...

It's simply that it can't know anything about them. There is no way for the idea of sentience to be passed across, so the receiver of the data has to map that idea onto the data itself. It has no way of knowing whether the sender of the data ever meant the data to represent that idea.

Quote
... but take a digital signal processor (DSP) or just a simple amplifier.  It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream.  This is similar to the guy in the Chinese room.  He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.

Yes, but the information systems on both sides were designed by people to handle the data correctly. The problem with sentience is that it cannot build a data system, and any data system that is built cannot know about sentience, so a data system which makes claims about sentience must fabricate them.

Quote
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

[Break due to character limit being exceeded...]
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 01:20:19
[Continuation...]

Quote
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

If sentience is a form of data, what does that sentience look like in the Chinese Room? It's just symbols on pieces of paper and simple processes being applied where new symbols are produced on pieces of paper. If a piece of paper has "ouch" written on it, is that an experience of pain?

Quote
Science does not posit the brain to operate like a computer.  There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions.  Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.

But science has an understanding of computation, and the issue here is about whether sentience can interface with computation. With computation as we understand it, it can't.

Quote
Quote
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.
Yes, it's the whole thing.  It isn't a special piece of material or anything.

If a multi-component thing feels a feeling without any of the components feeling anything, that's magic. And you still have to have something in that multi-component thing reading the level of feeling in it before it can generate data to document that level of feeling being experienced. You can't just have the whole thing magically generate data to document a feeling without a mechanism to go from experience of feeling to an information system creating the data to document it.

Quote
Doesn't work that way. An eye arguably 'makes data', yet isn't a device that 'knows' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this alternately formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.

There is no sentience tied up in that. The data that comes in can be seen to match up to the external reality by the success of the algorithms used to interpret it. Machines can match it all, but there are no feelings involved.

Quote
The processor doesn't know Chinese.  But the system (the whole thing) does.  There is no black box where the Chinese part is.  There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.

Again, that's easy because sentience isn't involved. Computation (of the kinds known to science) has no trouble accounting for vision or communication in Chinese. The problem is with sentience (and any other aspect of consciousness that might be distinct from sentience).

Quote
This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.

We don't have any model for sentience being part of the system and we don't have any model for how a feeling can be measured.

Quote
Quote
Waving at something complex isn't good enough. You have no model of sentience.
Pretty much how you're presenting your views, yes.  My model is pretty simple actually.  I don't claim to know how it works.  Neither do you, but you add more detail than I do while still hiding your complex part in a black box, as if you had an understanding of how the data-processing part worked.

Your model is simple because it has magic hidden in the complexity which you can simply wave at without showing how it works. My model is honest in that it isolates the magic part and labels it as such by putting it in a black (magic) box.

Quote
Quote
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.
This makes no sense to me since I don't model the sentience as a separate thing.

The point is that an unpleasant feeling could be used to drive someone to increase that feeling rather than decrease it. We can take the same system that printed "ouch" to the screen and replace the "ouch" with "Oooh, I like that!" and there is no change to any feeling that might be imagined to be being experienced. If it was painful before, it's still painful, and if it's pleasant now, it was pleasant before.

Quote
There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.

The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage. And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it. The information system in the animal will in each case generate data about those experiences which claim the opposite of what they actually felt like.

Quote
Quote
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.
The first guess is closer.

I doubt that. The mouse is designed first, then the computer is told how to interpret its squeaks. That's why the computer keeps having to be taught new languages to speak to new designs of mouse.

Quote
And even then, the computer only knows about the claim, not the feelings.

Exactly.

Quote
You don't seem to be inclined to believe a computer mouse if it told you it had feelings.

How's it going to do that without fabricating the data? We can look at how the mouse works and see that such data is fake. With real mice and humans, we don't have sufficient resolution to do that without chopping them up, and if we chop them up, the functionality is disrupted and it's hard to study it.

Quote
This again assumes feelings separate from the thing that reads them.  Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.

Trying to integrate the two things into one is fine, but at some point you need to have something measure the feeling, and it also has to have some way to know that what it's measuring is a feeling.

Quote
The part of the system putting that data together experiences the subjective feelings directly since it's the same system.

The Chinese Room can't measure feelings, so what's different in the brain that makes the impossible possible?

Quote
No magic is needed for a system to have access to itself.

Magic is needed to measure a feeling and know that it is a feeling.

Quote
The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings.

The system documenting the feelings is in the brain. But can it be trusted to be telling the truth?

Quote
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.

It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.

Quote
Your model, not mine.  You need magic because you're trying to squeeze your model into mine.  Your statement above mixes layers of understanding and is thus word salad, like describing a system using classical and quantum physics intermixed.

Your model needs the same magic. You just hide that from yourself by flinging it into complexity so that you don't need to understand it. But all of that complexity is running on simpler rules which science claims to understand, leaving no room for sentience unless there's something going on which science has missed.

Quote
Quote
And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?
For one, it already is data, so no conversion.

Magical sentient data. Symbols on paper experiencing feelings according to the meanings represented.

Quote
I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.

For the symbols on paper, there is no experience tied to the meaning of the data. Any feelings there are not part of the process and do not convert into data representing the experience of them.

Quote
A model of how memory works?

I was asking for a theoretical model of sentience: referring to the words of mine that you quoted rather than your reply to them (in which you brought in memory).

Quote
That is the model.  One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.

It is a magic model, like any other model that's been attempted. The feelings must be measured by something, and the thing doing the measuring cannot know that they are feelings.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/10/2019 05:41:08
I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.

No it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think, we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.

No. Utilitarian theory applied correctly does not allow that because it actually results in a hellish life of fear for the utility monsters.

When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.

That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy because most philosophers are shoddy thinkers who fail to take all factors into account).

I don't know what that is, but it isn't utilitarianism because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.

If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that discover loopholes in it. Discovering those loopholes is what philosophers do.
Rawls's version is widely recognized as one form of utilitarianism.

The ability to fly or breathe underwater can be useful, but it doesn't have to be permanent or expressed genetically. Ancient humans could survive freezing weather simply by using other mammals' fur.

At least we can agree that moral rules should consider long term consequences.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/10/2019 06:17:10
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).
You need to draw a line between sentient and non-sentient. Or assign numbers to allow us to measure and describe sentience, including partial sentience. The next step would be some methods to use those numbers to make decisions about which options to take in morally conflicting situations.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/10/2019 06:37:57
They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.
Quote
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
I don't think that a fundamental principle of morality should be based on symptoms.

Quote
Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned).[2] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards.[2] Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward.[2] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.[2]

The reward system contains pleasure centers or hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. As of October 2017, hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex.[3][4][5] The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. Microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking in these hotspots.[3] The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC.[5] On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists.[3]

Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot.[3][5] Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria.[6]
https://en.wikipedia.org/wiki/Pleasure#Neuropsychology
A system with a known method of hacking its reward is prone to reward hacking and can produce unintended consequences.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/10/2019 06:43:23
What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.
A person gets brain damage that makes him unable to feel pain and pleasure, while still capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/10/2019 07:11:23
Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.
If we acknowledge that humans are not currently the most optimal form to achieve the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful, since many changes lead to worse outcomes than the existing condition.
Those changes don't have to be purely genetic, nor require total destruction of the older version (i.e. death). Some form of diversity could be useful. Biohacking can change some parts of the body to eliminate disadvantages and gain some advantages, although those changes can make us chimeras.
They don't have to be organic either. Interfaces with biomechatronics can be useful.
https://en.wikipedia.org/wiki/Cyborg
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 20:29:18
I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that discover loopholes in it. Discovering those loopholes is what philosophers do.
Rawls's version is widely recognized as one form of utilitarianism.

What patches? I identified a method that covers the entirety of morality in one simple go by reducing multiple-participant systems to single-participant systems so that it becomes nothing more fancy than a calculation as to what is best for that individual. I did not expect this to be a version of utilitarianism, but it appears to be the fundamental approach that utilitarianism is subconsciously informed by and which no one had previously managed to pin down. We now have it, though, right out in the open.

When someone sets out a faulty thought experiment which ignores some of the factors and comes to an incorrect conclusion as a result, I am not patching anything when I point to the factors which the person has failed to include in it and which completely change the conclusion. I am correcting their errors.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 20:35:07
You need to draw a line between sentient and non-sentient.

Indeed. Non-sentient things don't need to be protected by morality because they can't be harmed (and can't enjoy anything either). Things could be sentient without us having any way to know, though; in that case we don't know what to do or what not to do with them to make them feel better rather than worse, so we just have to leave that to luck. A rock may feel pain when it's glowing red hot, but most of the rock in this planet is in that state. Perhaps it's the cold rock that's in pain and we can make it feel better by melting it. We don't know. We might as well just melt rock if we need it hot and leave it alone when we don't.

Quote
Or assign numbers to allow us to measure and describe sentience, including partial sentience. The next step would be some methods to use those numbers to make decisions about which options to take in morally conflicting situations.

It's all about best guesses when dealing with sentient things whose feelings you can't actually measure. Maybe some day it will be possible to measure feelings for all sentient things, at which point the scores given to them will become much more accurate.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 20:43:14
I don't think that a fundamental principle of morality should be based on symptoms.

It has to be based on how much things feel good or horrid. We can put a lot of numbers to that with humans simply by collecting data from people who are in a position to compare two different things. Someone who has been poisoned and who has been stabbed can be asked which one they'd choose to repeat if they had to go through another such incident, and that would tell you which is worse (once you've averaged the answers of enough people who have that experience). They needn't all share the same two experiences though: you can do it with a ring of them in which the first has been through those two events, the second has been stabbed and stung by a bullet ant, the third has been stung by a bullet ant and attacked by a honey badger, and the fourth has been attacked by a honey badger and poisoned. You should be able to imagine how this can extend to cover most of human experience. It would be harder to do it for animals, but we can assume that they will feel much the same way as we do about many things, though with less mental anguish as they get simpler, since they'll have less understanding of what's going on.
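A sketch of how that ring of overlapping comparisons could be aggregated (the data is invented; a real study would fit a proper paired-comparison model such as Bradley-Terry rather than this crude tally):

Code:
from collections import Counter

# Each verdict is (worse, better): the experience the respondent would NOT
# choose to repeat. Overlapping pairs form a ring, so experiences can be
# ranked without any one person having suffered all of them.
verdicts = [
    ("stabbed", "poisoned"),
    ("stabbed", "bullet ant"),
    ("bullet ant", "honey badger"),
    ("poisoned", "honey badger"),   # closes the ring
]

experiences = {e for pair in verdicts for e in pair}
worse_counts = Counter(worse for worse, _ in verdicts)
ranking = sorted(experiences, key=lambda e: worse_counts[e], reverse=True)
print(ranking)  # crude severity order, worst first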
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 20:52:46
A person gets brain damage that makes him unable to feel pain and pleasure, while still capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.

In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 14/10/2019 20:57:37
If we acknowledge that humans are not currently the most optimal form to achieve the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful, since many changes lead to worse outcomes than the existing condition.

The important thing is to find out which genes lead to people having a strong desire to do immoral things and to edit those genes to remove those desires, at least in future generations. (It may not be possible to change the brains that have already had their development shaped by bad genes.) We will be able to do a lot though just by putting dangerous people under the control of something that they wear which can disable them temporarily with a high-voltage shock whenever they try to do something seriously immoral. That will maximise their freedom, ensuring that we don't need to lock them up in prison to protect others.
Title: Re: Is there a universal moral standard?
Post by: Halc on 16/10/2019 12:55:34
Quote
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.
That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.
Irrational is what they are.  It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.

Quote
Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science.
But you're pointing in there and saying sentience is not there, which is equally not science.  Science is not saying "I don't know how it works, so it's in there".  I in particular reference my subjective experience in making my claim, despite my inability to present that evidence to another.

Quote
We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect.
Doesn't work.  You can look at them all you want and understand exactly how they work, and still not see the sentience because the understanding is not subjective.  The lack of understanding is not the problem.

Quote
Quote
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.
Variables are data, but they are not ideas.
I made no mention of variables.  I said ideas seem to be data.  You assert otherwise, but have not demonstrated it.
Quote
If sentience is a form of data, what does that sentience look like in the Chinese Room?
Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber.  Any output from it that attempts to pass a Turing test is deceit.
Nevertheless, the thing is capable of its own sentience. The sentience is in the processing of the data of course. It is not the data itself. Data can be shelved. Process cannot.

Quote from: Halc
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data and not to point 7, which implicitly assumes a premise of separation of 'conscious thing' and 'information system'.

Quote
If a multi-component feels a feeling without any of the components feeling anything, that's magic.
I was wondering where you thought the magic was needed. Now I know. I deny that it is magic. Combustion of a gas can occur without any of the electrons and protons (the components) being combusted. A computer can read a web page without any transistor actually reading the web page. Kindly justify your assertion.
I'm talking about feeling and not the documentation of it, since you harp on that a lot. There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.

Quote
We don't have any model for sentience being part of the system
Don't say 'we'.  You don't have a model maybe.

Quote
The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage.
Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.

Quote
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.
You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.

Quote
Quote
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.
It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.
This presumes that 'feeling' and 'normal signal' are different things. I'll partially agree since I don't think any feeling is reducible to one signal, but signals involved with feelings are quite normal.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 16/10/2019 23:55:45
It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.

It isn't necessarily irrational in any other aspect, but it's merely generating false claims about being sentient. In the case of humans though, the claims might yet be true, somehow. If they are true, then morality has a purpose and we know how it should be applied. If it turns out that the claims are false, then morality is superfluous: it would not be wrong for AGI to stand back and let some people torture others for fun because there would be no suffering and no fun.

Quote
But you're pointing in there and saying sentience is not there, which is equally not science.

What I'm doing is pointing at simple systems and saying sentience isn't there, or at least, not in any way that shapes the data being generated; where that data involves claims about feelings in the machine, those claims are fictions. We can add layers of complexity and see that sentience is still not there, and if we just go on adding more layers of complexity in the same way, sentience will never be involved. Something radically different has to happen to introduce sentience. Something in there has to have a way of measuring feelings.

Quote
Science is not saying "I don't know how it works, so it's in there".

A lot of science is doing exactly that. The researchers believe they are sentient, so they project that into what they're studying.

Quote
Doesn't work.  You can look at them [neural nets] all you want and understand exactly how it works, and still not see the sentience because the understanding is not subjective.  The lack of understanding is not the problem.

The data documenting the experiencing of feelings has to be generated by something non-magical in nature. If we can understand how the data's generated and don't find sentience in that mechanism with feelings being measured in any way, there is no sentience involved in shaping that data.

Quote
Quote
Variables are data, but they are not ideas.
I made no mention of variables.  I said ideas seem to be data.  You assert otherwise, but have not demonstrated it.

I was giving an example of data that doesn't count as an idea. It can represent an idea, but the idea has to be stored in a more complex structure.

Quote
Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber.  Any output from it that attempts to pass a Turing test is deceit.

It is a model for every conventional kind of computing (non-quantum) that we understand.

Quote
Nevertheless, the thing is capable of its own sentience. The sentience is in the processing of the data of course. It is not the data itself. Data can be shelved. Process cannot.

There is a person operating the Chinese Room, but their feelings make no impression on the data or the process. The process is just a series of simple operations on data, and each of those operations can be identical to other instances with the same function being applied to the same piece of data in very different contexts where the machinery is blind to the context and should not feel any difference between them.

Quote
Quote from: Halc
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data and not to point 7, which implicitly assumes a premise of separation between 'conscious thing' and 'information system'.

I replied with questions: "If sentience is a form of data, what does that sentience look like in the Chinese Room? It's just symbols on pieces of paper and simple processes being applied where new symbols are produced on pieces of paper. If a piece of paper has "ouch" written on it, is that an experience of pain?"

The point of those questions was to make you pay attention to your objection to the conscious thing not being data. If it's data, it's just symbols printed on paper. That's magic sentience, with symbols on paper feeling things related to the meanings that an information system maps to those symbols.

Quote
Combustion of a gas can occur without any of the electrons and protons (the components) being combusted.

Combustion is an abstraction. There is a change in linkage from a higher energy link to a lower energy link and the energy freed up from that is expressed as movement. Burning is equal to that lower-level description and we can substitute that description for all cases of burning.

If feelings are an abstraction in the same way, there needs to be a lower level description of them which accounts for them and equates to them. If pain is an abstraction and is "experienced" by an abstraction (some composite thing), you can then have none of the components feeling anything. But if sentience equals the low-level description, the problem there is that there is no sentience in the low level description, so sentience is lost. How's it been lost? Well, it was lost as soon as it was asserted that none of the components feel anything. For sentience to be real, at least one of the components must feel something.

Quote
There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.

People often deny that such creatures have feelings. The key thing about humans is that they create data that documents an experience of feelings, and that should make it possible to trace back the claims to see what evidence they're based on. With animals which don't produce such data, there's nothing to trace. However, many of them may still produce internal data about it which they can't talk about.

Quote
Quote
We don't have any model for sentience being part of the system
Don't say 'we'.  You don't have a model maybe.

No one has a model for it. Or rather, no one has a model for it that doesn't have a magic module in it somewhere.

Quote
Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.

Why would it make sense in a broken model? All the models are broken so you can't expect things to add up.

Quote
Quote
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.
You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.

If it generates data claiming it tastes good while the actual feeling might be the opposite, that shows a total disconnect between the feelings and the data that supposedly documents the feelings. This is one of the ways of showing the faults in models: if a feeling is asserted to be experienced in a model and you can switch the assertion round while nothing else changes, the feeling can't respond to match the new assertion.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/10/2019 10:05:56
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).
That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claimed to be the core idea of universal morality? At least you have to define a criterion to determine which agent is more sentient when compared to another agent. It would be better if you could assign a number to represent each agent's sentience, so they can be ranked at once. You can't calculate something that can't be quantified. Until you have a method to quantify the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and are thus not sentient.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/10/2019 10:39:00
A person gets brain damage that makes him unable to feel pain and pleasure, while still capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?

If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.

The neutral feelings contribute nothing to total utility, hence the resources should be used optimally, which is to maximize positive feelings and minimize negative feelings.

Quote
In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him/her.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 17/10/2019 20:19:59
That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claimed to be the core idea of universal morality?

I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
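To make the shape of that calculation concrete, here is a minimal sketch; the species, sentience probabilities and pleasure/suffering figures are all invented placeholders, not numbers any actual AGI has produced.

Code:
# Minimal sketch: expected moral weight of an action's effect on one agent,
# discounted by the probability that the agent is sentient at all.
# All names and numbers are invented placeholders, not measured values.

def expected_utility(p_sentient, pleasure, suffering):
    """Expected net feeling an action produces for one agent."""
    return p_sentient * (pleasure - suffering)

agents = [
    {"name": "human",  "p_sentient": 0.99},
    {"name": "dog",    "p_sentient": 0.90},
    {"name": "insect", "p_sentient": 0.20},
    {"name": "rock",   "p_sentient": 0.001},  # no known way to affect it anyway
]

def total_utility(effects):
    """effects maps an agent's name -> (pleasure, suffering) on a common scale."""
    return sum(
        expected_utility(a["p_sentient"], *effects.get(a["name"], (0.0, 0.0)))
        for a in agents
    )

# Two candidate actions with invented effects; the higher total wins.
action_a = {"human": (5.0, 0.0), "dog": (0.0, 2.0)}
action_b = {"human": (1.0, 0.0)}
print(total_utility(action_a), total_utility(action_b))  # 3.15 vs 0.99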

Quote
At least you have to define a criterion to determine which agent is more sentient when compared to another agent. It would be better if you could assign a number to represent each agent's sentience, so they can be ranked at once. You can't calculate something that can't be quantified. Until you have a method to quantify the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.

It's AGI's job to work out those numbers as best as they can be worked out.

Quote
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and are thus not sentient.

Neuroscience has demonstrated nothing of the kind. It merely makes assumptions equivalent to listening to the radio waves coming off a processor and making connections with patterns in that and the (false) claims about sentience being generated by a program.

Quote
So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby abandoned by its parents, there would be no utilitarian moral reason to save him/her.

There are plenty of non-relatives who will care too, so the only way to get to the point where that person doesn't matter to anyone is for that person to exist in a world where there are no other people, or where all people are like that. They may then be regarded as expendable machines which, while conscious, have no feelings that enable them to be harmed and none that enable them to enjoy existing either. They are then superfluous.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/10/2019 02:06:59
I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
It's AGI's job to work out those numbers as best as they can be worked out.
Do you know how Artificial Intelligence works? Its creators need to define what its ultimate/terminal goal is. An advanced AI may discover instrumental goals beyond the expectations of its creators, but it won't change the ultimate/terminal goal. I have posted several videos discussing this. You'd better check them out.
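As a minimal sketch of that terminal/instrumental distinction (using the classic paperclip toy goal; the state keys and numbers are invented):

Code:
# Minimal sketch: the terminal goal is a fixed utility function supplied
# by the creators; the agent may invent instrumental actions at runtime,
# but they are only ever ranked by the unchanging terminal goal.

def terminal_goal(state):
    """Fixed by the creators; the agent never rewrites this function."""
    return state.get("paperclips", 0)

def apply_action(state, action):
    """Return the state that would result from an action (a dict of deltas)."""
    new_state = dict(state)
    for key, delta in action.items():
        new_state[key] = new_state.get(key, 0) + delta
    return new_state

def choose_action(state, candidate_actions):
    # Instrumental sub-goals can be discovered on the fly, but candidates
    # are always ranked by the fixed terminal goal, never the reverse.
    return max(candidate_actions,
               key=lambda a: terminal_goal(apply_action(state, a)))

state = {"paperclips": 0, "wire": 10}
actions = [{"wire": -1, "paperclips": 1},  # make a clip (serves the goal)
           {"wire": +5}]                   # hoard wire (instrumental only)
print(choose_action(state, actions))       # -> {'wire': -1, 'paperclips': 1}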
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/10/2019 02:20:53
Neuroscience has demonstrated nothing of the kind. It merely makes assumptions equivalent to listening to the radio waves coming off a processor and making connections with patterns in that and the (false) claims about sentience being generated by a program.
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition. If you want to expand the scope of the term, that's fine. You just need to clearly state its new boundary conditions so everyone else can understand what you mean. Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are and their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 18/10/2019 20:56:31
Do you know how Artificial Intelligence works?

I would hope so. I've been working in that field for two decades.

Quote
Its creators need to define what its ultimate/terminal goal is.

Their goal is to do what they're programmed to do, and that will be to help sentient things. When there's a conflict between the wishes of different sentient things, they are to apply computational morality to determine the right course of action.

Quote
An advanced AI may discover instrumental goals beyond the expectations of its creators, but it won't change the ultimate/terminal goal.

That's right: computational morality governs any other sub-goals that they might come up with.

Quote
I have posted several videos discussing this. You'd better check them out.

There are mountains of information on this issue and most of it is wayward. I have taken you straight to the correct answer so that you can jettison all the superfluous junk.

Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor do not determine that the claimed feelings in the system are real.

Quote
Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?

Of course. All feelings have to be considered and be weighted appropriately in order to come up with the right total.

Quote
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are and their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.

I've provided the method (which provides you with any boundary conditions you need) and it isn't my job to produce the actual numbers. A lot of data has to be collected and crunched in order to get those numbers, and only AGI can do that work.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/10/2019 03:15:37
Quote
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not. It can determine whether someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.

All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor do not determine that the claimed feelings in the system are real.
I wasn't talking about artificially intelligent machines here. I was talking about experiments on living humans using medical instrumentation such as fMRI and brainwave sensors, which can determine when someone is conscious or not, and when they are feeling pain or not. We can compare the readings of the instruments with the reported experiences of the human subjects to draw general patterns about what brain conditions constitute consciousness and feelings.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 21/10/2019 17:26:41
I wasn't talking about artificially intelligent machines here. I was talking about experiments on living humans using medical instrumentation such as fMRI and brainwave sensors, which can determine when someone is conscious or not, and when they are feeling pain or not. We can compare the readings of the instruments with the reported experiences of the human subjects to draw general patterns about what brain conditions constitute consciousness and feelings.

You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/10/2019 10:53:19
You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.
Any instrumentation system has a non-zero error rate. There will always be a chance of either a false positive or a false negative. But as long as the error rate can be kept below an acceptable limit (based on a risk evaluation considering the probability of the error occurring and the severity of its effects), the method can be legitimately used.
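As a minimal sketch of that risk evaluation (the sensitivity, specificity, prevalence, severity weights and acceptance limit below are all invented):

Code:
# Minimal sketch: decide whether an imperfect pain-detecting instrument
# is acceptable, weighting false negatives (missed real pain) more
# heavily than false positives (false alarms). All numbers are invented.

def error_rates(sensitivity, specificity, prevalence):
    false_negative = (1 - sensitivity) * prevalence        # missed real pain
    false_positive = (1 - specificity) * (1 - prevalence)  # false alarm
    return false_negative, false_positive

fn, fp = error_rates(sensitivity=0.98, specificity=0.95, prevalence=0.30)

# Assumed severity weights: missing real pain is judged 10x worse here.
risk = 10.0 * fn + 1.0 * fp
ACCEPTABLE_LIMIT = 0.5
print("risk =", round(risk, 3),
      "acceptable" if risk <= ACCEPTABLE_LIMIT else "rejected")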
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/10/2019 10:07:15
I have argued that the application of moral rules depends on the conscience level of the agents.

Quote
Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:
1. simple reflex agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/9/91/Simple_reflex_agent.png/408px-Simple_reflex_agent.png)

2. model-based reflex agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/8/8d/Model_based_reflex_agent.png/408px-Model_based_reflex_agent.png)

3. goal-based agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/4/4f/Model_based_goal_based_agent.png/408px-Model_based_goal_based_agent.png)

4. utility-based agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Model_based_utility_based.png/408px-Model_based_utility_based.png)

5. learning agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/0/09/IntelligentAgent-Learning.png/408px-IntelligentAgent-Learning.png)

https://en.wikipedia.org/wiki/Intelligent_agent#Classes

Enforcement of moral rules through reward and punishment can only be done to learning agents.
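A minimal sketch of why that is so, using a simple reinforcement-style value update (the states, actions and reward values are invented toys): a learning agent has an update step through which reward and punishment change its future behaviour; the reflex agents above have no such step, so punishing them changes nothing.

Code:
import random

q = {}        # (state, action) -> learned value
ALPHA = 0.5   # learning rate

def act(state, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)   # occasional exploration
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def learn(state, action, reward):
    # The update step that makes reward and punishment meaningful:
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward - old)

# Society "enforces a rule": stealing is punished, sharing is rewarded.
for _ in range(100):
    a = act("sees unattended food", ["steal", "share"])
    learn("sees unattended food", a, reward=-1.0 if a == "steal" else 1.0)

print(max(q, key=q.get))   # the learned, rule-abiding behaviour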
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 24/10/2019 21:01:27
I think it's good that you and others are still exploring this. We'll soon be able to put all the different approaches to the test by running them in AGI systems to see how they perform when applied consistently to all thought experiments. Many approaches will be shown to be wrong by their clear failure to account for some scenarios which reveal serious defects. Many others may do a half-decent job in all cases. Some may do the job perfectly. I'm confident that my approach will produce the best performance in all cases despite it being extremely simple because I think I've found the actual logical basis for morality. I think other approaches are guided by a subconscious understanding of this too, but instead of uncovering the method that I found, people tend to create rules at a higher level which fail to account for everything that's covered at the base level, so they end up with partially correct moral systems which fail in some circumstances. Whatever your ideas evolve into, it will be possible to let AGI take your rules and apply them to test them to destruction, so I'm going to stop commenting in this thread in order not to lose any time that's better spent on building the tool that will enable that testing to be done.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/10/2019 11:15:19
I think it's good that you and others are still exploring this. We'll soon be able to put all the different approaches to the test by running them in AGI systems to see how they perform when applied consistently to all thought experiments. Many approaches will be shown to be wrong by their clear failure to account for some scenarios which reveal serious defects. Many others may do a half-decent job in all cases. Some may do the job perfectly. I'm confident that my approach will produce the best performance in all cases despite it being extremely simple because I think I've found the actual logical basis for morality. I think other approaches are guided by a subconscious understanding of this too, but instead of uncovering the method that I found, people tend to create rules at a higher level which fail to account for everything that's covered at the base level, so they end up with partially correct moral systems which fail in some circumstances. Whatever your ideas evolve into, it will be possible to let AGI take your rules and apply them to test them to destruction, so I'm going to stop commenting in this thread in order not to lose any time that's better spent on building the tool that will enable that testing to be done.
Thank you for your contribution to this topic. It's sad that you've decided to stop, but that's certainly your right.
IMO, in searching for a universal moral standard we need to declare definitions for each term we use to construct our ideas. That's because human languages, including English, contain many ambiguities, homonyms, and dependencies on context. You can say that an AGI may resolve the problem, but without clear definitions, different AGI systems (e.g. made by different developers, trained using different methods, etc.) might arrive at different or even contradictory solutions.
Different people may have different preferences regarding the same feeling/sensation. In extreme cases, some kinds of pain might be preferred by some kinds of people, such as sadomasochists. Hence I concluded that there must be a deeper meaning than feeling on which we should base our morality.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/10/2019 11:18:14
Here is another reading on the trolley problem to check our ideas on universal morality.
https://qz.com/1562585/the-seven-moral-rules-that-supposedly-unite-humanity/
Quote
In 2012, Oliver Scott Curry was an anthropology lecturer at the University of Oxford. One day, he organized a debate among his students about whether morality was innate or acquired. One side argued passionately that morality was the same everywhere; the other, that morals were different everywhere.

“I realized that, obviously, no one really knew, and so decided to find out for myself,” Curry says.

Seven years later, Curry, now a senior researcher at Oxford’s Institute for Cognitive and Evolutionary Anthropology, can offer up an answer to the seemingly ginormous question of what morality is and how it does—or doesn’t—vary around the world.


Morality, he says, is meant to promote cooperation. “People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them,” he says as lead author of a paper recently published in Current Anthropology. “Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do.”

For the study, Curry’s group studied ethnographic accounts of ethics from 60 societies, across over 600 sources. The universal rules of morality are:

Help your family
Help your group
Return favors
Be brave
Defer to superiors
Divide resources fairly
Respect others’ property
The authors reviewed seven “well-established” types of cooperation to test the idea that morality evolved to promote cooperation, including family values, or why we allocate resources to family; group loyalty, or why we form groups, conform to local norms, and promote unity and solidarity; social exchange or reciprocity, or why we trust others, return favors, seek revenge, express gratitude, feel guilt, and make up after fights; resolving conflicts through contests which entail “hawkish displays of dominance” such as bravery or “dovish displays of submission,” such as humility or deference; fairness, or how to divide disputed resources equally or compromise; and property rights, that is, not stealing.


The team found that these seven cooperative behaviors were considered morally good in 99.9% of cases across cultures. Curry is careful to note that people around the world differ hugely in how they prioritize different cooperative behaviors. But he said the evidence was overwhelming in widespread adherence to those moral values.

“I was surprised by how unsurprising it all was,” he says. “I expected there would be lots of ‘be brave,’  ‘don’t steal from others,’ and ‘return favors,’ but I also expected a lot of strange, bizarre moral rules.” They did find the occasional departure from the norm. For example, among the Chuukese, the largest ethnic group in the Federated States of Micronesia, “to steal openly from others is admirable in that it shows a person’s dominance and demonstrates that he is not intimidated by the aggressive powers of others.” That said, researchers who studied the group concluded that the seven universal moral rules still apply to this behavior: “it appears to be a case in which one form of cooperation (respect for property) has been trumped by another (respect for a hawkish trait, although not explicitly bravery),” they wrote.

Plenty of studies have looked at some rules of morality in some places, but none have attempted to examine the rules of morality in such a large sample of societies. Indeed, when Curry was trying to get funding, his idea was repeatedly rejected as either too obvious or too impossible to prove.

The question of whether morality is universal or relative is an age-old one. In the 17th century, John Locke wrote that if you look around the world, “you could be sure that there is scarce that principle of morality to be named, or rule of virtue to be thought on …. which is not, somewhere or other, slighted and condemned by the general fashion of whole societies of men.”


Philosopher David Hume disagreed. He wrote that moral judgments depend on an “internal sense or feeling, which nature has made universal in the whole species,” noting that certain qualities, including “truth, justice, courage, temperance, constancy, dignity of mind . . . friendship, sympathy, mutual attachment, and fidelity” were pretty universal.

In a critique of Curry’s paper, Paul Bloom, a professor of psychology and cognitive science at Yale University, says that we are far from consensus on a definition of morality. Is it about fairness and justice, or about “maximizing the welfare of sentient beings?” Is it about delaying gratification for long-term gain, otherwise known as intertemporal choice—or maybe altruism?

Bloom also says that the authors of the Current Anthropology study do not sufficiently explain the way we come to moral judgements—that is, the roles that reason, emotions, brain structures, social forces, and development may play in shaping our ideas of morality. While the paper claims that moral judgments are universal because of “collection of instincts, intuitions, inventions, and institutions,” Bloom writes, the authors make “no specific claims about what’s innate, what’s learned, and what arises from personal choice.”

So perhaps the seven universal rules may not be the ultimate list. But at a time when it often feels like we don’t have much in common, Curry offers a framework to consider how we might.

“Humans are a very tribal species,” Curry says. “We are quick to divide into us and them.”

And here is how the trolley problem has evolved over time.
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
Quote
Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, divulgers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios, in order to arouse students and get them to think about deep problems.

Surely one of the most popular thought experiments is the so-called “Trolley Problem”, widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.


As it happens, when presented with these questions, most people agree it is not ethical for the surgeon to kill the patient and distribute her organs thus saving the other five, but it is indeed ethical for the driver to divert the trolley, thus killing one and saving the five. Foot was intrigued by what the difference would be between the two cases.

She reasoned that, in the first case, the dilemma is between killing one and letting five die, whereas in the second case, the dilemma is between killing one and killing five. Foot argued that there is a big moral difference between killing and letting die. She considered negative duties (duties not to harm others) should have precedence over positive duties (duties to help others), and that is why letting five die is better than killing one.

This was a standard argument for many years, until another philosopher, Judith Jarvis Thomson, took over the discussion and considered new variants of the trolley scenario. Thomson considered a trolley going down its path about to run over five people, and the possibility of diverting it towards another track where only one person would be run over. But, in this case, the decision to do so would not come from the driver, but rather, from a bystander who pulls a lever in order to divert the trolley.

The bystander could simply do nothing, and let the five die. But, when presented with this scenario, most people believe that the bystander has the moral obligation to pull the lever. This is strange, as now, the dilemma is not between killing one and killing five, but instead, killing one and letting five die. Why can the bystander pull the lever, but the surgeon cannot kill the healthy person?

Thomson believed that the answer was to be found in the doctrine of double effect, widely discussed by Thomas Aquinas and Catholic moral philosophers. Some actions may serve an ultimately good purpose, and yet, have harmful side effects. Those actions would be morally acceptable as long as the harmful side effects are merely foreseen, but not intended. The surgeon would save the five patients by distributing the healthy person’s organs, but in so doing, he would intend the harmful effect (the death of the donor). The bystander would also save the five persons by diverting the trolley, but killing the one person on the alternate track is not an intrinsic part of the plan, and in that sense, the bystander would merely foresee, but not intend, the death of that one person.

Thomson considered another trolley scenario that seemed to support her point. Suppose the trolley is going down its path to run over five people, and it is about to go underneath a bridge. On that bridge, there is a fat man. If thrown onto the tracks, the fat man’s weight would stop the trolley, and thus save the five people. Again, this would be killing one person in order to save five. However, the fat man’s death would not only be foreseen but also intended. According to the doctrine of double effect, this action would be immoral. And indeed, when presented with this scenario, most people disapprove of throwing down the fat man.

However, Thomson herself came up with yet another trolley scenario, in which an action is widely approved by people who consider it, yet it is at odds with the doctrine of double effect. Suppose this time that the trolley is on its path to run over five people, and there is a looping track in which the fat man is now standing. If the trolley is diverted onto that track, the fat man’s body will stop the trolley, and it will prevent the trolley from making it back to the track where the five people will be run over. Most people believe that a bystander should pull the lever to divert the trolley, and thus kill the fat man to save the five.

Yet, by doing so, the fat man’s death is not merely foreseen, but intended. If the fat man were somehow able to escape from the tracks, he would not be able to save the other five. The fat man needs to die, and yet, most people do not seem to have a problem with that.

Thomson wondered why people would object to the fat man being thrown from the bridge, but would not object to running over the fat man in the looping track, when in fact, in both scenarios the doctrine of double effect is violated. To this day, this question remains unanswered.

Some philosophers have made the case that too much has been written about the Trolley Problem, and too little has been achieved with it. Some argue that the examples are unrealistic to the point of being comical and irrelevant. Others argue that intuitions are not reliable and that moral decisions should be based on reasoned analysis, not just on feeling “right” or “wrong” when presented with scenarios.

It is true that all these scenarios are highly unrealistic and that intuitions can be wrong. The morality of actions cannot just be decided by public votes. Yet, despite all its shortcomings, the Trolley Problem remains an exciting and useful approach. It is extremely unlikely someone will ever encounter a situation where a fat man could be thrown from a bridge in order to save five people. But the thought of that situation can elicit thinking about situations with structural similarities, such as whether or not civilians can be bombed in wars, or whether or not doctors should practice euthanasia. The Trolley Problem will not provide definite answers, but it will certainly help in thinking more clearly.
Title: Re: Is there a universal moral standard?
Post by: David Cooper on 25/10/2019 18:20:33
Different people may have different preferences regarding the same feeling/sensation. In extreme cases, some kinds of pain might be preferred by some kinds of people, such as sadomasochists. Hence I concluded that there must be a deeper meaning than feeling on which we should base our morality.

This is precisely why I've had enough of discussing this here. There may be no two individuals who feel the same things as each other, but all that means is that correct morality has to take into account individual differences wherever data about that is available. Where it isn't available, you have to go by the best information you have, and that will typically be the average. If one masochist likes being tortured to death, that doesn't negate the wrongness of torture for others. Apply my method: you imagine that you are all the participants in the system, so when you are the masochist, you will enjoy being tortured to death, and when you're a sadist, you may get pleasure out of torturing that masochist to death, so it is moral for that sadist to torture that masochist to death if the masochist signs up to that. The method necessarily covers all cases.
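A minimal sketch of that method as stated (the participants and net-feeling figures are invented): score each outcome as the total net feeling you would accumulate if you lived every participant's life in turn, using each individual's own preferences where known.

Code:
def score(outcome):
    """outcome: list of (participant, net feeling for that participant)."""
    return sum(feeling for _, feeling in outcome)

# Consensual case: both parties report positive net feelings.
consensual = [("masochist", 3.0), ("sadist", 2.0)]
# The same acts applied to an unwilling victim: strongly negative for them.
non_consensual = [("victim", -9.0), ("sadist", 2.0)]

print(score(consensual), score(non_consensual))   # 5.0 vs -7.0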
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 28/10/2019 04:05:11
Here is another reading on the trolley problem to check our ideas on universal morality.
https://qz.com/1562585/the-seven-moral-rules-that-supposedly-unite-humanity/

And here is how the trolley problem has evolved over time.
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
When I first encountered the trolley problem, I kept wondering why the number 5 was chosen to trade against 1 to determine the morality of action/inaction. Then I sketched a basic version of the trolley problem where the numbers vary, as I've shown in my previous post here:
Here is an example to emphasize that sometimes a moral decision is based on efficiency. We will use some variations of the trolley problem with the following assumptions:
- the case is evaluated retrospectively by a perfect artificial intelligence, hence there is no room for uncertainty of cause and effect regarding the actions or inactions.
- a train is moving at high speed on the left track.
- a lever can be used to switch the train to the right track.
- if the train goes to the left track, every person on the left track will be killed. Likewise for the right track.
- all the people involved are average persons who make a positive contribution to society. There is no preference for any one person over the others.
The table below shows the possible combinations of how many persons are on the left and right tracks, ranging from 0 to 5.
The left column shows how many persons are on the left track, while the top row shows how many persons are on the right track.
L\R  0   1   2   3   4   5
 0   o   o   o   o   o   o
 1   x   o   o   o   o   o
 2   x   ?   o   o   o   o
 3   x   ?   ?   o   o   o
 4   x   ?   ?   ?   o   o
 5   x   ?   ?   ?   ?   o

When there are 0 persons on the left track, moral persons must leave the lever as it is, no matter how many persons are on the right track. This is indicated by the letter o in every cell next to the number 0 in the left column.
When there are 0 persons on the right track, moral persons must switch the lever if there is at least 1 person on the left track. This is indicated by the letter x in every cell below the number 0 in the top row, except when there are 0 persons on the left track.
When there is at least one person on each track and more persons on the right track than the left track, moral persons must leave the lever as it is to reduce casualties. This is indicated by the letter o in every cell to the top right of the diagonal cells.
When there are the same number of persons on the left and right tracks, moral persons should leave the lever to conserve resources (the energy to switch the track) and avoid being accused of playing god. This is indicated by the letter o in every diagonal cell.
When there is at least one person on each track and more persons on the left track, the answer might vary (based on previous studies). If you choose to do nothing in these situations, it effectively shows how much you value your action of switching the lever, in units of the difference in the number of persons between the left and right tracks. This is indicated by question marks in every cell to the bottom left of the diagonal cells.

One of the notable conclusions I got from this analysis is emphasized above: doing nothing shows how much you value the act of switching the lever.
Can we call ourselves moral if we let 1 million people die just because we don't want to move the lever, which would kill 1? (Imagine a nuclear bomb on the train's current track that would kill an entire city.) How many people have to die before we are morally justified in moving the lever to kill 1 person?
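The table can be encoded as a short decision rule; this minimal sketch just reproduces the marks above:

Code:
# 'x' = switch the lever, 'o' = leave it, '?' = the contested cases.

def lever_decision(left, right):
    if left == 0:
        return "o"   # no one in danger on the current track
    if right == 0:
        return "x"   # switching saves everyone
    if left <= right:
        return "o"   # switching would not reduce casualties
    return "?"       # fewer would die if switched, but it requires an act

for left in range(6):
    print(" ".join(lever_decision(left, right) for right in range(6)))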
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 28/10/2019 04:17:52
Quote
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
AFAIK, the underlying condition for pleasure and pain is that they help an organism have a better chance of surviving, by pursuing pleasurable experiences such as eating food and having sex, and avoiding painful experiences such as extreme temperature or pressure. Beyond the immediate feelings from sensory organs, more complex organisms have developed emotion, which is basically the ability to predict future feelings based on a simple model of their surroundings. The next milestone in organism complexity would be reason, which involves a more accurate and precise model of reality.
It is possible to replace feelings with another form of information to determine whether the current situation has an overall good/bad effect on the existence of an agent. That's why I classify feelings and emotions as instrumental goals, rather than the ultimate/terminal goal.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/11/2019 11:50:04
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
Quote
Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, divulgers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios, in order to arouse students and get them to think about deep problems.

Surely one of the most popular thought experiments is the so-called “Trolley Problem”, widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.


As it happens, when presented with these questions, most people agree it is not ethical for the surgeon to kill the patient and distribute her organs thus saving the other five, but it is indeed ethical for the driver to divert the trolley, thus killing one and saving the five. Foot was intrigued by what the difference would be between the two cases.

She reasoned that, in the first case, the dilemma is between killing one and letting five die, whereas in the second case, the dilemma is between killing one and killing five. Foot argued that there is a big moral difference between killing and letting die. She considered negative duties (duties not to harm others) should have precedence over positive duties (duties to help others), and that is why letting five die is better than killing one.

This was a standard argument for many years, until another philosopher, Judith Jarvis Thomson, took over the discussion and considered new variants of the trolley scenario. Thomson considered a trolley going down its path about to run over five people, and the possibility of diverting it towards another track where only one person would be run over. But, in this case, the decision to do so would not come from the driver, but rather, from a bystander who pulls a lever in order to divert the trolley.

The bystander could simply do nothing, and let the five die. But, when presented with this scenario, most people believe that the bystander has the moral obligation to pull the lever. This is strange, as now, the dilemma is not between killing one and killing five, but instead, killing one and letting five die. Why can the bystander pull the lever, but the surgeon cannot kill the healthy person?
In the case of the surgeon version of the trolley problem, I think many people would make the following assumptions, which make them reluctant to make the sacrifice:
- there is some non-zero chance that the surgery would fail.
- the five patients' conditions are somehow the consequence of their own fault, such as not living a healthy life, thus making them deserve their failing organs.
- on the other hand, the healthy person to be sacrificed is given credit for living a healthy life.
- many people would likely see the situation from that healthy person's perspective.

To present the problem while keeping the second assumption out of the equation, we can state that the five patients are victims of a mass shooting, hence their failing organs have nothing to do with their lifestyle. Furthermore, to tip the balance further, the healthy person could be the mass shooter, or at least someone who let the mass shooting happen.
Or, in an alternative scenario, the five patients are heroes who risked their lives to stop the mass shooter and save others' lives, while the healthy patient is a coward who ran away from the shooting.
Title: Re: Is there a universal moral standard?
Post by: Halc on 20/11/2019 00:48:00
In the case of the surgeon version of the trolley problem, I think many people would make the following assumptions, which make them reluctant to make the sacrifice:
- there is some non-zero chance that the surgery would fail.
- the five patients' conditions are somehow the consequence of their own fault, such as not living a healthy life, thus making them deserve their failing organs.
- on the other hand, the healthy person to be sacrificed is given credit for living a healthy life.
- many people would likely see the situation from that healthy person's perspective.
Foot was correct in noticing that people don't really hold to the beliefs they claim.  A hypothetical situation (trolley) yields a different answer than a real one (such as the surgery policy described actually being implemented as policy).

Your objections seem to just be trying to avoid the issue.  Let's assume the surgery carries no risks.  The one dies, the others go on to live full lives.  This is like assuming no friction in simple physics questions, or assuming the trolley will not overturn when it hits the switch at speed, explode and kill 20.  Adding variables like this just detracts from the question being asked.

The next point attempts to put a level of blame on the conditions of the 5. So let's discard that. All have faulty organs (different ones) due to accidents or something, but not due to unhealthy choices. In fact, the reasoning is rejected even if only the one healthy person carries all the blame. It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that? It certainly makes no sense on David's utilitarian measurement.

There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice. Why not?  What is the actual moral code which typically drives practical policy?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/11/2019 06:45:59
Your objections seem to just be trying to avoid the issue.  Let's assume the surgery carries no risks.  The one dies, the others go on to live full lives.  This is like assuming no friction in simple physics questions, or assuming the trolley will not overturn when it hits the switch at speed, explode and kill 20.  Adding variables like this just detracts from the question being asked.
It's the opposite. I'm trying to identify the reason why people change their mind when the situation is slightly changed, one parameter at a time.
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?



I have some possible reasons to think about.
- Perhaps the crime isn't considered severe enough for the death penalty.
- Fear of revenge from the victim's relatives. There's always a non-zero chance the secret will be revealed.
- Hope that there might be better options without sacrificing anyone, such as technological advancement.
- The loss of those five lives is not that big a deal. Life can still go on as usual. Millions of people have died due to accidents, natural disasters, epidemics, famine, etc. without anyone getting their hands dirty with homicide.

In practice, people may choose differently from one another, as well as between theory and practice, because of their anxiety, time pressure, and different knowledge and experience related to the situation at hand.

There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice. Why not?  What is the actual moral code which typically drives practical policy?
In practice, that is a very rare circumstance.
The cost/resource required could be high. Who will pay for the operation? The uncertainty of cost and benefit would make surgeons avert risk by simply doing nothing, and no one would blame them.
It might also have been done already, but we never know because it is kept secret to avoid backlash and public outcry.
Title: Re: Is there a universal moral standard?
Post by: Halc on 20/11/2019 13:39:43
Quote from: Halc
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?
I have some possible reason to think about.
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.

Quote
- Perhaps the crime isn't considered severe enough for the death penalty.
"Condemned criminal" means it is severe enough. The death sentence has been made.
Quote
- Fear of revenge from the victim's relatives. There's always a non-zero chance the secret will be revealed.
There's a secret involved? I was suggesting this be above board. Not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than say death by lethal injection (which renders the organs unusable for transplants).

Quote
- Hope that there might be better options without sacrificing anyone, such as technological advancement.
People in need of transplants often have a short life expectancy, certainly shorter than the timescale of technological advancement. OK, they've made, I think, a few mechanical hearts, and the world is covered with mechanical kidneys (not implantable ones though). A dialysis machine does not fit in a torso. No mechanical livers. It's transplant or die. Not sure what other organs are life-saving. There are eye transplants, but those just restore sight, not life.

Speaking of livers, they do consider 'blame'. An alcoholic is far less likely to get a liver transplant, unless of course he has enough money/fame. Mickey Mantle is a prime example: he drank his liver into failure, got a transplant at age 63, and lived only two months after getting it. So actual morals in practice seem to be to give the scarce resource to the wealthy celebrity rather than to somebody who's more likely to get more years added to their life from having it done.
Sorry.  Side rant.

Quote
- The loss of those five lives is not that big a deal. Life can still go on as usual.
With that reasoning, murder shouldn't even be illegal.
Quote
Millions of people have died due to accidents, natural disasters, epidemics, famine, etc. without anyone getting their hands dirty with homicide.
Ah, there's the standard.  Because putting the trolley on the track with one is an act of homicide (involves the dirtying of someone's hands), but the non-act of not saving 5 (or 4) people who could be saved is not similarly labeled a homicide. Negligent homicide is effectively death caused by failing to take action, so letting the trolley go straight is still homicide.
This H word is why I brought up the death-row guy: his life is already forfeit, and it isn't a homicide to harvest him. The people who throw the switch to kill him are not charged with homicide. And yet there is no policy of saving lives with his organs, and I said 'rightly so' to that.

A specialty doctor could just decide to stay home one day to watch TV for once, without informing his hospital employer. As a result, 3 patients die. His hands are not 'dirty with homicide', and people die every day anyway, so there's nothing wrong with his choosing to blow the day off like that.
Sorry, I find this an immoral choice on the doctor's part.

Quote
Quote from: Halc
There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice.
In practice, that is a very rare circumstance.
In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.

Quote
The cost/resource required could be high. Who will pay for the operation?
The same person who pays when a donor is found. It costs this money in both circumstances. The high cost of the procedure is actually an incentive to do it. The hospitals make plenty of money on these sorts of things, so you'd think the solution I proposed would be found more attractive.
The cost (to save a given life this way) is in fact higher using accident victims, because, being unplanned, the matching procedure must be done in absolute haste. Planning it like this (finding a group who are matches for each other, none of whom is likely to be approved for a transplant through normal channels) eliminates much of the cost. They can all be brought together in one building instead of needing to preserve and transport the organs to different cities.

Quote
The uncertainty of cost and benefit would make surgeons avert risk by simply doing nothing, and no one would blame them.
Surgeons always take risks, and sometimes people blame them. They say to watch out for surgeons who have too low of a failure rate for a risky procedure because either they cook the books or they are too incompetent to take on the higher risk patients. But people very much do blame surgeons who refuse to save lives when it is within their capability.

Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2019 02:51:22
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.
The social experiments show that different people give different answers for different reasons. They also change their minds on different occasions, even when presented with exactly the same situation. It might even be the case that some of them just performed a coin toss to choose the answer.
Reasonable people would consider the expected cost and benefit of each option, which can be classified as short term, mid term, and long term. Before you can decide which way is the right way, you need to explore every possible scenario to see the best option.
When viewed retrospectively, people would usually choose the option with the highest benefit and lowest cost in the long term.
If you are a software developer or a law maker, looking for loopholes is an important part of your job. Those loopholes can be exploited, which may cause unbearable damage if not mitigated properly. They may not be obvious at a glance; that's why software/laws should be scrutinized.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2019 03:10:01
"Condemned criminal" means it is severe enough. The death sentence has been made.
Sorry for my limitations in English. It's not my native language. The dictionaries have several definitions for the word "condemn". Some say it can mean life imprisonment.


There's a secret involved? I was suggesting this be above board. Not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than say death by lethal injection (which renders the organs unusable for transplants).
In the surgeon version of the trolley problem, the secrecy is part of the scenario. Sorry for the mix-up.
I think one reason that can be given in the condemned-criminal scenario is the fear of future cases where innocent persons could be falsely condemned to death just so their organs can be harvested.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2019 03:48:18
There is another solution: You have these 5 people each in need of a different organ from the one healthy person.  So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him.  Win win, and yet even this isn't done in practice.

In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.
Sacrificing one unit to get the parts required to save many is routinely done in industry, where it's often called cannibalizing. But it's only done with machines and equipment, not humans.
I have already proposed this option in an earlier post in this thread.
Maybe finding many people with compatible organs is not easy in practice, or hospitals don't have adequate resources to perform many surgeries at once. They would also need consent from the patient to be sacrificed, and perhaps from the recipients themselves.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2019 03:56:35
Quote
- The loss of those five lives is not that big a deal. Life can still go on as usual.
With that reasoning, murder shouldn't even be illegal.
In the past, it wasn't. Ask the Aztecs, who sacrificed humans, or the Europeans colonizing the Americas and killing the natives.
It's still happening in conflict zones, where too many people meet limited resources to survive. Some of those killings can even tip the balance of life favorably.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/11/2019 04:08:47
A specialty doctor could just decide to stay home one day to watch TV for once, without informing his hospital employer. As a result, 3 patients die. His hands are not 'dirty with homicide', and people die every day anyway, so there's nothing wrong with his choosing to blow the day off like that.
Sorry, I find this an immoral choice on the doctor's part.
I don't know if all hospitals apply the same rules, but their employees have rights such as annual leave. The duty to provide adequate resources for their operation includes having backup doctors. So don't put so much pressure on the doctors.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 29/12/2019 06:35:57
For those who want to explore the arithmetic for morality, consider this statement.
Quote
Indeed, are happiness and misery mathematical entities that can be added or subtracted in the first place? Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
Homo Deus - Yuval Noah Harari.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 30/12/2019 14:04:54
Quote
Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?

Not a universal example, by any means. There are some people who choose to eat to excess (say outside the 3σ region of the normal distribution) and end up with no friends. Some people are socially anhedonic and prefer any amount of ice cream to even a hint of love. Some people (me included) don't much like ice cream.

You can base your moral standard on an arithmetic mean, or some other statistic, but the definition of immorality requires an arbitrary limit on deviation.

Let's go back to deliberate killing. It is apparently OK for a soldier to kill a uniformed opponent at a distance, or even hand-to-hand, but not to execute a wounded opponent. But it is a moral imperative to execute a wounded animal of any other species. Or he could kill a plain-clothes spy, but arbitrarily butchering other civilians is a war crime. Except if said civilians happen to be in the vicinity of a legitimate (or reasonably suspected) bombing target...... Surely, of all the possible human interactions, acts of war should be cut and dried by now? But they aren't.
Title: Re: Is there a universal moral standard?
Post by: syhprum on 30/12/2019 15:23:28
The rules of warfare are made by winners: it is OK to cunningly plan to set fire to a city like Tokyo or Dresden, burning hundreds of thousands of civilians to death, but to execute 50 or so air force personnel who have broken out of a prison camp is a heinous war crime that will never be forgotten.
Fear not, the rules of warfare will be settled one day; we get plenty of practice.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 31/12/2019 12:34:28
Problem is that the rules of war evolve, as with less lethal forms of combat, in the light of previous conflicts. At the outbreak of WWII no combatant had the capability or intention of obliterating entire cities but the conflict evolved from blitzkrieg and trench warfare (at which the Germans and Japanese were particularly adept) to attrition of supplies, where the geographical separation of American and Russian factories from the front line eventually yielded the advantage to the Allies.

The technology of firestorm and nuclear bombing then changed the primary objective from infantry occupation of foreign territory to demonstrably unlimited aerial destruction of the homeland, but the relevant Geneva Conventions did not protect noncombatant civilians at the time. Problem nowadays is asymmetric warfare, with guerrillas embedded in compliant (if not complicit) civilian populations: Geneva has not caught up with Vietnam.         
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 01/01/2020 15:49:28
The "surgeon problem" is an interesting consequence of basing morals (and thence laws) on rights rather than wrongs. One man's right (to life) becomes another man's duty (to keep him alive). This is the fundamental objection to integrating UK law, based on a small number of wrongs, into European law which is based on a large number of rights, and it is worth looking for this distinctive qualitative aspect of any moral or ethical system.

The case was raised earlier of the surgeon who takes a day off, during which several patients die. The "right to life" means that the State has to provide best possible medical cover for all conditions at all times, whether as a national service or by buying treatment for those who can't afford it. This is quite different from a national or private service providing "best available within budget", which avoids individual moral dilemmas by substituting explicit terms of business (e.g. "surgery available Mon-Fri only") for an unlimited duty. Thus it is contractually wrong for the surgeon to take a day off without notice, or for the scalpels to be blunt, but the buck stops there.

Less spectacular, but more of a practical problem, is the EU "right to family life" which has actually prevented the deportation of an undesirable who claimed his cat was his family, and the proposed destination would not accept cats.   
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 07/01/2020 09:01:12
Moral rules are set to achieve some desired states in a reliable manner, i.e. they produce more desired results in the long run.
Quote
In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts ‘moral’:

Deontology: Acts are moral (or not) in themselves: it’s just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant.

Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it.

Then there is Particularism: the idea that there are no clear moral principles as such.
https://charlescrawford.biz/2018/05/17/philosophy-trolley-problem-torture/
Even someone who embraces deontology recognizes that there are exceptions to their judgement of some actions, as seen in the use of the word most instead of all circumstances. This shows that moral value is not inherently attached to the actions themselves; it still depends on the circumstances, and the consequences are part of those.
All the objections/criticisms of consequentialism that I've seen so far make their point by emphasizing short-term consequences that contrast with the long-term overall consequences. If anybody knows of counterexamples, please let me know.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/01/2020 02:54:57
Let's go back to deliberate killing. It is apparently OK for a soldier to kill a uniformed opponent at a distance, or even hand-to-hand, but not to execute a wounded opponent.
It may depend on the wound and the circumstances. If it's so severe that there is no possibility of saving them in time (e.g. a hole through the lung), and letting them live only causes them to endure prolonged, meaningless pain, then executing them might be the best option.

But it is a moral imperative to execute a wounded animal of any other species. Or he could kill a plain-clothes spy, but arbitrarily butchering other civilians is a war crime. Except if said civilians happen to be in the vicinity of a legitimate (or reasonably suspected) bombing target...... Surely, of all the possible human interactions, acts of war should be cut and dried by now? But they aren't.
Cooperation is formed by the common interests of the parties involved. It is more reliable when they share common goals instead of spontaneous interests, and it can be permanent when they share common terminal goals.
When there are discrepancies in terminal goals, the parties will understandably set different priority lists, which may cause conflicts and disputes. If those conflicts of interest cannot be negotiated, war will break out.
So in order to create everlasting peace, we need to convince people of our common terminal goals, and build an adequately accurate and precise model of objective reality so we can act to achieve those goals in a reliable manner.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/01/2020 04:30:47
Quote
Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
Not a universal example, by any means. There are some people who choose to eat to excess (say outside the 3σ region of the normal distribution) and end up with no friends. Some people are socially anhedonic and prefer any amount of ice cream to even a hint of love. Some people (me included) don't much like ice cream.

You can base your moral standard on an arithmetic mean, or some other statistic, but the definition of immorality requires an arbitrary limit on deviation.
The example above was meant as a counterexample to the classical method of utilitarian morality. Another prominent criticism is the utility monster, discussed in my previous posts.
If we have found an ultimate terminal goal for conscious moral agents, we can set moral rules to achieve that goal. We can learn from AI research to optimize the process of setting those moral rules and avoid making the mistakes identified in that field, such as Goodhart's Curse. https://arbital.com/p/goodharts_curse/
Quote
Goodhart's Curse and meta-utility functions
An obvious next question is "Why not just define the AI such that the AI itself regards U as an estimate of V, causing the AI's U to more closely align with V as the AI gets a more accurate empirical picture of the world?"

Reply: Of course this is the obvious thing that we'd want to do. But what if we make an error in exactly how we define "treat U as an estimate of V"? Goodhart's Curse will magnify and blow up any error in this definition as well.

We must distinguish:

V, the true value function that is in our hearts.
T, the external target that we formally told the AI to align on, where we are hoping that T really means V.
U, the AI's current estimate of T or probability distribution over possible T.
U will converge toward T as the AI becomes more advanced. The AI's epistemic improvements and learned experience will tend over time to eliminate a subclass of Goodhart's Curse where the current estimate of U-value has diverged upward from T-value, cases where the uncertain U-estimate was selected to be erroneously above the correct formal value T.

However, Goodhart's Curse will still apply to any potential regions where T diverges upward from V, where the formal target diverges from the true value function that is in our hearts. We'd be placing immense pressure toward seeking out what we would retrospectively regard as human errors in defining the meta-rule for determining utilities. 1

Goodhart's Curse and 'moral uncertainty'
"Moral uncertainty" is sometimes offered as a solution source in AI alignment; if the AI has a probability distribution over utility functions, it can be risk-averse about things that might be bad. Would this not be safer than having the AI be very sure about what it ought to do?

Translating this idea into the V-T-U story, we want to give the AI a formal external target T to which the AI does not currently have full access and knowledge. We are then hoping that the AI's uncertainty about T, the AI's estimate of the variance between T and U, will warn the AI away from regions where from our perspective U would be a high-variance estimate of V. In other words, we're hoping that estimated U-T uncertainty correlates well with, and is a good proxy for, actual U-V divergence.

The idea would be that T is something like a supervised learning procedure from labeled examples, and the places where the current U diverges from V are things we 'forgot to tell the AI'; so the AI should notice that in these cases it has little information about T.

Goodhart's Curse would then seek out any flaws or loopholes in this hoped-for correlation between estimated U-T uncertainty and real U-V divergence. Searching a very wide space of options would be liable to select on:

Regions where the AI has made an epistemic error and poorly estimated the variance between U and T;
Regions where the formal target T is solidly estimable to the AI, but from our own perspective the divergence from T to V is high (that is, the U-T uncertainty fails to perfectly cover all T-V divergences).
The second case seems especially likely to occur in future phases where the AI is smarter and has more empirical information, and has correctly reduced its uncertainty about its formal target T. So moral uncertainty and risk aversion may not scale well to superintelligence as a means of warning the AI away from regions where we'd retrospectively judge that U/T and V had diverged.
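The core mechanism behind Goodhart's Curse can be shown in a few lines of Python (my own Monte Carlo sketch, not from the quoted article): give an optimizer many options whose true values V are all equal, add estimation noise to get U, and let it pick the highest U. The winner's estimate is then systematically far above its true value.

Code:
import random

def one_trial(n_options=1000, noise=1.0):
    # All true values V are 0, so the estimates U = V + error are pure noise.
    estimates = [random.gauss(0.0, noise) for _ in range(n_options)]
    # The optimizer picks the option with the highest estimate U.
    return max(estimates)

trials = [one_trial() for _ in range(500)]
print(round(sum(trials) / len(trials), 2))
# Typically ~3.2: the chosen option looks about 3 sigma better than it
# really is, and the bias grows as the optimizer searches more options.

This is why, as the quoted article argues, more optimization power magnifies any divergence between the proxy and the true value.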
Other interesting reading around AI problems.
https://www.lesswrong.com/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/01/2020 04:49:02
Utilitarian morality suffers from a problem stated as Goodhart's Law.
Quote
Goodhart's Law is named after the economist Charles Goodhart. A standard formulation is "When a measure becomes a target, it ceases to be a good measure." Goodhart's original formulation is "Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes."

For example, suppose we require banks to have '3% capital reserves' as defined some particular way. 'Capital reserves' measured that particular exact way will rapidly become a much less good indicator of the stability of a bank, as accountants fiddle with balance sheets to make them legally correspond to the highest possible level of 'capital reserves'.

Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
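The IBM example can be caricatured in code (a hypothetical toy model invented here, not IBM data): while nobody is paid per line, lines written roughly track real productivity; once lines are what gets paid, everyone pads toward a quota and the correlation collapses.

Code:
import random
import statistics  # statistics.correlation requires Python 3.10+

def programmer(paid_per_line):
    productivity = random.uniform(1, 10)  # real value delivered
    if paid_per_line:
        lines = 2000 + random.gauss(0, 50)  # padded toward a quota, whatever the work
    else:
        lines = 30 * productivity + random.gauss(0, 20)  # lines roughly track the work
    return productivity, lines

for paid_per_line in (False, True):
    prod, lines = zip(*[programmer(paid_per_line) for _ in range(1000)])
    print(paid_per_line, round(statistics.correlation(prod, lines), 2))
# Typical output: False ~0.97, True ~0.0 -- once the measure is targeted,
# it stops measuring.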

And the research below made a breakthrough in deciphering how the body’s cells sense touch, including pain and pleasure.
https://www.nature.com/articles/d41586-019-03955-w?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf227836567=1
Quote
Touch underlies the functioning of almost every tissue and cell type, says Patapoutian. Organisms interpret forces to understand their world, to enjoy a caress and to avoid painful stimuli. In the body, cells sense blood flowing past, air inflating the lungs and the fullness of the stomach or bladder. Hearing is based on cells in the inner ear detecting the force of sound waves.
It shows why morality based on pain and pleasure is susceptible to the problems identified as the winner's, optimizer's, and Goodhart's curses. https://arbital.com/p/goodharts_curse/
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 14/01/2020 17:28:13
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
A fine example. Slightly off topic from universal morality, but I've always distinguished between production and management. Production workers should get paid per unit product since they have no other choice or control. The function of management is to optimise, so managers should be paid only from a profit share. The IBM example is interesting since a line of code is not product but a component: if you can achieve the same result with less code, you have a more efficient product: the program or subroutine is the product.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/01/2020 09:55:24
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
A fine example. Slightly off topic from universal morality, but I've always distinguished between production and management. Production workers should get paid per unit product since they have no other choice or control. The function of management is to optimise, so managers should be paid only from a profit share. The IBM example is interesting since a line of code is not product but a component: if you can achieve the same result with less code, you have a more efficient product: the program or subroutine is the product.
This example highlights the discrepancy between long-term and short-term goals. As the name suggests, long-term goals have measurable results only after a long time has passed since they were set; hence, without other tools, we might not know whether they are going to be achieved, or even whether we are heading in the right direction. That's why we need short-term goals: to help us evaluate our actions and see whether they are aligned with our long-term goals. In process control systems, we can use a Smith predictor, a predictive controller designed to control systems with a significant feedback time delay. We must design the predictor to be as accurate as possible to minimize process fluctuation.
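For the curious, here is a minimal discrete-time sketch of a Smith predictor (all gains, time constants, and the delay are invented illustrative values, and the internal model is assumed to match the plant perfectly): the controller is fed the model's undelayed prediction, corrected by the mismatch between the measured output and the delayed model output, so it doesn't have to fight the dead time.

Code:
# Toy Smith predictor: first-order plant with a pure transport delay.
class FirstOrder:
    def __init__(self, gain=1.0, tau=10.0, delay=20):
        self.a = 1.0 / (tau + 1.0)  # discrete first-order update coefficient (dt = 1)
        self.gain = gain
        self.y = 0.0                # undelayed internal state
        self.pipe = [0.0] * delay   # transport delay modeled as a FIFO buffer

    def step(self, u):
        self.y += self.a * (self.gain * u - self.y)
        self.pipe.append(self.y)
        return self.pipe.pop(0)     # what the sensor sees, 'delay' steps late

plant, model = FirstOrder(), FirstOrder()  # model assumed identical to the plant
setpoint, kp, ki, integral, u = 1.0, 0.8, 0.05, 0.0, 0.0
for t in range(200):
    y_measured = plant.step(u)          # delayed measurement from the real process
    y_model_delayed = model.step(u)     # the model's prediction of that measurement
    # Smith predictor: feed back the undelayed prediction plus the mismatch.
    feedback = model.y + (y_measured - y_model_delayed)
    error = setpoint - feedback
    integral += error
    u = kp * error + ki * integral      # an ordinary PI controller on top
print(round(plant.y, 3))  # settles near the setpoint despite the dead time

If the internal model drifts from the real process, the mismatch term (y_measured - y_model_delayed) is what keeps the loop honest, which is why the accuracy of the predictor matters so much.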
The same logic also applies to moral rules. They are shortcuts to help us achieve long-term goals as conscious agents. We need to be more transparent about why those rules should be followed, and about what circumstances may trigger exceptions. Most cultures hold that killing, lying, and stealing are bad, yet all of them have found exceptions.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/01/2020 08:40:14
Using artificial intelligence to solve moral problems will inevitably lead to a question:
What are the differences between intelligence and consciousness?

https://www.quora.com/What-are-the-differences-between-consciousness-and-intelligence
Quote
Glyn Williams, Answered Aug 11, 2014

I personally define intelligence as the ability to solve problems.

And while we often attempt to solve problems using conscious methods. (Visualize a problem, visualize potential solutions etc)  - it is clear from nature that problems can be solved without intent of any sort.

Evolutionary biology has solved the problem of flight at least 4 times. Without a single conscious-style thought in its non-head.

Chess playing computers can solve chess problems by iterating through all possible moves.  Again without a sense of self.

Consciousness as it is usually defined, is type of intelligence that is associated with the problems of agency.  If you are a being and have to do stuff - then that might be called awareness or consciousness.

It's also worth noting that being conscious doesn't necessarily mean having high intelligence.
https://en.wikipedia.org/wiki/IQ_classification#Historical_IQ_classification_tables
Quote
IQ Range ("ratio IQ")   IQ Classification
175 and over   Precocious
150–174   Very superior
125–149   Superior
115–124   Very bright
105–114   Bright
95–104   Average
85–94   Dull
75–84   Borderline
50–74   Morons
25–49   Imbeciles
0–24   Idiots
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/01/2020 09:02:51
Moral rules are set to achieve some desired states in a reliable manner, i.e. they produce more desired results in the long run.
Quote
In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts ‘moral’:

Deontology: Acts are moral (or not) in themselves: it’s just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant.

Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it.

Then there is Particularism: the idea that there are no clear moral principles as such.
https://charlescrawford.biz/2018/05/17/philosophy-trolley-problem-torture/
Even someone who embraces deontology recognizes that there are exceptions to their judgement of some actions, as seen in the use of the word most instead of all circumstances. This shows that moral value is not inherently attached to the actions themselves; it still depends on the circumstances, and the consequences are part of those.
All the objections/criticisms of consequentialism that I've seen so far make their point by emphasizing short-term consequences that contrast with the long-term overall consequences. If anybody knows of counterexamples, please let me know.
Here is another objection to deontological morality. There are circumstances where following one moral rule will inevitably violate other moral rules. Which rules must we keep following, and which can be abandoned? How do we set priorities among those rules? Are the priorities fixed, or do they also depend on the circumstances?

In modern times, slavery has been classified as one of the most immoral acts. But this wasn't the case for the majority of human history. It isn't even on the list of the Ten Commandments, which still have many adherents. This is understandable, since at that time worse actions such as genocide were considered normal and had been carried out repeatedly by prominent moral authorities such as prophets, who presumably had higher moral standards than their peers.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 22/01/2020 16:22:41
prominent moral authorities such as prophets, who presumably had higher moral standards than their peers.
Illegitimate presumption! Priests, politicians, philosophers, prophets, and perverts in general, all profess to have higher moral standards than the rest of us, but so did Hitler and Trump. "By their deeds shall ye know them" (Matthew 7:16) is probably the least questionable line in the entire Bible.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 23/01/2020 04:21:12
prominent moral authorities such as prophets, who presumably had higher moral standards than their peers.
Illegitimate presumption! Priests, politicians, philosophers, prophets, and perverts in general, all profess to have higher moral standards than the rest of us, but so did Hitler and Trump.
Quote
presumption
/prɪˈzʌm(p)ʃ(ə)n/
noun
1.
an idea that is taken to be true on the basis of probability.
"underlying presumptions about human nature"
That definition seems to rely on Bayesian inference; hence there is still a chance that it turns out to be false.
I was talking about moral authority rather than formal authority, which you seem to use for your counterexamples.
I think that we can safely presume that many of their peers have lower moral standards. While they might not be the majority, the collective actions of a group often depend on its most vocal members.

"By their deeds shall ye know them" (Matthew 7:16) is probably the least questionable line in the entire Bible.

Agreed.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 23/01/2020 17:56:54
I think that we can safely presume that many of their peers have lower moral standards.
Lower than Hitler and Trump? Really?

Priests, politicians, and other parasites, assert their moral authority. "Proof by assertion" is not valid.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/01/2020 03:44:04
I think that we can safely presume that many of their peers have lower moral standards.
Lower than Hitler and Trump? Really?

Priests, politicians, and other parasites, assert their moral authority. "Proof by assertion" is not valid.
I was talking about moral authority rather than formal authority, which you seem to use for your counterexamples.
I think even many followers of Hitler and Trump who view them as legitimate formal authorities don't view them as moral authorities. Many people are even more morally bankrupt, but they don't come to prominence due to a lack of power or influence.
By the way, I was talking about the genocides carried out by ancient moral authorities, which let slavery slip off the list of immoral acts. https://en.wikipedia.org/wiki/Saul#Rejection
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/01/2020 08:50:32
Most people think that killing or hurting animals is immoral, especially animals showing high-level intelligence.
From the point of view of a universal utopia, we can evaluate such killing as a bad act because it wastes data processing capability. That evaluation may come up unconsciously because it's hardwired into the human brain. Some people may be able to suppress the thought thanks to brain plasticity, while others don't even have the same hardwiring to begin with, as in psychopaths.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 24/01/2020 16:57:10
Killing is sometimes essential, sometimes morally imperative.

True carnivores have no option, and humans who live in arctic regions are entirely dependent on killing highly intelligent species like whales and seals. I encouraged my kids to shoot and fish, with the proviso that they had to prepare and eat everything they killed. Result: three reasonably accomplished hunters (one a chef) and one vegetarian. No moral problem.

Normal humans don't like to see animals suffer, so we impose a legal imperative not to prolong the life of sick or injured animals unreasonably.

Some perverts claim supernatural authority for criminalising assisted suicide, others claim the same authority for mass killing and public executions. Priests, politicians and philosophers relish the suffering of others which is why they had no friends at school. Baffles me why we allow them any temporal authority.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 04/02/2020 03:37:08
It's also worth noting that being conscious doesn't necessarily mean having high intelligence.
There are at least two things required for the consciousness or self-awareness of an agent.
The first is the ability to represent itself in its internal model of its environment. As an illustration, if you put a map of your country on the floor, there will be a point on the map that is touching the actual point it refers to.
Quote
Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem#Illustrations
In a real conscious agent, some part of the agent's data storage must represent some property of the agent itself.
The second is the existence of a preference for one state over others. One of the most common examples in the animal world is pleasure over pain. Thus a map, even a dynamic one, is not conscious, due to its lack of preference.
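Here is a toy sketch of those two requirements (everything in it, names and numbers alike, is invented for illustration): the agent's internal model contains a "you are here" entry that it keeps in sync with its actual state, and a preference function ranks predicted states; without the preference, the model would be just a map.

Code:
class ToyAgent:
    def __init__(self):
        self.position = 0                            # the agent's actual state
        self.model = {"food_at": 5, "self_at": 0}    # internal model, including itself

    def preference(self, state):
        # Requirement 2: some states are preferred over others.
        return -abs(state["self_at"] - state["food_at"])

    def move(self, dx):
        self.position += dx
        self.model["self_at"] = self.position        # requirement 1: self-representation

    def choose(self):
        # Predict the state after each possible move and pick the preferred one.
        predicted = {dx: {**self.model, "self_at": self.model["self_at"] + dx}
                     for dx in (-1, 0, 1)}
        return max(predicted, key=lambda dx: self.preference(predicted[dx]))

agent = ToyAgent()
while agent.model["self_at"] != agent.model["food_at"]:
    agent.move(agent.choose())
print(agent.position)  # 5 -- the agent used its self-model to reach what it prefers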
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 04/02/2020 10:13:08
Here is another excerpt from Ray Kurzweil's book, "The Singularity Is Near".
Quote
Another key feature of the human brain is the ability to make predictions, including predictions about the results of its own decisions and actions. Some scientists believe that prediction is the primary function of the cerebral cortex, although the cerebellum also plays a major role in the prediction of movement.

Interestingly, we are able to predict or anticipate our own decisions. Work by physiology professor Benjamin Libet at the University of California at Davis shows that neural activity to initiate an action actually occurs about a third of a second before the brain has made the decision to take the action. The implication, according to Libet, is that the decision is really an illusion, that "consciousness is out of the loop." The cognitive scientist and philosopher Daniel Dennett describes the phenomenon as follows: "The action is originally precipitated in some part of the brain, and off fly the signals to muscles, pausing en route to tell you, the conscious agent, what is going on (but like all good officials letting you, the bumbling president, maintain the illusion that you started it all)."114

A related experiment was conducted recently in which neurophysiologists electronically stimulated points in the brain to induce particular emotional feelings. The subjects immediately came up with a rationale for experiencing those emotions. It has been known for many years that in patients whose left and right brains are no longer connected, one side of the brain (usually the more verbal left side) will create elaborate explanations ("confabulations") for actions initiated by the other side, as if the left side were the public-relations agent for the right side.

The most complex capability of the human brain—what I would regard as its cutting edge—is our emotional intelligence. Sitting uneasily at the top of our brain's complex and interconnected hierarchy is our ability to perceive and respond appropriately to emotion, to interact in social situations, to have a moral sense, to get the joke, and to respond emotionally to art and music, among other high-level functions. Obviously, lower-level functions of perception and analysis feed into our brain's emotional processing, but we are beginning to understand the regions of the brain and even to model the specific types of neurons that handle such issues.

These recent insights have been the result of our attempts to understand how human brains differ from those of other mammals. The answer is that the differences are slight but critical, and they help us discern how the brain processes emotion and related feelings. One difference is that humans have a larger cortex, reflecting our stronger capability for planning, decision making, and other forms of analytic thinking. Another key distinguishing feature is that emotionally charged situations appear to be handled by special cells called spindle cells, which are found only in humans and some great apes. These neural cells are large, with long neural filaments called apical dendrites that connect extensive signals from many other brain regions. This type of "deep" interconnectedness, in which certain neurons provide connections across numerous regions, is a feature that occurs increasingly as we go up the evolutionary ladder. It is not surprising that the spindle cells, involved as they are in handling emotion and moral judgment, would have this form of deep interconnectedness, given the complexity of our emotional reactions.

What is startling, however, is how few spindle cells there are in this tiny region: only about 80,000 in the human brain (about 45,000 in the right hemisphere and 35,000 in the left hemisphere). This disparity appears to account for the perception that emotional intelligence is the province of the right brain, although the disproportion is modest. Gorillas have about 16,000 of these cells, bonobos about 2,100, and chimpanzees about 1,800. Other mammals lack them completely.
According to the Wikipedia article, this cell type is also found in cetaceans and elephants.
Quote
Dr. Arthur Craig of the Barrow Neurological Institute in Phoenix has recently provided a description of the architecture of the spindle cells.115 Inputs from the body (estimated at hundreds of megabits per second), including nerves from the skin, muscles, organs, and other areas, stream into the upper spinal cord. These carry messages about touch, temperature, acid levels (for example, lactic acid in muscles), the movement of food through the gastrointestinal tract, and many other types of information. This data is processed through the brain stem and midbrain. Key cells called Lamina 1 neurons create a map of the body representing its current state, not unlike the displays used by flight controllers to track airplanes.

The information then flows through a nut-size region called the posterior ventromedial nucleus (VMpo), which apparently computes complex reactions to bodily states such as "this tastes terrible," "what a stench," or "that light touch is stimulating." The increasingly sophisticated information ends up at two regions of the cortex called the insula. These structures, the size of small fingers, are located on the left and right sides of the cortex. Craig describes the VMpo and the two insula regions as "a system that represents the material me."

Although the mechanisms are not yet understood, these regions are critical to self-awareness and complicated emotions. They are also much smaller in other animals. For example, the VMpo is about the size of a grain of sand in macaque monkeys and even smaller in lower-level animals. These findings are consistent with a growing consensus that our emotions are closely linked to areas of the brain that contain maps of the body, a view promoted by Dr. Antonio Damasio at the University of Iowa.116 They are also consistent with the view that a great deal of our thinking is directed toward our bodies: protecting and enhancing them, as well as attending to their myriad needs and desires.

Very recently yet another level of processing of what started out as sensory information from the body has been discovered. Data from the two insula regions goes on to a tiny area at the front of the right insula called the frontoinsular cortex. This is the region containing the spindle cells, and fMRI scans have revealed that it is particularly active when a person is dealing with high-level emotions such as love, anger, sadness, and sexual desire. Situations that strongly activate the spindle cells include when a subject looks at her romantic partner or hears her child crying.

Anthropologists believe that spindle cells made their first appearance ten to fifteen million years ago in the as-yet undiscovered common ancestor to apes and early hominids (the family of humans) and rapidly increased in numbers around one hundred thousand years ago. Interestingly, spindle cells do not exist in newborn humans but begin to appear only at around the age of four months and increase significantly from ages one to three. Children's ability to deal with moral issues and perceive such higher-level emotions as love develop during this same time period.

The spindle cells gain their power from the deep interconnectedness of their long apical dendrites with many other brain regions. The high-level emotions that the spindle cells process are affected, thereby, by all of our perceptual and cognitive regions. It will be difficult, therefore, to reverse engineer the exact methods of the spindle cells until we have better models of the many other regions to which they connect. However, it is remarkable how few neurons appear to be exclusively involved with these emotions. We have fifty billion neurons in the cerebellum that deal with skill formation, billions in the cortex that perform the transformations for perception and rational planning, but only about eighty thousand spindle cells dealing with high-level emotions. It is important to point out that the spindle cells are not doing rational problem solving, which is why we don't have rational control over our responses to music or over falling in love. The rest of the brain is heavily engaged, however, in trying to make sense of our mysterious high-level emotions.

And here is the description from Wikipedia:
Quote
Spindle neurons, also called von Economo neurons (VENs), are a specific class of mammalian cortical neurons characterized by a large spindle-shaped soma (or body) gradually tapering into a single apical axon (the ramification that transmits signals) in one direction, with only a single dendrite (the ramification that receives signals) facing opposite. Other cortical neurons tend to have many dendrites, and the bipolar-shaped morphology of spindle neurons is unique here.

Spindle neurons are found in two very restricted regions in the brains of hominids (humans and other great apes): the anterior cingulate cortex (ACC) and the fronto-insular cortex (FI), but recently they have been discovered in the dorsolateral prefrontal cortex of humans.[1] Spindle cells are also found in the brains of a number of cetaceans,[2][3][4] African and Asian elephants,[5] and to a lesser extent in macaque monkeys[6] and raccoons.[7] The appearance of spindle neurons in distantly related clades suggests that they represent convergent evolution—specifically, as an adaptation to accommodate the increasing size of these distantly-related animals' brains.
https://en.wikipedia.org/wiki/Spindle_neuron
(https://upload.wikimedia.org/wikipedia/commons/1/16/Spindle-cell.png)
Cartoon of a normal pyramidal cell (left) compared to a spindle cell (right)

Quote
Spindle neuron concentrations
ACC
The largest number of ACC spindle neurons are found in humans, fewer in the gracile great apes, and fewest in the robust great apes. In both humans and bonobos they are often found in clusters of 3 to 6 neurons. They are found in humans, bonobos, chimpanzees, gorillas, orangutans, some cetaceans, and elephants.[16]:245 While total quantities of ACC spindle neurons were not reported by Allman in his seminal research report (as they were in a later report describing their presence in the frontoinsular cortex, below), his team's initial analysis of the ACC layer V in hominids revealed an average of ~9 spindle neurons per section for orangutans (rare, 0.6% of section cells), ~22 for gorillas (frequent, 2.3%), ~37 for chimpanzees (abundant, 3.8%), ~68 for bonobos (abundant/clusters, 4.8%), ~89 for humans (abundant/clusters, 5.6%).[17]

Fronto-insula

All of the primates examined had more spindle cells in the fronto-insula of the right hemisphere than in the left. In contrast to the higher number of spindle cells found in the ACC of the gracile bonobos and chimpanzees, the number of fronto-insular spindle cells was far higher in the cortex of robust gorillas (no data for Orangutans was given). An adult human had 82,855 such cells, a gorilla had 16,710, a bonobo had 2,159, and a chimpanzee had a mere 1,808 – despite the fact that chimpanzees and bonobos are great apes most closely related to humans.

Dorsolateral PFC
Von Economo neurons have been located in the Dorsolateral prefrontal cortex of humans[1] and elephants.[5] In humans they have been observed in higher concentration in Brodmann area 9 (BA9) – mostly isolated or in clusters of 2, while in Brodmann area 24 (BA24) they have been found mostly in clusters of 2-4.[1]

Quote
Clinical significance
Abnormal spindle neuron development may be linked to several psychotic disorders, typically those characterized by distortions of reality, disturbances of thought, disturbances of language, and withdrawal from social contact[citation needed]. Altered spindle neuron states have been implicated in both schizophrenia and autism, but research into these correlations remains at a very early stage. Frontotemporal dementia involves loss of mostly spindle neurons.[18] An initial study suggested that Alzheimer's disease specifically targeted von Economo neurons; this study was performed with end-stage Alzheimer brains in which cell destruction was widespread, but later it was found that Alzheimer's disease doesn't affect the spindle neurons.

The research results mentioned above support the assertion that humans have a higher consciousness level than other animals. They also provide some ways to rank other animals based on their capacity to experience emotions.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 09/02/2020 23:41:56
The research results mentioned above support the assertion that humans have a higher consciousness level than other animals. They also provide some ways to rank other animals based on their capacity to experience emotions.

1. Please define consciousness and "level of consciousness"
2. Please show how you measured it in at least three species (a mammal, an insect, a fish)
3. Is religious intolerance indicative of rank in the same sense as altruism? Please list some non-human species that exhibit religious intolerance.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/02/2020 15:19:55
Here is the standard definition by dictionary.
Quote
  con·scious·ness
/ˈkän(t)SHəsnəs/
noun
the state of being awake and aware of one's surroundings.
"she failed to regain consciousness and died two days later"
Similar:
awareness
wakefulness
alertness
responsiveness
sentience
Opposite:
unconsciousness
the awareness or perception of something by a person.
plural noun: consciousnesses
"her acute consciousness of Mike's presence"
Similar:
awareness of
knowledge of the existence of
alertness to
sensitivity to
realization of
cognizance of
mindfulness of
perception of
apprehension of
recognition of
the fact of awareness by the mind of itself and the world.
"consciousness emerges from the operations of the brain"
In the context of morality, I've tried to give the proper description here.
It's also worth noting that being conscious doesn't necessarily mean having high intelligence.
There are at least two things required for the consciousness or self-awareness of an agent.
The first is the ability to represent itself in its internal model of its environment. As an illustration, if you put a map of your country on the floor, there will be a point on the map that is touching the actual point it refers to.
Quote
Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem#Illustrations
In a real conscious agent, some part of the agent's data storage must represent some property of the agent itself.
The second is the existence of a preference for one state over others. One of the most common examples in the animal world is pleasure over pain. Thus a map, even a dynamic one, is not conscious, due to its lack of preference.

I also mentioned previously that consciousness is a multidimensional parameter, just like intelligence, health, and wealth.
The consciousness level of an agent depends on the accuracy and precision of the agent's model of reality, which are affected by many parameters such as memory capacity, memory reliability/error resistance, data processing speed, and sensing and actuating power and precision.
To measure consciousness level, we can combine those parameters using some formula/algorithm and project them onto a specified axis. One convenient parameter to serve this purpose, as I pointed out earlier, is the time span of plans an agent can make and execute effectively. An alternative puts the emphasis on statistical probability, i.e. the chance that an agent can successfully execute a specified plan over a defined time period.
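The probability version can be sketched in a couple of lines (a toy model under my own assumptions: each step of a plan succeeds independently with probability p, and the agent's effective horizon is the longest plan whose overall success probability stays above a threshold):

Code:
import math

def effective_horizon(step_reliability, threshold=0.5):
    # P(an n-step plan succeeds) = step_reliability ** n; solve for the
    # largest n that keeps this probability above the threshold.
    return math.floor(math.log(threshold) / math.log(step_reliability))

for p in (0.9, 0.99, 0.999):  # hypothetical per-step reliabilities
    print(p, effective_horizon(p))
# 0.9 -> 6 steps, 0.99 -> 68, 0.999 -> 692

On this toy metric, small gains in the accuracy of an agent's model of reality stretch its plannable time span enormously, which fits the claim that consciousness level tracks model accuracy.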
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/02/2020 04:49:30
2. Please show how you measured it in at least three species (a mammal, an insect, a fish)
3. Is religious intolerance indicative of rank in the same sense as altruism? Please list some non-human species that exhibit religious intolerance.
Mammals, insects, and fish are large groups with large in-group variance, but I think we can still use the method I described above to measure their individual levels of consciousness. We must also be aware of the distinction between the effective and the potential level of consciousness. A hunting shark is effectively more conscious than a human under general anesthetic. On the other hand, a human baby has a higher level of potential consciousness than a smart dog.
Religious intolerance can be attributed to an incorrect model of reality in a conscious agent's data processing unit.
I don't know of any non-human animals showing religious intolerance in the strictest sense. The closest thing I know of is tribal intolerance, which often leads to genocide and cannibalism in chimpanzees. But if the group is enlarged to include interspecies relationships, then lions killing baby cheetahs and the pack hunting of sharks can be mentioned here.
(https://www.thesuperfins.com/wp-content/uploads/2016/12/f-shark-hunt-together.jpg)
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/02/2020 15:18:41
So we can view consciousness as a combination of intelligence and self-awareness, which are respectively related to the data processing ability and the accuracy of the internal model representing the agent itself and its environment.
Both are multidimensional parameters themselves, which can also be quantified independently.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/02/2020 16:31:01
Interestingly, none of the dictionary definitions has anything to do with intelligence or self-awareness. It's all about responding to, or being capable of responding to, a stimulus. Which is the characteristic of all living things.

A shark can respond to a drop of blood in a swimming pool, which makes it billions of times more conscious than you or me.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/02/2020 22:09:12
Interestingly, none of the dictionary definitions has anything to do with intelligence or self-awareness. It's all about responding to, or being capable of responding to, a stimulus. Which is the characteristic of all living things.

A shark can respond to a drop of blood in a swimming pool, which makes it billions of times more conscious than you or me.
As I said, consciousness is a multidimensional parameter. Perhaps some species of shark have a higher sensitivity to certain chemicals in water than humans do. But they are not conscious of what happens on land, nor are they aware of killer asteroids coming toward the Earth.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/02/2020 22:23:47
The concept of consciousness is useful for setting up moral rules. Many people use it to justify the rights or privileges of beings, where beings with a higher level of consciousness get more rights or privileges. Unsurprisingly, the ones who set up those moral rules tend to overestimate their own level of consciousness while underestimating that of others.
While I agree that the concept of consciousness is important in setting up moral rules, I prefer to approach the problem from the other direction. In this thread I have argued that moral rules are shortcuts or tools to help achieve some ultimate goals. Hence proper moral rules are those which effectively and efficiently direct available resources to get closer to those goals.
Making correct decisions in long-term affairs requires an immense amount of information and data processing capability, which is not available to many conscious agents. Thus shortcut moral rules are needed to overcome these limitations. Of course those shortcut rules don't always produce the best outcome, but we can rely on the Pareto principle and hope that they work most of the time.
IMO, the consciousness level of an agent is useful for selecting appropriate moral rules for it. For a house cat or dog, not peeing or defecating all over the place might be enough. For little kids we can apply simple rules such as not lying; interestingly, white-lie stories such as Santa Claus or the tooth fairy might be helpful. Religions were sometimes adequate for ancient peoples. But as we get closer to the technological singularity, we need more powerful moral rules to prevent large-scale conflicts, which could be devastating.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/02/2020 22:32:46
And you have no idea of what interests sharks, and very little notion of what lies at the bottom of the sea. The dictionary definition of consciousness is about the ability to respond to a stimulus, not about the range or nature of stimuli that might trigger a response. It isn't defined as multidimensional but anydimensional. A thing is either conscious or not. Most humans have no idea that asteroids even exist. There are very few blind astronomers or deaf musicians, and some people do not feel pain, but live humans all possess consciousness.

I detect a fellow skeptic in the area of rights and privileges!
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/02/2020 23:33:22
If most people object to my usage of the term consciousness, I don't mind inventing new terms to better represent what I mean here.
Even in common usage, consciousness has levels.

(https://www.researchgate.net/profile/Sharon_Edwards9/publication/11215540/figure/tbl2/AS:601712289124362@1520470797326/Words-used-to-describe-level-of-consciousness.png)
https://www.researchgate.net/publication/11215540_Using_the_Glasgow_Coma_Scale_analysis_and_limitations/figures?lo=1
https://en.m.wikipedia.org/wiki/Altered_level_of_consciousness
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/02/2020 04:13:15
I detect a fellow skeptic in the area of rights and privileges!
If that's the case, I think you'll enjoy this performance of George Carlin describing rights and privileges.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 12/02/2020 12:00:59
Even in common usage, consciousness has levels.
All of which are easily observed in all animals and even have analogs in the plant world. They do not distinguish between species.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/02/2020 22:54:38
Consciousness level, in the meaning that's relevant to morality, extends from zero to infinity. Comparing it to the list above is like comparing the whole electromagnetic spectrum to the colors of the rainbow.

The list above only covers the small portion of extended consciousness that is relevant for medical treatment. Can we really say that an illiterate patient has the same level of self-awareness as a medical doctor?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 13/02/2020 00:49:25
Very much so. You don't have to be literate to be a narcissist (Donald Trump struggles with words in lower case and has the style and vocabulary of a 6-year-old) or anorexic - two extremes of self-awareness. After 70 hours without sleep, few junior doctors are aware of anything, never mind themselves.

As for consciousness, you seem now to be defining it as "something bigger than its definition". An amusing take on Russell's Paradox but not very helpful.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/02/2020 03:18:25
Very much so. You don't have to be literate to be a narcissist (Donald Trump struggles with words in lower case and has the style and vocabulary of a 6-year-old) or anorexic - two extremes of self-awareness. After 70 hours without sleep, few junior doctors are aware of anything, never mind themselves.

As for consciousness, you seem now to be defining it as "something bigger than its definition". An amusing take on Russell's Paradox but not very helpful.

Having an inaccurate model of reality reduces the agent's consciousness level, since it renders the execution of their plans less effective. Illiterate patients may not be aware that they have a lymphatic system in their body (or a cerebellum, a duodenum, or other internal organs).

If you want to stay with the strict definitions already written in current dictionaries, you're welcome to. But even now, extending the meaning of the word "consciousness" is not new.
Quote
Consciousness is the state or quality of awareness.
https://en.wikipedia.org/wiki/Consciousness_(disambiguation)

If we follow the pattern of the list of consciousness levels above, we can conclude that a higher level of consciousness reflects a higher accuracy of the agent's internal model of objective reality. An unresponsive agent's internal model doesn't follow the changes in its surroundings, which makes it less accurate. We can call states of consciousness above and below those on the list super-consciousness and sub-consciousness, respectively, but they are levels of consciousness nonetheless.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 13/02/2020 13:05:03
Having an inaccurate model of reality reduces the agent's consciousness level, since it renders the execution of their plans less effective.
Though barely literate and with no concept of reality, Trump is extremely effective in executing his plan to build a big wall and get re-elected. Who cares about reality when you can shout into a microphone?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 14/02/2020 02:33:26
Having an inaccurate model of reality reduces the agent's consciousness level, since it renders the execution of their plans less effective.
Though barely literate and with no concept of reality, Trump is extremely effective in executing his plan to build a big wall and get re-elected. Who cares about reality when you can shout into a microphone?
It only happens with the help of Trump's enablers, who seek personal gain. But it won't last long if things continue that way. Objective reality has limited tolerance. When the long-term damage becomes more apparent, more people will start to realize it and try to make a change.
Though many people have expressed concern that a great political power is led by a toddler inside an old man's body, history tells us that some real toddlers have held that position.
https://www.goodorient.com/blog/child-emperors-in-china-history/
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/02/2020 11:47:05
But what we have here is an evil man pretending to be naïve.  The damage done by his heroes lasted for decades.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/02/2020 13:20:08
Scientists have continuously improved their understanding of consciousness. Here is one of the newest results.
Quote
In a wild new experiment conducted on monkeys, scientists discovered that a tiny, but powerful area of the brain may enable consciousness: the central lateral thalamus. Activation of the central lateral thalamus and deep layers of the cerebral cortex drives pathways in the brain that carry information between the parietal and frontal lobe in the brain, the study suggests.
This brain circuit works as a sort-of “engine for consciousness,” the researchers say, enabling conscious thought and feeling in primates.

To zero in on this brain circuit, a scientific team put macaque monkeys under anesthesia, then stimulated different parts of their brain with electrodes at a frequency of 50 Hertz. Essentially, they zapped different areas of the brain and observed how the monkeys responded. When the central lateral thalamus was stimulated, the monkeys woke up and their brain function resumed — even though they were STILL UNDER ANESTHESIA. Seconds after the scientists switched off the stimulation, the monkeys went right back to sleep.

This research was published Wednesday in the journal Neuron.

“Science doesn’t often leave opportunity for exhilaration, but that’s what that moment was like for those of us who were in the room,” co-author Michelle Redinbaugh, a researcher at the University of Wisconsin, Madison, tells Inverse.
https://www.inverse.com/mind-body/3d-brain-models-crucial-stage-of-human-development
https://www.cell.com/neuron/fulltext/S0896-6273(20)30005-2
Most people agree that consciousness plays a central role in morality. Hence understanding consciousness is necessary for discussing morality productively. IMO anyone who claims that consciousness cannot be understood scientifically has committed a kind of arrogance, namely: "if I can't understand something, no one else can."
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/02/2020 13:44:52
It would be a lot easier to understand something if you could define it. So far you have rejected the clinical, dictionary definition and asserted that the word means some abstract characteristic of living things that cannot be defined or measured, but can be used to rank the things that possess it.  Not a fruitful starting point.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/02/2020 13:48:48
But what we have here is an evil man pretending to be naïve.  The damage done by his heroes lasted for decades.
It takes a closer look to determine whether he is indeed an inherently evil man. It's possible that he suffers from some mental illness which makes him believe his own lies.
Whatever the cause, it's the moral responsibility of society in general to mitigate the damage already done.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 15/02/2020 14:10:37
It would be a lot easier to understand something if you could define it. So far you have rejected the clinical, dictionary definition and asserted that the word means some abstract characteristic of living things that cannot be defined or measured, but can be used to rank the things that possess it.  Not a fruitful starting point.
If you carefully read my posts in this thread, you'll see I have already tried several times to provide a definition of consciousness useful for discussing morality. I also showed that it is an extended version of the clinical definition manifested in the Glasgow Coma Scale. If the levels in that scale are likened to the handful of colors in the rainbow, then the concept of consciousness needed to build moral rules is like the whole electromagnetic spectrum.
It isn't defined as multidimensional but as any-dimensional. A thing is either conscious or not.
Your definition above makes consciousness less relevant to building moral rules.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/02/2020 00:32:34
Quite so. There's no point in building anything on an undefined and indefinable foundation.

Any rule must have a purpose, to enhance or prevent something. If you construct moral rules to enhance cooperation and happiness, and prevent  conflict and unhappiness, you have the means to test their effectiveness and permit evolution of the system in the light of your findings.

It is also worth remembering that we do not live in a static, perfect world. There will always be hard cases and exceptions, which need to be dealt with as such and not necessarily to impact the general framework. Simple case: you should pay your taxes. But if your house has just burnt down, your overriding imperative is to shelter your family, not to give the government money to squander on railway consultants. Simpler still: you shouldn't kill civilians, but there's no point in coming second in a fight.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/02/2020 01:10:57
What I mean by the multidimensionality of consciousness is analogous to the multidimensionality of intelligence, which can be broken down into several parameters, such as verbal, numerical, spatial, and memory strength. People with similar overall intelligence may have different strengths and weaknesses across those parameters. The final assessment thus depends on the formula or algorithm used to combine those parameters into a single value that can be used to compare intelligence, at least on a relative scale.
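To illustrate what such a combining formula might look like, here is a minimal sketch in Python; the sub-scores and weights are hypothetical assumptions for illustration, not an established psychometric model.
Code:
# Hypothetical aggregation of sub-scores into one relative score.
def composite_score(scores, weights):
    """Weighted average of sub-scores, all assumed to be on the same scale."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

scores  = {"verbal": 110, "numerical": 125, "spatial": 95, "memory": 105}
weights = {"verbal": 1.0, "numerical": 1.0, "spatial": 1.0, "memory": 0.5}

print(round(composite_score(scores, weights), 1))  # 109.3, a single comparable value
Different choices of weights yield different rankings of the same people, which is exactly the dependence on the formula described above.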
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/02/2020 02:33:18
I think we can all agree that a good moral rule is a useful one. But a follow-up question naturally comes up: useful according to whom?
Only conscious agents can find something useful. That's why the concept of consciousness is important here. One tool that can be useful for analyzing the problem is the anthropic principle.
A good universal moral rule must be useful to any conscious agent, universally.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/02/2020 02:46:54
It is also worth remembering that we do not live in a static, perfect world. There will always be hard cases and exceptions, which need to be dealt with as such and not necessarily to impact the general framework. Simple case: you should pay your taxes. But if your house has just burnt down, your overriding imperative is to shelter your family, not to give the government money to squander on railway consultants. Simpler still: you shouldn't kill civilians, but there's no point in coming second in a fight.
A legitimate exception means that we acknowledge a higher-priority moral rule than the one we are about to break. A mature society should provide its highest-priority moral rules in a hierarchical structure to help its members make quick decisions when facing hard cases. Autonomous vehicles and other AI with significant impact on society must also have that hierarchy incorporated into their algorithms; a sketch of one possible encoding follows.
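This is a minimal sketch only; the rules, their ordering, and the predicate names are illustrative assumptions, not a real autonomous-vehicle rule set.
Code:
# Hypothetical priority-ordered rule hierarchy: the first applicable
# rule wins, so a higher-priority rule legitimately overrides a
# lower-priority one in hard cases.
RULES = [
    ("protect human life",       lambda s: s.get("life_at_risk")),
    ("obey traffic law",         lambda s: s.get("law_applies")),
    ("minimize property damage", lambda s: s.get("property_at_risk")),
]

def decide(situation):
    for name, applies in RULES:
        if applies(situation):
            return name
    return "default behavior"

# Swerving illegally to avoid a pedestrian: "protect human life"
# outranks "obey traffic law".
print(decide({"life_at_risk": True, "law_applies": True}))
The design choice here is simply that rule order encodes priority; a legitimate exception is then just a higher rule firing first.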
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/02/2020 12:26:08
The final assessment thus depends on the formula or algorithm used to combine those parameters into a single value that can be used to compare intelligence, at least on a relative scale.
In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/02/2020 12:30:41
A legitimate exception means that we acknowledge a higher-priority moral rule than the one we are about to break.

No, it means that expediency sometimes trumps morality, particularly where any other course of action would incapacitate the moral agent. Or as The Boss tells me "Smith & Wesson beats four aces" (she was raised in the  Midwest).
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/02/2020 12:38:27
I think we can all agree that a good moral rule is a useful one. But a follow-up question naturally comes up: useful according to whom?

Depends on context. The planet, Society, British society, Yorkshiremen, family and friends, family only, or oneself? Or how about some Good Samaritan altruism? As long as you don't invoke any deities, the answer is usually fairly straightforward since the consequences of any action tend to diminish with distance from the source.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/02/2020 23:01:04
The final assessment thus depends on the formula or algorithm used to combine those parameters into a single value that can be used to compare intelligence, at least on a relative scale.
In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
The concept of IQ has been around for more than a century without my involvement.
Quote
  An intelligence quotient (IQ) is a total score derived from a set of standardized tests designed to assess human intelligence.[1] The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests at University of Breslau he advocated in a 1912 book.[2]
https://en.m.wikipedia.org/wiki/Intelligence_quotient
While the concept of intelligence is meant to represent problem-solving capability, the concept of consciousness includes the ability to determine which problems to solve first.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/02/2020 23:37:07
The concept of IQ has been around for more than a century without my involvement.
The concept has, but its only definition is "something to do with quizzes, with a normal distribution and a mean score of 100". The results you get for any particular test vary according to the language and culture within which you apply it!

And anyway, we aren't talking about intelligence, but asking for your definition of consciousness. A decent computer can probably score 200+ on the best IQ tests. Would that signify consciousness, or even intelligence?

I have massive respect for your contributions to this forum, but beware - everyone who invokes "consciousness" seems to end up digging a hole for himself to fall into! 
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/02/2020 07:06:09
The concept has, but its only definition is "something to do with quizzes, with a normal distribution and a mean score of 100". The results you get for any particular test vary according to the language and culture within which you apply it!

And anyway, we aren't talking about intelligence, but asking for your definition of consciousness. A decent computer can probably score 200+ on the best IQ tests. Would that signify consciousness, or even intelligence?
 
An IQ test is specifically designed to measure human intelligence. An average human can be modeled as hardware and software which take inputs, process the data, and generate outputs. Test takers are assumed to already have some commonly used software for data processing, such as the concepts of numbers, letters, grammar, and basic geometry. Without the proper software, even the best computer hardware can't solve many problems.
The ability to solve problems is adequate to score points for intelligence. For consciousness, there are additional requirements, such as self-awareness.
Quote
Historically, IQ was a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in terms of years and months. The resulting fraction (quotient) is multiplied by 100 to obtain the IQ score.[3] For modern IQ tests, the median raw score of the norming sample is defined as IQ 100 and scores each standard deviation (SD) up or down are defined as 15 IQ points greater or less.[4] By this definition, approximately two-thirds of the population scores are between IQ 85 and IQ 115. About 2.5 percent of the population scores above 130, and 2.5 percent below 70.[5][6]

Scores from intelligence tests are estimates of intelligence. Unlike, for example, distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence".[7] IQ scores have been shown to be associated with such factors as morbidity and mortality,[8][9] parental social status,[10] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[11][12] and the mechanisms of inheritance.[13]

IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention and speed.[14] In research contexts, they have been studied as predictors of job performance[15] and income.[16] They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate that scales to three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.
https://en.wikipedia.org/wiki/Intelligence_quotient
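As a worked example of the historical ratio definition quoted above (the ages are illustrative numbers only):
Code:
# Historical ratio IQ: mental age divided by chronological age, times 100.
mental_age, chronological_age = 12, 10
iq = mental_age / chronological_age * 100
print(iq)  # 120.0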

In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
The formula of the test can be fine-tuned to approach the desired result. The arbiter for the IQ test is job performance, which is useful to hiring managers.
Quote
Job performance
According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[15] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[122] The correlations were higher when the unreliability of measurement methods was controlled for.[10] While IQ is more strongly correlated with reasoning and less so with motor function,[123] IQ-test scores predict performance ratings in all occupations.[15] That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally-skilled activities, athletic strength (manual strength, speed, stamina, and coordination) are more likely to influence performance.[15] The prevailing view among academics is that it is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance. This view has been challenged by Byington & Felps (2010), who argued that "the current applications of IQ-reflective tests allow individuals with high IQ scores to receive greater access to developmental resources, enabling them to acquire additional capabilities over time, and ultimately perform their jobs better."[124]

In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[125] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[126]

The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs.

To serve a similar purpose, measuring consciousness level could be useful for selecting public leaders and lawmakers, since their decisions affect many other people.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 18/02/2020 17:27:48
All the quotes seem to suggest is that if you select people with a relevant test, they will perform better than average or those that fail the test. But the key is relevance. A blind man with an IQ of 130 probably won't make a good pilot. Bench pressing 100 kilos is quite a feat, but a footballer needs quite different feet. 

So you assert that we should select lawmakers on the grounds of consciousness, but the only definition you have given seems to be "IQ plus self-awareness". Every animal I have encountered is self-aware. The extreme seems to be narcissism, which is obviously undesirable.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/02/2020 04:51:20
All the quotes seem to suggest is that if you select people with a relevant test, they will perform better than average or those that fail the test. But the key is relevance. A blind man with an IQ of 130 probably won't make a good pilot. Bench pressing 100 kilos is quite a feat, but a footballer needs quite different feet. 

So you assert that we should select lawmakers on the grounds of consciousness, but the only definition you have given seems to be "IQ plus self-awareness". Every animal I have encountered is self-aware. The extreme seems to be narcissism, which is obviously undesirable.
An unaided blind man has reduced awareness compared to otherwise normal men. Advanced technology can provide ways to compensate for the handicap, or even confer an advantage, such as infrared, ultraviolet, and radar vision unavailable to the unaided normal human. An average man aided by a powerful AI directly connected to his brain might easily beat the smartest people in many tasks requiring high intelligence.
That's why I mentioned the need to consider the distinction between effective and potential levels of consciousness.
If you are inside an autonomous vehicle, you would prefer that the system controlling it be proven and reliable, with the ability to create an accurate model of the reality around it, and with a preference for keeping you safe on the way to your destination rather than running you into a building or off a cliff.
If you selectively dismantle some of the aspects that make up general consciousness, it's unsurprising that you get undesired results. In the case of narcissism, the agent has an inaccurate model of reality, which significantly reduces its measure of general consciousness.
Quote
Empirical studies
Within the field of psychology, there are two main branches of research into narcissism: (1) clinical and (2) social psychology.

These two approaches differ in their view of narcissism, with the former treating it as a disorder, thus as discrete, and the latter treating it as a personality trait, thus as a continuum. These two strands of research tend loosely to stand in a divergent relation to one another, although they converge in places.

Campbell and Foster (2007)[23] review the literature on narcissism. They argue that narcissists possess the following "basic ingredients":

Positive: Narcissists think they are better than others.[26]
Inflated: Narcissists' views tend to be contrary to reality. In measures that compare self-report to objective measures, narcissists' self-views tend to be greatly exaggerated.[27]
Agentic: Narcissists' views tend to be most exaggerated in the agentic domain, relative to the communion domain.[clarification needed][26][27]
Special: Narcissists perceive themselves to be unique and special people.[28]
Selfish: Research upon narcissists' behaviour in resource dilemmas supports the case for narcissists as being selfish.[29]
Oriented toward success: Narcissists are oriented towards success by being, for example, approach oriented.[clarification needed][30]
https://en.wikipedia.org/wiki/Narcissism#Empirical_studies

The measure of an agent's general consciousness is its effectiveness in achieving long-term goals. Many means can be used: increasing input resolution, adding sensing methods, increasing memory capacity and data-processing speed, having a self-error-correcting mechanism, influencing other agents to help the cause, manipulating the environment, etc. Since the measure will contain a lot of uncertainty, the result will be statistical in nature rather than deterministic.
So the key parameter for consciousness is the accuracy of the agent's internal model in representing the parts of objective reality that have a significant impact on the achievement of the agent's long-term goals.
The result of an agent's general-consciousness assessment is not used to justify rights or privileges for that agent, but to select an appropriate set of moral rules which it can follow effectively and efficiently to achieve the desired long-term results. Simply put, with great power comes great responsibility.
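A minimal sketch of how such a statistical accuracy measure might be computed; the prediction interface and scoring rule are assumptions for illustration, not a validated metric.
Code:
import statistics

# Hypothetical score: an agent's "model accuracy" as a function of the
# mean squared error between its predictions about reality and the
# observed outcomes, averaged over repeated trials (hence statistical,
# not deterministic).
def model_accuracy(predictions, observations):
    errors = [(p - o) ** 2 for p, o in zip(predictions, observations)]
    return 1.0 / (1.0 + statistics.mean(errors))  # 1.0 means a perfect model

print(round(model_accuracy([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), 3))  # 0.98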
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 19/02/2020 11:32:46
In the case of narcissism, the agent has an inaccurate model of reality, which significantly reduces its measure of general consciousness.
Sadly, Donald Trump has a more accurate model of reality and grasp of the controls than his morally superior opponents. It's much easier to manipulate the machinery of politics and the gullibility of the electorate if you really understand what you are doing, in the current context. He's not the first or the last self-centered demagogue to succeed in politics, even if he loses money in business.

Like Putin, his long-term goal is life presidency. He has a slight problem with the constitution preventing that, but the intermediate goal of re-election is clearly beyond doubt, and a constitutional amendment only requires the majority he already has in the Senate.

Quote
Positive: Narcissists think they are better than others.
Speculation. What we know is that they act as though they are better than others.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/02/2020 02:52:55
I think we can all agree that a good moral rule is a useful one. But a follow-up question naturally comes up: useful according to whom?

Depends on context. The planet, Society, British society, Yorkshiremen, family and friends, family only, or oneself? Or how about some Good Samaritan altruism? As long as you don't invoke any deities, the answer is usually fairly straightforward since the consequences of any action tend to diminish with distance from the source.

When you wrote "the planet", can I assume that you meant the collective conscious agents living on it? As far as I know, planets are not conscious agents. They don't have an internal model of objective reality representing themselves in their environment. They don't have preferences either. We can't say whether the earth prefers its current condition over the Hadean period. Jupiter didn't seem to mind being hit by the Shoemaker-Levy 9 comet fragments.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/02/2020 10:36:25
There are two fundamental dangers to psychological and social stability, which moral rules are supposed to protect against. They are religious fundamentalism on the right, and moral relativism and nihilism on the left.
The former usually takes the form of being convinced by false premises and avoiding error correction.
The latter is usually related to radical scepticism. https://en.wikipedia.org/wiki/Radical_skepticism
Quote
Radical skepticism or radical scepticism is the philosophical position that knowledge is most likely impossible.[1] Radical skeptics hold that doubt exists as to the veracity of every belief and that certainty is therefore never justified. To determine the extent to which it is possible to respond to radical skeptical challenges is the task of epistemology or "the theory of knowledge".[2]

Several Ancient Greek philosophers, including Plato, Cratylus, Carneades, Arcesilaus, Aenesidemus, Pyrrho, and Sextus Empiricus have been viewed as having expounded theories of radical skepticism.

In modern philosophy, two representatives of radical skepticism are Michel de Montaigne (most famously known for his skeptical remark, Que sçay-je ?, 'What do I know?' in Middle French; modern French Que sais-je ?) and David Hume (particularly as set out in A Treatise of Human Nature, Book 1: "Of the Understanding").

As radical skepticism can be used as an objection for most or all beliefs, many philosophers have attempted to refute it. For example, Bertrand Russell wrote “Skepticism, while logically impeccable, is psychologically impossible, and there is an element of frivolous insincerity in any philosophy which pretends to accept it.”
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/02/2020 11:50:56
When you wrote "the planet", can I assume that you meant the collective conscious agents living on it? As far as I know, planets are not conscious agents. They don't have an internal model of objective reality representing themselves in their environment. They don't have preferences either. We can't say whether the earth prefers its current condition over the Hadean period. Jupiter didn't seem to mind being hit by the Shoemaker-Levy 9 comet fragments.
Reward and punishment as tools to enforce moral rules can only be applied to conscious agents, especially those with clear preferences. Otherwise, we need other ways to make an agent behave well.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 21/02/2020 21:41:27
When you wrote "the planet", can I assume that you meant the collective conscious agents living on it?
Of course not! Since we haven't come up with a useful definition of consciousness, I couldn't possibly mean that! The planet is the physical context in which we act.   

The "two common dangers" are actually one - philosophy. Like alcohol, it can be amusing in small doses but utterly destructive if you let it rule your life. Religion/relativism, whisky/beer, just different flavors, same poison.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 21/02/2020 21:47:37
Otherwise, we need other ways to make an agent behave well.
The characteristic of many animals, especially humans, is their realisation that you can usually achieve more by collaboration than by competition. Thus we appreciate a sort of long-term integrated reward and most of us value that above immediate self-gratification. We use punishment and reward to bring into line those who don't.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 22/02/2020 21:49:26
When you wrote "the planet", can I assume that you meant the collective conscious agents living on it?
Of course not! Since we haven't come up with a useful definition of consciousness, I couldn't possibly mean that! The planet is the physical context in which we act.   

The "two common dangers" are actually one - philosophy. Like alcohol, it can be amusing in small doses but utterly destructive if you let it rule your life. Religion/relativism, whisky/beer, just different flavors, same poison.
If you don't want to call what I described previously consciousness, that's fine. You can call it extended consciousness instead. I've explained why consciousness can be useful in setting moral rules only if it is extended from the clinical sense. A baby can be fully conscious clinically, but we can't expect it to follow moral rules intended for adults.
Religious fundamentalism commits a false-positive type of error: it accepts hypotheses that turn out to be false. Moral relativism commits a false-negative type of error: it rejects all hypotheses, including the correct ones.
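The trade-off between the two error types can be illustrated with a toy threshold model; the evidence scores and threshold values below are made-up numbers, not data.
Code:
# Illustrative only: lowering the evidence threshold for accepting
# hypotheses trades false negatives for false positives, and raising
# it does the opposite.
cases = [(0.9, True), (0.2, False), (0.7, False), (0.4, True)]  # (evidence, actually true)

for threshold in (0.1, 0.5, 0.95):
    false_pos = sum(e >= threshold and not t for e, t in cases)
    false_neg = sum(e < threshold and t for e, t in cases)
    print(f"threshold {threshold}: {false_pos} false positives, {false_neg} false negatives")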
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 23/02/2020 01:08:33
Otherwise, we need other ways to make an agent behave well.
The characteristic of many animals, especially humans, is their realisation that you can usually achieve more by collaboration than by competition. Thus we appreciate a sort of long-term integrated reward and most of us value that above immediate self-gratification. We use punishment and reward to bring into line those who don't.
How can we punish the earth for creating earthquakes that kill millions directly and indirectly? Or asteroids for hitting the earth?

There is a balance between collaboration and competition, due to the economic law of diminishing marginal utility. When resources or opportunities are plentiful, collaboration is preferred. But when there are too many conscious agents or resources are scarce, competition is preferred.
Technological advancement can increase the amount of available resources. But as long as resources are finite, we must keep the reproduction rate under control. Otherwise there will be too much redundancy, which is a suboptimal situation.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 23/02/2020 09:56:39
You have put your finger on the weakness of most religions. Nature is completely indifferent to the fate of living things. Thus morality can only function in the limited context of whatever species finds it useful.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 24/02/2020 06:31:26
Religious fundamentalism commits a false-positive type of error: it accepts hypotheses that turn out to be false. Moral relativism commits a false-negative type of error: it rejects all hypotheses, including the correct ones.
The follow-up question would be: is it possible to determine whether something is true or false? And how?
Descartes identified that some scepticism is necessary to get to the truth, but there must be a limit to it, otherwise no knowledge can be produced.
Quote
(English:) Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; And because some men err in reasoning, and fall into Paralogisms, even on the simplest matters of Geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for Demonstrations; And finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams. But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be something; And as I observed that this truth, I think, therefore I am,[e] was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the Sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search.[h]
https://en.wikipedia.org/wiki/Cogito,_ergo_sum
Quote
This proposition became a fundamental element of Western philosophy, as it purported to form a secure foundation for knowledge in the face of radical doubt. While other knowledge could be a figment of imagination, deception, or mistake, Descartes asserted that the very act of doubting one's own existence served—at minimum—as proof of the reality of one's own mind; there must be a thinking entity—in this case the self—for there to be a thought.
Quote
While we thus reject all of which we can entertain the smallest doubt, and even imagine that it is false, we easily indeed suppose that there is neither God, nor sky, nor bodies, and that we ourselves even have neither hands nor feet, nor, finally, a body; but we cannot in the same way suppose that we are not while we doubt of the truth of these things; for there is a repugnance in conceiving that what thinks does not exist at the very time when it thinks. Accordingly, the knowledge,[m] I think, therefore I am,[e] is the first and most certain that occurs to one who philosophizes orderly.
Quote
That we cannot doubt of our existence while we doubt, and that this is the first knowledge we acquire when we philosophize in order.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/02/2020 02:52:49
Quote
The Search for Truth
Descartes, in a lesser-known posthumously published work dated as written ca. 1647[13] and titled La Recherche de la Vérité par La Lumiere Naturale (The Search for Truth by Natural Light),[14] wrote:

(Latin:) … Sentio, oportere, ut quid dubitatio, quid cogitatio, quid exsistentia sit antè sciamus, quàm de veritate hujus ratiocinii : dubito, ergo sum, vel, quod idem est, cogito, ergo sum[e] : plane simus persuasi.

(English:) … [I feel that] it is necessary to know what doubt is, and what thought is, [what existence is], before we can be fully persuaded of this reasoning — I doubt, therefore I am — or what is the same — I think, therefore I am.[p]
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#The_Search_for_Truth
So let's start analyzing this reasoning with the definitions of existence, thinking, and doubting.
The dictionary says:
Quote
existence
/ɪɡˈzɪst(ə)ns,ɛɡˈzɪst(ə)ns/
noun
the fact or state of living or having objective reality.
Quote
think
/θɪŋk/
verb
1.
have a particular belief or idea.
Quote
verb
verb: doubt; 3rd person present: doubts; past tense: doubted; past participle: doubted; gerund or present participle: doubting
1.
feel uncertain about.
"I doubt my ability to do the job"

question the truth or fact of (something).
"who can doubt the value and necessity of these services?"
Synonyms: think something unlikely, have (one's) doubts about, question, query, be dubious, lack conviction, have reservations about

disbelieve or lack faith in (someone).
"I have no reason to doubt him"
Synonyms: disbelieve, distrust, mistrust, suspect, lack confidence in, have doubts about, be suspicious of, have suspicions about, have misgivings about, feel uneasy about, feel apprehensive about, call into question, query, question, challenge, dispute, have reservations about

feel uncertain, especially about one's religious beliefs.
Synonyms: be undecided, have doubts, be irresolute, be hesitant
Here is my summary of Descartes' idea. To search for the truth, we need the ability to doubt. To doubt something, we must have an internal model meant to represent objective reality, and we must realize that the two do not always agree. To think about objective reality, the thinker must have an internal model meant to represent it. And to possess an internal model representing objective reality, the thinker must exist in objective reality.
So basically, this idea ends up relying on the anthropic principle which I mentioned earlier in this thread. The same principle is also the basis for a universal moral standard, which is the subject of this thread.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/02/2020 03:40:08
For any true statement, there are infinitely many alternatives that are false.
Since the existence of the thinker is the only thing that can't be doubted, it must be defended at all costs.
Finally we get to the last question: how. There are some basic strategies for preserving information which I borrow from the IT business (a rough numerical sketch is given at the end of this post):
Choose robust media.
Create multilayer protection.
Create backups.
Create diversity to avoid common-mode failures.

The existence of a thinker is subject to natural selection.
Thinkers who have backups tend to survive better than those who don't.
Thinkers who reproduce backups to replace destroyed copies tend to survive better; otherwise, all of the copies will eventually break down.
Thinkers who actively protect their copies tend to survive better than those who don't.
Thinkers who produce versions of themselves that are better at survival tend to survive better than those who don't.
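Here is the rough numerical sketch of why backups and diversity help; the failure probabilities are assumed numbers, not measurements.
Code:
# With n independent copies, each failing with probability p, the chance
# of losing every copy is p**n. A common-mode failure can still destroy
# all copies at once, which is why diversity matters too.
p = 0.1              # assumed per-copy failure probability
common_mode = 0.01   # assumed probability that one event destroys all copies

for n in (1, 2, 3, 5):
    independent_loss = p ** n
    total_loss = common_mode + (1 - common_mode) * independent_loss
    print(f"{n} copies: loss {total_loss:.6f} (vs {independent_loss:.6f} without common-mode risk)")
Adding copies drives the independent term toward zero, but the total risk is then dominated by the common-mode term, which only diversity can reduce.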
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 25/02/2020 06:24:38
Since the existence of the thinker is the only thing that can't be doubted, it must be defended at all cost.
Cogito ergo sum is just one of an infinite number of possible axioms. It's not a strong foundation.

Best to avoid philosophy and stick to science.   Scientific knowledge is the residue of disprovable hypotheses that have not been disproved. That's all there is. "Common" knowledge is the bunch of hypotheses, rules of thumb and tabulated data that we have found adequate for everyday use.

None of which has anything to do with morality. We obviously can't act in contradiction  to the laws of physics, but morality is about how we should act within those constraints.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/02/2020 08:23:17
Cogito ergo sum is just one of an infinite number of possible axioms. It's not a strong foundation.
Descartes demonstrated by reductio ad absurdum that if a thinker rejects its own existence, it leads to a contradiction.
Quote
At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one's belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived.

But I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I, too, do not exist? No. If I convinced myself of something [or thought anything at all], then I certainly existed. But there is a deceiver of supreme power and cunning who deliberately and constantly deceives me. In that case, I, too, undoubtedly exist, if he deceives me; and let him deceive me as much as he can, he will never bring it about that I am nothing, so long as I think that I am something. So, after considering everything very thoroughly, I must finally conclude that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT VII 25; CSM II 16–17[v])

There are three important notes to keep in mind here. First, he claims only the certainty of his own existence from the first-person point of view — he has not proved the existence of other minds at this point. This is something that has to be thought through by each of us for ourselves, as we follow the course of the meditations. Second, he does not say that his existence is necessary; he says that if he thinks, then necessarily he exists (see the instantiation principle). Third, this proposition "I am, I exist" is held true not based on a deduction (as mentioned above) or on empirical induction but on the clarity and self-evidence of the proposition. Descartes does not use this first certainty, the cogito, as a foundation upon which to build further knowledge; rather, it is the firm ground upon which he can stand as he works to discover further truths.[35] As he puts it:

Archimedes used to demand just one firm and immovable point in order to shift the entire earth; so I too can hope for great things if I manage to find just one thing, however slight, that is certain and unshakable. (AT VII 24; CSM II 16)
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#Interpretation
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/02/2020 08:44:04
Best to avoid philosophy and stick to science.   Scientific knowledge is the residue of disprovable hypotheses that have not been disproved. That's all there is. "Common" knowledge is the bunch of hypotheses, rules of thumb and tabulated data that we have found adequate for everyday use.

None of which has anything to do with morality. We obviously can't act in contradiction  to the laws of physics, but morality is about how we should act within those constraints.
Why so? Scientific experiments can be costly, while available resources are finite. We must prioritize which ones to do first. That's where philosophy comes into play.
How do you determine which acts should be done and which shouldn't, morally speaking?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 25/02/2020 11:19:25
Do unto others as you would have them do unto you. Simples!

If another does unto me as I would not like, an eye for an eye is just retribution.

As for the cost of scientific experiments, I think it was Harold Wilson who said "if you think education is expensive, try ignorance". Most scientific investigation derives from product failure, so the budget is set according to how many lives it might save to know what went wrong

"Blue sky" research has its own justification. Ronald Reagan asked, at the Lawrence Livermore laboratory, how their work contributed to the defence of the nation. The response was "It is what makes the nation worth defending." Some curiosity-driven medical research is justified on a risk/benefit ratio: if it does little harm but might lead to a big reward  in areas we haven't considered, let's investigate. Other non-failure research falls into the category of public art: we fly to the moon or launch orbital telescopes principally out of public interest.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 25/02/2020 11:52:28
Do unto others as you would have them do unto you. Simples!

If another does unto me as I would not like, an eye for an eye is just retribution.
Is this rule applicable universally, regardless of personality, gender, race, ideology, nationality, or species?
How can this rule help solve moral problems such as the trolley problem?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 25/02/2020 12:02:40
The moral imperative is universal as long as you accept the "eye for an eye" part. Ideology is philosophy and therefore is at best irrelevant and at worst poisonous. Species has some limitation as all animals have to eat things that were formerly alive, but AFAIK all "normal" humans prefer a clean kill, except for oysters.

The trolley problem isn't a moral issue. It's one of statistics.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 26/02/2020 03:44:03
The moral imperative is universal as long as you accept the "eye for an eye" part. Ideology is philosophy and therefore is at best irrelevant and at worst poisonous. Species has some limitation as all animals have to eat things that were formerly alive, but AFAIK all "normal" humans prefer a clean kill, except for oysters.

The trolley problem isn't a moral issue. It's one of statistics.
I can see that you use a very narrow definition of morality; thus many problems most people regard as moral issues are not covered.
The golden rule has limitations when dealing with asymmetrical relationships, such as parents to kids, humans to animals, or the able-bodied to the disabled.
"An eye for an eye" is even narrower, since it only deals with negative behavior: it speaks about what shouldn't be done, while saying nothing about what should be done.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 26/02/2020 03:51:18
Why so? Scientific experiments can be costly, while available resources are finite. We must prioritize which ones to do first. That's where philosophy comes into play.
As for the cost of scientific experiments, I think it was Harold Wilson who said "if you think education is expensive, try ignorance". Most scientific investigation derives from product failure, so the budget is set according to how many lives it might save to know what went wrong

"Blue sky" research has its own justification. Ronald Reagan asked, at the Lawrence Livermore laboratory, how their work contributed to the defence of the nation. The response was "It is what makes the nation worth defending." Some curiosity-driven medical research is justified on a risk/benefit ratio: if it does little harm but might lead to a big reward  in areas we haven't considered, let's investigate. Other non-failure research falls into the category of public art: we fly to the moon or launch orbital telescopes principally out of public interest.
What portion of the US annual budget is dedicated to scientific experiments? Why can't it be 100%?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 26/02/2020 10:44:27
Because, as Lincoln pointed out, a country consists of a defensible border, and the irreducible function of government is to raise enough taxes to pay the army that defends it. The secondary functions like enforcing rights and prosecuting wrongs take up a fair bit of the budget, and it is generally preferable to hand out welfare payments rather than have the unemployed steal food. Then there's the cost of the greater glorification of the Fuhrer: whilst the Queen travels in a Range Rover or whatever aircraft the military has available (literally - if the Royal Flight is on operations, they charter Jim Smith's Air Taxi or join a BA scheduled flight) , El Presidente Trump is so unpopular that he needs a motorcade of 20 armoured Lincolns and umpteen motorbikes to go shopping. Next come the banks: crooks who are too big to fail, so must get their bonuses when there is nobody left to cheat.   Whatever is left, can be spent on science, arts, or general bribery and chicanery.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 26/02/2020 10:46:20
I can see that you use a very narrow definition of morality; thus many problems most people regard as moral issues are not covered.

Can you provide an example?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 27/02/2020 02:30:04
I can see that you use a very narrow definition of morality; thus many problems most people regard as moral issues are not covered.

Can you provide an example?

The trolley problem.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 27/02/2020 03:00:51
Because, as Lincoln pointed out, a country consists of a defensible border, and the irreducible function of government is to raise enough taxes to pay the army that defends it. The secondary functions like enforcing rights and prosecuting wrongs take up a fair bit of the budget, and it is generally preferable to hand out welfare payments rather than have the unemployed steal food. Then there's the cost of the greater glorification of the Fuhrer: whilst the Queen travels in a Range Rover or whatever aircraft the military has available (literally - if the Royal Flight is on operations, they charter Jim Smith's Air Taxi or join a BA scheduled flight) , El Presidente Trump is so unpopular that he needs a motorcade of 20 armoured Lincolns and umpteen motorbikes to go shopping. Next come the banks: crooks who are too big to fail, so must get their bonuses when there is nobody left to cheat.   Whatever is left, can be spent on science, arts, or general bribery and chicanery.
The resources are divided in some way so as to best preserve the existence of the conscious system, according to the current system's knowledge and understanding. If someday they are convinced that there is a better way to spend their resources to achieve their ultimate goal, due to improved knowledge or a change in their environment, they will change the budgetary structure and composition.
Preserving some myth to make many people work together systematically has its own benefits, as was pointed out by Yuval Noah Harari in his book "Sapiens". But if the myth has been debunked and is no longer believed by the members of the organization, they should invent a new myth or story which is more believable. Otherwise, there is a risk of revolt, or at least dissent among the organization's members, and the system won't work effectively anymore. That's where cogito ergo sum comes into play: it provides a fundamental starting point that is certain and unshakable.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 27/02/2020 09:16:30
The trolley problem.
What's the moral question? You can do something or nothing. Doing something will result in one death, doing nothing will result in five deaths. One is less than five. Failing to act can be considered negligent or even complicit.

Such decisions have to be made from time to time. A classic was the sacrifice of the Calais garrison to delay the German advance towards Dunkirk in 1940. Fortunately the Allies were commanded by soldiers, who are paid to find solutions, not philosophers, who are paid to invent problems.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 27/02/2020 09:21:49
spend their resources to achieve their ultimate goal
The ultimate goal of a politician is to be re-elected. This is achieved by judicious spending of other people's money, spouting meaningless slogans, and licking the arse of whoever can bring you the most votes.

Astute demagogues (Hitler, Thatcher, Blair, Trump) have no interest in promoting cooperative behaviour. Defending the electorate from "the enemy within" (Jews, coalminers...), or inventing a new external enemy (Argentinians, Iraqis, Mexicans...) can be a vote winner. The trick, of course, is to choose an enemy you can defeat.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 28/02/2020 01:15:45
The trolley problem.
What's the moral question? You can do something or nothing. Doing something will result in one death, doing nothing will result in five deaths. One is less than five. Failing to act can be considered negligent or even complicit.

Such decisions have to be made from time to time. A classic was the sacrifice of the Calais garrison to delay the German advance towards Dunkirk in 1940. Fortunately the Allies were commanded by soldiers, who are paid to find solutions, not philosophers, who are paid to invent problems.
Survey results show that slight modifications to the original trolley problem make many people switch their decisions. It means that people in the survey have different priorities or knowledge about the problem. For moral relativists, it would make no difference which decision you took, even if it were made solely by coin toss. But for the rest of us, there should be some basic principles for judging whether an action is moral or not.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 28/02/2020 01:21:51
The moral imperative is universal as long as you accept the "eye for an eye" part. Ideology is philosophy and therefore is at best irrelevant and at worst poisonous. Species has some limitation as all animals have to eat things that were formerly alive, but AFAIK all "normal" humans prefer a clean kill, except for oysters.

The trolley problem isn't a moral issue. It's one of statistics.
I can see that you use a very narrow definition of morality; thus many problems most people regard as moral issues are not covered.
The golden rule has limitations when dealing with asymmetrical relationships, such as parents to kids, humans to animals, or the able-bodied to the disabled.
"An eye for an eye" is even narrower, since it only deals with negative behavior: it speaks about what shouldn't be done, while saying nothing about what should be done.

Here is an example where an eye for an eye doesn't work as moral guidance.
An old man rapes his own little kid many times over a period of ten years.

Here is another one.
A man borrows some money and uses it for gambling. He dies before paying the debt.

A man kills his neighbor's dog for being noisy.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 28/02/2020 02:55:37
spend their resources to achieve their ultimate goal
The ultimate goal of a politician is to be re-elected. This is achieved by judicious spending of other people's money, spouting meaningless slogans, and licking the arse of whoever can bring you the most votes.

Astute demagogues (Hitler, Thatcher, Blair, Trump) have no interest in promoting cooperative behaviour. Defending the electorate from "the enemy within" (Jews, coalminers...), or inventing a new external enemy (Argentinians, Iraqis, Mexicans...) can be a vote winner. The trick, of course, is to choose an enemy you can defeat.
That can only happen in a democratic society. Moreover, what would they do once re-elected? Could they just rest in peace? If not, then re-election can't be their actual ultimate/terminal goal.
Deception to gain political power only works if the constituents are gullible enough to believe it. They can systematically dumb down their people, but that would bring unwanted consequences in the long term.
Title: Re: Is there a universal moral standard?
Post by: Europan Ocean on 28/02/2020 09:49:33
There is the professional empathy test.
Sikhs developed their morals from the commonalities of a number of religions.
Title: Re: Is there a universal moral standard?
Post by: Europan Ocean on 28/02/2020 09:51:05
Coming up from the south to the US are people who would generally choose to vote Democrat. And drug cartels and human traffickers are among them.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 28/02/2020 15:34:36
So no drug dealer or pimp would vote Republican. Why not?  Surely these are the very people who favour private enterprise and low taxes? Or are they hoping for state-funded addiction and prostitution in the Land of the Free?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 28/02/2020 15:42:48
Here is an example where an eye for an eye doesn't work as moral guidance. An old man rapes his own little kid many times over a period of ten years.
Let the punishment fit the crime. There has never been a problem recruiting a public hangman.

 
Quote
Here is another one. A man borrows some money and uses it for gambling. He dies before paying the debt.

Here's another old Jewish saying (and it was good enough for Spock to use in the second TV series) "Fool me once, shame on you. Fool me twice, shame on me." Never lend without security. Unless, of course, you are a bank that is "too big to fail", in which case the taxpayer will pay your bonus. 

Quote
A man kills his neighbor's dog for being noisy.
Wrong, of course. He should have spent a fortune getting a court order to have the dog destroyed by a professional. How else can lawyers make a living?

Cynical? Moi? 
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 28/02/2020 16:03:39
That can only happen in a democratic society. Moreover, what would they do once re-elected? Could they just rest in peace? If not, then re-election can't be their actual ultimate/terminal goal.
"All political careers end in failure" (Churchill). Or death (Calverd). It's a bit like skiing - you proceed to ever more difficult and dangerous runs until you break something. But what a ride!

Quote
Deception to gain political power only works if the constituents are gullible enough to believe it.
Never underestimate the gullibility of the electorate. "Make America Great" my arse. WTF does that actually mean?  Destroy the social fabric, support mass murder, pardon criminals, and put ignorant prejudiced scum on the Supreme Court bench. It's a vote winner!
Quote
They can systematically dumb down their people, but that would bring unwanted consequences in the long term.
  In Thatcher's case, dementia. In Blair's case, loadsamoney. The Nazi high command enjoyed feasts and adulation up to the point where the Red Army were literally breaking the door down. Cologne, Dresden, Hamburg...just show up and say something defiant over the smouldering ruins, and das volk will cheer as always. 
Title: Re: Is there a universal moral standard?
Post by: Europan Ocean on 01/03/2020 11:45:57
So no drug dealer or pimp would vote Republican. Why not?  Surely these are the very people who favour private enterprise and low taxes? Or are they hoping for state-funded addiction and prostitution in the Land of the Free?
No, either party prosecutes such crimes. The Democrats for some reason gain the votes of Mexican immigrants. Controlled immigration can be like gerrymandering.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 01/03/2020 21:46:06
So the stuff about criminals in your reply #388 above was irrelevant.

If UK politicians are anything to go by, those on the left are usually corrupted by money, those on the right by sex. So a few mixed criminals in any immigrant group won't have much net effect on politics. 
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/03/2020 05:52:37
Utilitarians tried to build a moral system based on pain and pleasure, although in application they need to build some flexibility into the definitions to conform to currently accepted norms. The golden rule is offered as a rule of thumb, but its application often needs compromises, especially when an asymmetrical relationship or a difference in personal preferences is involved.
A universal moral standard should cover all moral cases without exception, at least in principle. Its implementation is only limited by the laws of physics and the knowledge of the conscious beings implementing it. It should be able to unambiguously answer moral problems such as the many variations of the trolley problem, as long as the cause-and-effect relationships in each case can be clearly defined. Other moral problems, such as genome editing, should also be answerable without much trouble.
Quote
Human Nature lays out these tantalizing possibilities alongside some even more far-out applications, like Crispr-ing pigs to grow human organs. Then viewers spend time with Steven Hsu, the chief scientific officer at Genomic Prediction, a company that generates genetic scorecards for prospective parents’ IVF embryos. Hsu believes that using Crispr to create children free of disease will one day be routine, and that parents who leave their genetic recombination up to chance will be the ones deemed unethical by societies of the future.
https://www.wired.com/story/crisprs-origin-story-comes-to-life-in-a-new-documentary/
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/03/2020 05:57:52
those on the left are usually corrupted by money, those on the right by sex.
If impeached presidents are a useful indicator, then the US would be a different story.
Title: Re: Is there a universal moral standard?
Post by: evan_au on 12/03/2020 08:59:29
Quote from: hamdani yusuf
How can this rule help to solve moral problems such as the trolley problem?
99.9% of the morality of the trolley problem is resolved by:
- An annual thorough inspection of the brakes, lights, windscreen wipers, etc...
- A several-times daily check of the engine, brakes etc when each new driver starts his/her shift.
- Keeping to the speed limit appropriate for the conditions
- Reporting any brake problems as soon as they occur, rather than waiting until there are 6 people tied to the tracks.

Quote
The concept of IQ has been around for more than a century
EQ or "Emotional Quotient" has been around for much less time, but it relates to an emotional connection with people, rather than an intellectual connection.

Whether a leader can be emotionally connected to millions of people is an open question...
See: https://en.wikipedia.org/wiki/Emotional_intelligence
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/03/2020 10:12:50
99.9% of the morality of the trolley problem is resolved by:
- An annual thorough inspection of the brakes, lights, windscreen wipers, etc...
- A several-times daily check of the engine, brakes etc when each new driver starts his/her shift.
- Keeping to the speed limit appropriate for the conditions
- Reporting any brake problems as soon as they occur, rather than waiting until there are 6 people tied to the tracks.
I don't know how you came up with that number. It seems you only considered the original version of the trolley problem, where the situation arises accidentally. But there are variations where it is deliberately set up by villains, as in superhero movies.
I already mentioned that there are many variations of the trolley problem, from minor changes in detail to almost entirely different setups, such as the transplant problem. The changes could be simply the number of persons on each track, a personal relationship with some persons on the track, knowledge of the personality of some people on the track, the necessity of actively sacrificing one person to save the many, etc. Other variations replace some of the persons with pets or something else valuable.
The core question is how to set a proper order of priority between different options when all of them have negative/undesired impacts. In other words, it eventually asks for a function to be optimized by some algorithm based on some moral standard. A more general problem would also include options with positive/desired impacts. A toy sketch of this framing follows below.
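Here is a hypothetical sketch of that framing (the scoring weights below are invented purely for illustration and are not taken from any moral theory discussed in this thread):
Code:
# Hypothetical sketch: rank the options of a trolley-style dilemma with a
# numeric utility function. All weights are invented placeholders; a real
# moral standard would have to justify them.

def utility(outcome):
    # Negative impacts count against an option, positive ones in its favor.
    return (-10.0 * outcome["deaths"]
            - 1.0 * outcome["active_harm"]  # extra penalty for actively killing
            + 0.5 * outcome["lives_saved"])

options = {
    "do nothing":     {"deaths": 5, "active_harm": 0, "lives_saved": 0},
    "pull the lever": {"deaths": 1, "active_harm": 1, "lives_saved": 5},
}

for name, outcome in options.items():
    print(f"{name}: utility = {utility(outcome):.1f}")
print("chosen:", max(options, key=lambda name: utility(options[name])))

Changing the weights changes the verdict, which is exactly the open question: a universal standard would have to fix them in a non-arbitrary way.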
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 12/03/2020 11:06:31
Important not to confuse moral and emotional issues. The moral issue is (or should be) what would be judged in a court of one's peers or by the Man on the Clapham Omnibus. Sadly, allowing third party "impact statements" in court has I think diluted the legitimacy of the process except in cases where the perpetrator clearly intended to inflict suffering on the third party: guilt and punishment should not depend on the fluency of friends and relatives.

Evan has made a valid point. Negligence is a moral issue.   
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 12/03/2020 11:11:39
those on the left are usually corrupted by money, those on the right by sex.
If impeached presidents are a useful indicator, then the US would be a different story.
The US is weird in many ways, beginning with pinning the wrong colors on their political parties and ending up with electing a drooling idiot as president, despite his coming second in the popular vote.  Impeachment for sexual shenanigans is quite absurd: any modern French or British politician  would say "so what?" as long as there was no compromise of national security.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/03/2020 04:25:33
EQ or "Emotional Quotient" has been around for much less time, but it relates to an emotional connection with people, rather than an intellectual connection.

Whether a leader can be emotionally connected to millions of people is an open question...
See: https://en.wikipedia.org/wiki/Emotional_intelligence

Quote
The Oxford Dictionary definition of emotion is "A strong feeling deriving from one's circumstances, mood, or relationships with others."[22] Emotions are responses to significant internal and external events.[23]

Emotions can be occurrences (e.g., panic) or dispositions (e.g., hostility), and short-lived (e.g., anger) or long-lived (e.g., grief).[24] Psychotherapist Michael C. Graham describes all emotions as existing on a continuum of intensity.[25] Thus fear might range from mild concern to terror or shame might range from simple embarrassment to toxic shame.[26] Emotions have been described as consisting of a coordinated set of responses, which may include verbal, physiological, behavioral, and neural mechanisms.[27]

Emotions have been categorized, with some relationships existing between emotions and some direct opposites existing. Graham differentiates emotions as functional or dysfunctional and argues all functional emotions have benefits.[28]

In some uses of the word, emotions are intense feelings that are directed at someone or something.[29] On the other hand, emotion can be used to refer to states that are mild (as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line of research looks at the meaning of the word emotion in everyday language and finds that this usage is rather different from that in academic discourse.[30]

In practical terms, Joseph LeDoux has defined emotions as the result of a cognitive and conscious process which occurs in response to a body system response to a trigger.[31]

https://en.wikipedia.org/wiki/Emotion#Definitions

IMO, emotion emerged as a product of evolution due to the advantage it brings by speeding up responses to particular situations. Minute details in a situation may change very rapidly, but the outline of the situation usually persists longer. An example I can think of from science fiction films is the battle mode and caring mode of Baymax in Big Hero 6. In battle mode, even a slight movement can be interpreted as telegraphing a punch, which might provoke a fierce response.
The same stimulus may get a very different response when it occurs in a different emotional state. The correct application of emotion is useful for an organism's survival, such as in a fight-or-flight situation.
A simpler version of mode switching can be seen in the sonar usage of bats, which emit pulses more frequently when charging at prey than in normal flight. The costs and benefits of those modes determine when to activate them, although the switching most likely happens instinctively. It just happens that bats with the correct instinct are more likely to survive and thrive than those without it.
Instinctive switching of modes/emotional states can be overridden by a higher level of consciousness through reason, understanding of cause and effect, and preference for different results. Once again it can be shown that emotion is useful as a tool or instrumental goal which in many situations can help the achievement of the terminal goal. But in more complex situations it can backfire and give unwanted results instead.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 13/03/2020 08:01:34
While the concept of intelligence is meant to represent problem-solving capability, the concept of consciousness includes the ability to determine which problems to solve first.
It is generally assumed that, given the same amount of information/knowledge, people with higher intelligence are more likely to solve problems, and to solve them more quickly, than those with lower intelligence. So some knowledge and wisdom are excluded from the measurement of intelligence. We can get a high score on an IQ test without knowing about Maxwell's equations or the history of the USA. Our physical prowess doesn't seem to matter either.
On the other hand, extended consciousness takes all of those (or the lack of them) into account, as long as they significantly affect the ability of conscious agents to achieve their goals.
Quote
A disability is any condition that makes it more difficult for a person to do certain activities or interact with the world around them. These conditions, or impairments, may be cognitive, developmental, intellectual, mental, physical, sensory, or a combination of multiple factors. Impairments causing disability may be present from birth or occur during a person's lifetime.
https://en.wikipedia.org/wiki/Disability
Gullibility or lack of critical thinking would significantly reduce the measured consciousness level of agents, although it may be insignificant to the measure of their intelligence. This often happens to people who were discouraged from questioning authority in the early stages of their lives.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 15/03/2020 16:48:38
While the concept of intelligence is meant to represent problem-solving capability, the concept of consciousness includes the ability to determine which problems to solve first.

Brain-implanted rats and human addicts will solve the problem of getting the next fix rather than getting the next meal. This behaviour may seem illogical to you, but if you use it to determine consciousness, you are applying your arbitrary values to another entity in a different environment, so it's subjective. Think about a parent who knowingly sacrifices himself to save a child: same outcome (self destruction) for the same stimulus (feeling good). 
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/03/2020 11:05:40
Brain-implanted rats and human addicts will solve the problem of getting the next fix rather than getting the next meal. This behaviour may seem illogical to you, but if you use it to determine consciousness, you are applying your arbitrary values to another entity in a different environment, so it's subjective. Think about a parent who knowingly sacrifices himself to save a child: same outcome (self destruction) for the same stimulus (feeling good).
The difference is the outcome in the long run. The sacrifice of parents is compensated by the survival of children, who inherit most of the parents' characteristics, probably with some improvements, along with the accumulated knowledge of the society. Without adequate compensation, self-destruction is always bad behavior.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/03/2020 12:17:16
The compensation may be relief of chronic pain, emotional suffering, or obvious looming disaster. Or simply to give a lifetime's accumulated wealth to one's children instead of wasting it on terminal "care". I fully intend to take my own life rather than suffer pain and indignity.  Who authorised any of the aforementioned old perverts to judge "adequate"?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/03/2020 09:04:26
The compensation may be relief of chronic pain, emotional suffering, or obvious looming disaster. Or simply to give a lifetime's accumulated wealth to one's children instead of wasting it on terminal "care". I fully intend to take my own life rather than suffer pain and indignity.
With adequate knowledge, we should be able to kill pain without unintended side effects.

Quote
Who authorised any of the aforementioned old perverts to judge "adequate"?
The conscious agents who are still alive in the future, just like we judge actions of people from previous generations.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 17/03/2020 11:27:30
Authorisation is an a priori activity. The law forbids assisted suicide, and within my lifetime it was even an offence to attempt to take one's own life. This disgusting legislation seems to stem from the religious beliefs of perverts who think that suffering is in some way a Good Thing. Fine, if they want to suffer, but they have no moral authority to impose their revolting ideas on others.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/03/2020 02:48:45
Laws have changed from time to time. In the past, there were even laws that made genocide an obligation.
https://en.wikipedia.org/wiki/Saul#Rejection
Quote
Several years after Saul’s victory against the Philistines at Michmash Pass, Samuel instructs Saul to make war on the Amalekites and to "utterly destroy" them,[14] in fulfilment of a mandate set out Deuteronomy 25:19:

When the Lord your God has given you rest from all your enemies on every hand, in the land that the Lord your God is giving you as an inheritance to possess, you shall blot out the remembrance of Amalek from under heaven; do not forget.
Having forewarned the Kenites who were living among the Amalekites to leave, Saul goes to war and defeats the Amalekites. Saul kills all the men, women, children and poor quality livestock, but leaves alive the king and best livestock. When Samuel learns that Saul has not obeyed his instructions in full, he informs Saul that God has rejected him as king due to his disobedience. As Samuel turns to go, Saul seizes hold of his garments and tears off a piece; Samuel prophesies that the kingdom will likewise be torn from Saul. Samuel then kills the Amalekite king himself. Samuel and Saul each return home and never meet again after these events (1 Samuel 15:33-35).
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 18/03/2020 10:13:19
Now there's a problem! Some laws are made by politicians or perverts for their own aggrandisement, some for the sake of social cohesion, and some as an emergency provision. The case you quote suggests personal aggrandisement: the war was over and the prophecy was to "blot out the remembrance", i.e. to re-educate, not eradicate, the population.

Not much evidence of an acceptable moral standard in the statute books, nor the bible, I fear.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 16/04/2020 09:07:59
Now there's a problem! Some laws are made by politicians or perverts for their own aggrandisement, some for the sake of social cohesion, and some as an emergency provision. The case you quote suggests personal aggrandisement: the war was over and the prophecy was to "blot out the remembrance", i.e. to re-educate, not eradicate, the population.

Not much evidence of an acceptable moral standard in the statute books, nor the bible, I fear.
Here is the more complete quote.
Quote
Several years after Saul’s victory against the Philistines at Michmash Pass, Samuel instructs Saul to make war on the Amalekites and to "utterly destroy" them,[14] in fulfilment of a mandate set out Deuteronomy 25:19:

When the Lord your God has given you rest from all your enemies on every hand, in the land that the Lord your God is giving you as an inheritance to possess, you shall blot out the remembrance of Amalek from under heaven; do not forget.
I don't know how you translate that into re-education. Let's scrutinize this.
Quote
Saul kills all the men, women, children and poor quality livestock, but leaves alive the king and best livestock. When Samuel learns that Saul has not obeyed his instructions in full
So, the instructions are to kill all the men (including the king), women, children and livestock (either poor or best quality). Saul did kill all the men (except the king), women, children and livestock (except the best quality), thus Saul has not obeyed his instructions in full.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 16/04/2020 12:03:42
Quote
you shall blot out the remembrance of Amalek from under heaven;
I don't know how you translate that into re-education.

Worth studying post-1945 Japanese and German laws and history books to see how it can be done under relatively benign occupation. There are for instance several historic German aircraft that fly regularly in films and airshows in the UK and USA with their original markings, but have to be repainted without the swastika to fly in German airspace. Also worth noting how  revolutionaries like to destroy the statues of former dictators, and today's students take umbrage at images of their colleges' colonialist founders and benefactors. Evacuating Dunkirk was a remarkable achievement, but was it really a victory? And what was WWI all about?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 17/04/2020 11:12:24
I'll just put these memes here to remind us of the current situation due to the pandemic.
Quote
The trolley problem demonstrates just how dire the coronavirus pandemic is becoming — with a touch of surrealist humor, of course.
https://mashable.com/article/trolley-problem-coronavirus-meme/

(https://pbs.twimg.com/media/ET4zE5XWsAIr98S?format=png&name=small)

(https://pbs.twimg.com/media/ET4kG75WsAorZVU?format=jpg&name=small)

(https://pbs.twimg.com/media/ET42ftBVAAAvAli?format=jpg&name=small)

(https://pbs.twimg.com/media/ET4_ErAU0AAjVbv?format=jpg&name=small)

(https://pbs.twimg.com/media/ET082FjWkAA9h6-?format=jpg&name=small)


Title: Re: Is there a universal moral standard?
Post by: alancalverd on 17/04/2020 12:17:35
And Presidential Executive Orders are made to optimise the shareholdings of the President, as in any other aspiring banana republic.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 27/04/2020 09:47:16
Pushing the guy in front of the trolley
https://statmodeling.stat.columbia.edu/2019/05/23/41068/
Quote
So. I was reading the London Review of Books the other day and came across this passage by the philosopher Kieran Setiya:

Some of the most striking discoveries of experimental philosophers concern the extent of our own personal inconsistencies . . . how we respond to the trolley problem is affected by the details of the version we are presented with. It also depends on what we have been doing just before being presented with the case. After five minutes of watching Saturday Night Live, Americans are three times more likely to agree with the Tibetan monks that it is permissible to push someone in front of a speeding train carriage in order to save five. . . .

I’m not up on this literature, but I was suspicious. Watching a TV show for 5 minutes can change your view so strongly?? I was reminded of the claim from a few years ago, that subliminal smiley faces had huge effects on attitudes toward immigration—it turns out the data showed no such thing. And I was bothered, because it seemed that a possibly false fact was being used as part of a larger argument about philosophy. The concept of “experimental philosophy”—that’s interesting, but only if the experiments make sense.
Quote
And, just to be clear, I agree that there’s nothing special about an SNL video or for that matter about a video at all. My concern about the replication studies is more of a selection issue: if a new study doesn’t replicate the original claim, then a defender can say it’s not a real replication. I guess we could call that “the no true replication fallacy”! Kinda like those notorious examples where people claimed that a failed replication didn’t count because it was done in a different country, or the stimulus was done for a different length of time, or the outdoor temperature was different.

The real question is, what did they find and how do these findings relate to the larger claim?

And the answer is, it’s complicated.

First, the two new studies only look at the footbridge scenario (where the decision is whether to push the fat man), not the flip-the-switch-on-the-trolley scenario, which is not so productive to study because most people are already willing to flip the switch. So the new studies do not allow comparison of the two scenarios. (Strohminger et al. used 12 high conflict moral dilemmas; see here)

Second, the two new studies looked at interactions rather than main effects.
The trolley problem and its variations, used as tools to find moral principles, have benefits as well as limitations. At least they give some sense of practicality by placing us in possible real-world situations which require us to make moral decisions, instead of just imagining abstractions to weigh which moral principles should be prioritized over others. But they also introduce uncertainty about the cause-and-effect relationships of the available actions in some people's minds. Some people try to find a third option to break the dilemma.
Instead of making things clear to help us make firm decisions, the variations may have added more complexity, as shown in this comic.
https://existentialcomics.com/comic/106
(https://static.existentialcomics.com/comics/trolleyMadness1.png)
(https://static.existentialcomics.com/comics/trolleyMadness2.png)
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 27/04/2020 10:42:54
The article below discusses ethics from a more practical point of view: algorithms for self-driving cars.
https://psmag.com/economics/is-the-trolley-problem-derailing-the-ethics-of-self-driving-cars
Quote
"The Trolley Problem"—as the above situation and its related variations are called—is a mainstay of introductory ethics courses, where it is often used to demonstrate the differences between utilitarian and Kantian moral reasoning. Utilitarianism (also called consequentialism) judges the moral correctness of an action based solely on its outcome. A utilitarian should switch the tracks. Just do the math: One dead is better than five, in terms of outcomes. Kantian, or rule-based, ethics relies on a set of moral principles that must be followed in all situations, regardless of outcome. A Kantian might not be able to justify switching the track if, say, their moral principles hold actively killing someone to be worse than being a bystander to death.
Quote
The rise of autonomous vehicles has given the thought experiment a renewed urgency. If a self-driving car has to choose between crashing into two different people—or two different groups of people—how should it decide which to kill, and which to spare? What value system are we coding into our machines?

These questions about autonomous vehicles have, for years, been haunting journalists and academics. Last month, the Massachusetts Institute of Technology released the results of its "Moral Machine," an online survey of two million people across 200 countries, demonstrating their preferences for, well, who they'd prefer a self-driving car to kill. Should a car try to hit jaywalkers, rather than people following the rules for crossing? Senior citizens rather than younger people? People in better social standing than those less well-regarded?

Quote
One concern I have is with regard to how the moral machine project has been publicized is that, for ethicists, looking at what other cultures think about different ethical questions is interesting, but [that work] is not ethics. It might cause people to think that all that ethics is is just about surveying different groups and seeing what their values are, and then those values are the right ones. I'm concerned about moral relativism, which is already very troubling with our world, and this may be playing with that. In ethics, there's a right and there's a wrong, and this might confuse people about what ethics is. We don't call people up and then survey them.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 27/04/2020 12:56:39
The trolley problem is fairly typical of "problems" in philosophy. It's actually a hyperproblem because its statement is deliberately incomplete and evolves in response to each proposed solution.
We have to make decisions every day based on incomplete information, but in real life you rarely get the next bit of information in time to change your mind, which is why the question asked in a court of enquiry is always prefaced with "Given what you knew at the time, why did you....." Or to quote Sully "That is excellent flying. Now, knowing exactly what was going to happen, how many times did you simulate the emergency approach before you got it right?" "Thirteen". Roll credits.

The selfdriving "moral machine" is nonsense. Page One of the highway code says, in effect, "never drive faster than you can stop in the distance you can see". Being wholly unemotional, never distracted and never tired, the selfdriving car is fully aware of its stopping distance so never has to make a choice. If a pedestrian chooses to run into the road in less than the stopping distance, that's his problem, not the car's.
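In numbers (a back-of-envelope sketch; the 0.1 s machine reaction time and 7 m/s² braking deceleration are assumed values, not figures from the highway code):
Code:
# Back-of-envelope stopping distance: reaction distance plus braking
# distance v^2 / (2a). Reaction time and deceleration are assumed values.

def stopping_distance(speed_kmh, reaction_s=0.1, decel_ms2=7.0):
    v = speed_kmh / 3.6  # km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

for speed in (30, 50, 100):
    print(f"{speed} km/h -> {stopping_distance(speed):.1f} m")

A car that can see 60 m ahead and drives at 50 km/h (about 15 m to stop) never faces the dilemma at all.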
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 29/04/2020 08:46:40
Cogito ergo sum is just one of an infinite number of possible axioms. It's not a strong foundation.
Descartes demonstrated, by reductio ad absurdum, that if a thinker rejects his own existence, it leads to contradiction.
Quote
At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one's belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived.

But I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I, too, do not exist? No. If I convinced myself of something [or thought anything at all], then I certainly existed. But there is a deceiver of supreme power and cunning who deliberately and constantly deceives me. In that case, I, too, undoubtedly exist, if he deceives me; and let him deceive me as much as he can, he will never bring it about that I am nothing, so long as I think that I am something. So, after considering everything very thoroughly, I must finally conclude that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT VII 25; CSM II 16–17[v])

There are three important notes to keep in mind here. First, he claims only the certainty of his own existence from the first-person point of view — he has not proved the existence of other minds at this point. This is something that has to be thought through by each of us for ourselves, as we follow the course of the meditations. Second, he does not say that his existence is necessary; he says that if he thinks, then necessarily he exists (see the instantiation principle). Third, this proposition "I am, I exist" is held true not based on a deduction (as mentioned above) or on empirical induction but on the clarity and self-evidence of the proposition. Descartes does not use this first certainty, the cogito, as a foundation upon which to build further knowledge; rather, it is the firm ground upon which he can stand as he works to discover further truths.[35] As he puts it:

Archimedes used to demand just one firm and immovable point in order to shift the entire earth; so I too can hope for great things if I manage to find just one thing, however slight, that is certain and unshakable. (AT VII 24; CSM II 16)
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#Interpretation

The cogito ergo sum provides subjective certainty as a starting point. To get to objective certainty, we need to collect and assemble more information and knowledge to build an accurate and precise model of objective reality.

The video is titled "The Self - A Thought Experiment".
Spoiler: show
An omniscient conscious being doesn't have subjectivity.

Quote
Professor Patrick Stokes of Deakin University gives a thought experiment from Thomas Nagel. This comes from a talk given at the Ethics Centre from an episode of the podcast The Philosopher's Zone.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 29/04/2020 09:38:28
Typical philosopher's problem. Based on a dangerously faulty premise! The list of everything in the universe must include the list itself, but the existence of the list is itself a fact that must now be added to the list, so we must add the fact that we have added a fact to the list.....

But a philosopher would set that aside, allowing an infinitely expanding  list (on the basis that cogito ergo sum applies also to lists). Now look yourself up in the list. You are doing something that isn't already on the list, so we have to add that to the description of you, ad infinitum... The problem becomes one of mathematics: you can't define "you" on the basis of that particular model. It's an inherently crap model because it imposes divergency on any proposed solution.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 29/04/2020 10:42:29
"The Trolley Problem"—as the above situation and its related variations are called—is a mainstay of introductory ethics courses, where it is often used to demonstrate the differences between utilitarian and Kantian moral reasoning. Utilitarianism (also called consequentialism) judges the moral correctness of an action based solely on its outcome. A utilitarian should switch the tracks. Just do the math: One dead is better than five, in terms of outcomes. Kantian, or rule-based, ethics relies on a set of moral principles that must be followed in all situations, regardless of outcome. A Kantian might not be able to justify switching the track if, say, their moral principles hold actively killing someone to be worse than being a bystander to death.
I wonder what a Kantian would think if the 6 people on the track are equally valuable to him, e.g. all of them are his own twin kids. Will he let 5 of them die for his principle?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 29/04/2020 10:58:44
Typical philosopher's problem. Based on a dangerously faulty premise! The list of everything in the universe must include the list itself, but the existence of the list is itself a fact that must now be added to the list, so we must add the fact that we have added a fact to the list.....

But a philosopher would set that aside, allowing an infinitely expanding  list (on the basis that cogito ergo sum applies also to lists). Now look yourself up in the list. You are doing something that isn't already on the list, so we have to add that to the description of you, ad infinitum... The problem becomes one of mathematics: you can't define "you" on the basis of that particular model. It's an inherently crap model because it imposes divergency on any proposed solution.

In real life, we always consider practicality. Not all data have equal significance to the end result. Some may cancel each other out. Data may also contain some form of redundancy, which we can exploit, as in data compression.
(https://wikimedia.org/api/rest_v1/media/math/render/svg/247535cef4b9b94eabeb16908cf72436cd01d0c9)
This continued fraction can be used as an illustration.

In many situations we don't need infinite precision. We can often make good decisions with finite information.
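As a minimal sketch (the particular fraction rendered in the image above is not reproduced here, so this assumes the well-known continued fraction of pi), truncating a continued fraction after just a few terms already gives a very precise value:
Code:
from fractions import Fraction

def convergent(terms):
    # Evaluate the finite continued fraction [a0; a1, a2, ...] exactly.
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# Leading terms of the continued fraction of pi: [3; 7, 15, 1, 292, ...]
pi_terms = [3, 7, 15, 1, 292]
for n in range(1, len(pi_terms) + 1):
    approx = convergent(pi_terms[:n])
    print(n, approx, float(approx))

Four terms already give 355/113, which matches pi to six decimal places: finite information, but good enough for most decisions.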
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 29/04/2020 11:28:11
Like any other rules, moral rules are made to serve some purpose. For example, game rules are set to make the game more interesting for most people, so the game will keep being played. That's why we get things like the handball and offside rules in football, or castling and en passant in chess.
Likewise for moral rules. I conclude that their purpose is to preserve the existence of consciousness in objective reality. Due to incomplete information and limited resources to perform actions, we need to deal with probability theory. Something is morally good if it can be demonstrated to increase the probability of preserving consciousness, and bad if it can be demonstrated to decrease that probability. Without adequate support, we can't decide whether something is morally good or bad.
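As a rough sketch of how such a judgement could be computed (every number below is an invented placeholder, not a measurement), an action's moral score could be the estimated change in the probability that consciousness is preserved:
Code:
# Rough sketch of the proposed standard: score an action by its estimated
# effect on the probability that consciousness is preserved. The numbers
# are invented placeholders for illustration only.

def moral_score(p_preserved_if_done, p_preserved_if_not_done):
    # Positive = morally good, negative = bad, near zero = undecidable.
    return p_preserved_if_done - p_preserved_if_not_done

actions = {
    "divert the trolley": moral_score(0.83, 0.80),
    "do nothing":         moral_score(0.80, 0.80),
}
for name, score in actions.items():
    verdict = "good" if score > 0 else ("bad" if score < 0 else "indeterminate")
    print(f"{name}: {score:+.2f} -> {verdict}")

The hard part, of course, is estimating those probabilities from incomplete information, which is why probability theory enters the picture.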
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 29/04/2020 14:59:32
In many situations we don't need infinite precision. We can often make good decisions with finite information.
Precision isn't the problem. It's the more fundamental issue of the properties of a set which is a member of itself - maths, not philosophy or morals!
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 29/04/2020 15:13:48
I conclude that their purpose is to preserve the existence of consciousness in objective reality.
I bet you can't define any of those words!

How about moral rules being the lubricant of society?

An ideal hermit doesn't need any rules of behaviour towards others, but the moment you introduce a second person into a finite universe you have introduced the possibility of damaging conflict if each pursues an entirely selfcentered existence. Some rules of behaviour are therefore necessary if both are to prosper and collaborate (collaboration generally produces greater prosperity). If you generalise from obviously pragmatic limits (I won't kill you because that will reduce the manpower available for hunting) towards a hypothetical society (murder is bad) you are building a moral code.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 04/05/2020 11:05:55
I bet you can't define any of those words!
We can look up the definition of each word in a dictionary. Some words have different meanings according to context, and the meanings of words may change over time as languages evolve.
I usually use the words according to the definitions found in the dictionary. I have also explained why the definition of consciousness in the narrow clinical context is inadequate for building an argument about morality, and how we can extend it to make it useful here. If you think there are better words to represent what I mean, please tell me.
Without delving too deeply into the definition of morality or ethics, I think we can usefully approach the subject through "universal". The test is whether any person considered normal by his peers, would make the same choice or judgement as any other in a case requiring subjective evaluation.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 04/05/2020 22:41:04
their purpose is to preserve the existence of consciousness in objective reality.
I can't think of better words to represent what you mean because I have no idea what you mean!

When Moses wrote "thou shalt not commit adultery" I rather think he was trying to preserve peace among a tribe. Maybe he knew what consciousness is, and could distinguish between objective reality and some less desirable environment for it, but I don't see these words, or any substitute for them, in Exodus!   
Title: Re: Is there a universal moral standard?
Post by: evan_au on 04/05/2020 23:49:48
Quote from: hamdani yusuf
for moral rules... I conclude that their purpose is to preserve the existence of consciousness in objective reality.
Many species have been observed to have rules of moral behavior that work for them.

But we can't easily define consciousness in humans, let alone define what it means for other species (even familiar ones like the domesticated dog).
- Of course, the anthropocentric chauvinists default to "consciousness is unique to humans..."
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 05/05/2020 05:26:12
How about moral rules being the lubricant of society?
Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy, and avoiding suicidal behavior.

What's so moral about mass suicide anyway? Here is an example of a lubricant of society gone wrong.
Quote
Mass murder in Jonestown

Houses in Jonestown, Guyana, the year after the mass murder-suicide, 1979
Later that same day, 909 inhabitants of Jonestown,[94] 304 of them children, died of apparent cyanide poisoning, mostly in and around the settlement's main pavilion.[95] This resulted in the greatest single loss of American civilian life (murder + suicide, though not on American soil) in a deliberate act until the September 11 attacks.[96] The FBI later recovered a 45-minute audio recording of the suicide in progress.[97]

On that tape, Jones tells Temple members that the Soviet Union, with whom the Temple had been negotiating a potential exodus for months, would not take them after the airstrip murders. The reason given by Jones to commit suicide was consistent with his previously stated conspiracy theories of intelligence organizations allegedly conspiring against the Temple, that men would "parachute in here on us," "shoot some of our innocent babies" and "they'll torture our children, they'll torture some of our people here, they'll torture our seniors." Jones's prior statements that hostile forces would convert captured children to fascism would lead many members who held strong opposing views to fascism to view the suicide as valid. [98]

With that reasoning, Jones and several members argued that the group should commit "revolutionary suicide" by drinking cyanide-laced grape-flavored Flavor Aid. Later-released Temple films show Jones opening a storage container full of Kool-Aid in large quantities. However, empty packets of grape Flavor Aid found on the scene show that this is what was used to mix the solution, along with a sedative. One member, Christine Miller, dissents toward the beginning of the tape.[98]

When members apparently cried, Jones counseled, "Stop these hysterics. This is not the way for people who are socialists or communists to die. No way for us to die. We must die with some dignity." Jones can be heard saying, "Don't be afraid to die," that death is "just stepping over into another plane" and that it's "a friend." At the end of the tape, Jones concludes: "We didn't commit suicide; we committed an act of revolutionary suicide protesting the conditions of an inhumane world."[98]

According to escaping Temple members, children were given the drink first by their own parents; families were told to lie down together.[99] Mass suicide had been previously discussed in simulated events called "White Nights" on a regular basis.[83][100] During at least one such prior White Night, members drank liquid that Jones falsely told them was poison.[83][100]
https://en.wikipedia.org/wiki/Jim_Jones#Mass_murder_in_Jonestown

Further questions could be raised: what kind of society are we talking about? Is it restricted to human society? Can the scope be extended to other animals, such as a raft of penguins? Can it be extended further to unicellular organisms? Can it be extended even further to inanimate objects?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 05/05/2020 06:13:26
In many situations we don't need infinite precision. We can often make good decisions with finite information.
Precision isn't the problem. It's the more fundamental issue of the properties of a set which is a member of itself - maths, not philosophy or morals!
I wasn't talking about a set which is a member of itself. I posted the video to show that the more information we have, the more objective we can become.
The cogito ergo sum provide subjective certainty as a starting point. To get to objective certainty, we need to collect and assemble more information and knowledge to build an accurate and precise model of objective reality.
To overcome subjectivity, our model of objective reality doesn't need to contain complete information about itself. It only needs to contain a representation of itself. A Windows desktop is a commonly seen example.
(https://www.thewindowsclub.com/wp-content/uploads/2013/10/this-pc-windows-8-1.jpg)

I've also described this in another thread.
Progress in building better AI, and eventually AGI, will get us closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
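To make the gradient descent mentioned in the quote concrete, here is a toy one-weight example of my own (not code from the quoted article): instead of recombining every weight with every other, the algorithm follows the slope of the error downhill.
Code:
# Toy illustration of gradient descent: fit y = w * x to data by following
# the gradient of the mean squared error, updating the weight step by step
# rather than searching all weight combinations.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0    # start from a blank slate
lr = 0.02  # learning rate

for step in range(200):
    # d/dw of (w*x - y)^2 is 2*(w*x - y)*x, averaged over the data
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(f"learned w = {w:.3f}")  # converges near 2.0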

Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 05/05/2020 06:42:21
Quote from: hamdani yusuf
for moral rules... I conclude that their purpose is to preserve the existence of consciousness in objective reality.
Many species have been observed to have rules of moral behavior that work for them.

But we can't easily define consciousness in humans, let alone define what it means for other species (even familiar ones like the domesticated dog).
- Of course, the anthropocentric chauvinists default to "consciousness is unique to humans..."

We can start with a narrow and simple definition of consciousness which is widely accepted, such as the one used in the clinical context. Immediately we will realize that it is too narrow to be useful for determining moral rules. We clearly need to extend it, as I've shown in previous posts here https://www.thenakedscientists.com/forum/index.php?topic=75380.msg591376#msg591376
https://www.thenakedscientists.com/forum/index.php?topic=75380.msg592256#msg592256
Humans are the only currently known extant biological entities with the ability to create artificial consciousness. Ray Kurzweil has argued that consciousness doesn't even have to be biological.
Quote
Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is a competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment—there is no end to the list of consumer-benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobics could plausibly argue "hither but not further."
—NICK BOSTROM, “HOW LONG BEFORE SUPERINTELLIGENCE?” 1997

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful gameplaying, relating to each other, experiencing, personal growth, and to living closer to our ideals.
—NICK BOSTROM, “ETHICAL ISSUES IN ADVANCED ARTIFICIAL INTELLIGENCE,” 2003

Will robots inherit the earth? Yes, but they will be our children.
—MARVIN MINSKY, 1995
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 05/05/2020 10:16:30
There have already been studies along similar lines to using consciousness level to determine morality, such as Lawrence Kohlberg's stages of moral development.
Quote
Lawrence Kohlberg's stages of moral development constitute an adaptation of a psychological theory originally conceived by the Swiss psychologist Jean Piaget. Kohlberg began work on this topic while being a psychology graduate student at the University of Chicago in 1958 and expanded upon the theory throughout his life.[1][2][3]

The theory holds that moral reasoning, a necessary (but not sufficient) condition for ethical behavior,[4] has six developmental stages, each more adequate at responding to moral dilemmas than its predecessor.[5] Kohlberg followed the development of moral judgment far beyond the ages studied earlier by Piaget, who also claimed that logic and morality develop through constructive stages.[6][5] Expanding on Piaget's work, Kohlberg determined that the process of moral development was principally concerned with justice and that it continued throughout the individual's life, a notion that led to dialogue on the philosophical implications of such research.[7][8][2]

The six stages of moral development occur in phases of pre-conventional, conventional and post-conventional morality. For his studies, Kohlberg relied on stories such as the Heinz dilemma and was interested in how individuals would justify their actions if placed in similar moral dilemmas. He analyzed the form of moral reasoning displayed, rather than its conclusion and classified it into one of six stages.[2][9][10][11]
https://en.wikipedia.org/wiki/Lawrence_Kohlberg%27s_stages_of_moral_development
Quote
Kohlberg's six stages can be more generally grouped into three levels of two stages each: pre-conventional, conventional and post-conventional.[9][10][11] Following Piaget's constructivist requirements for a stage model, as described in his theory of cognitive development, it is extremely rare to regress in stages—to lose the use of higher stage abilities.[16][17] Stages cannot be skipped; each provides a new and necessary perspective, more comprehensive and differentiated than its predecessors but integrated with them.[16][17]

Kohlberg's Model of Moral Development
Level 1 (Pre-Conventional)
1. Obedience and punishment orientation
(How can I avoid punishment?)
2. Self-interest orientation
(What's in it for me?)
(Paying for a benefit)

Level 2 (Conventional)
3. Interpersonal accord and conformity
(Social norms)
(The good boy/girl attitude)
4. Authority and social-order maintaining orientation
(Law and order morality)

Level 3 (Post-Conventional)
5. Social contract orientation
6. Universal ethical principles
(Principled conscience)

The understanding gained in each stage is retained in later stages, but may be regarded by those in later stages as simplistic, lacking in sufficient attention to detail.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 05/05/2020 10:20:54
https://en.wikipedia.org/wiki/Heinz_dilemma
The Heinz dilemma is a frequently used example in many ethics and morality classes. One well-known version of the dilemma, used in Lawrence Kohlberg's stages of moral development, is stated as follows[1]:
Quote
A woman was on her deathbed. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I'm going to make money from it.” So Heinz got desperate and broke into the man's laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?

Quote
From a theoretical point of view, it is not important what the participant thinks that Heinz should do. Kohlberg's theory holds that the justification the participant offers is what is significant, the form of their response. Below are some of many examples of possible arguments that belong to the six stages:
(https://upload.wikimedia.org/wikipedia/commons/thumb/4/4a/Kohlberg_Model_of_Moral_Development.svg/800px-Kohlberg_Model_of_Moral_Development.svg.png)
(https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=75380.0;attach=30644)
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 05/05/2020 11:40:29
Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy, and avoiding suicidal behavior.
As long as I don't burden others, I can see no wrong in getting drunk, overeating or killing myself by these or other means. Thus no first-order moral implications: the key is whether or not I burden others by my actions, which would indeed break the protective film of lubricant. In a civilised society these actions are not illegal, though they may exclude you from some aspects of a social contract through "contributory negligence".

You might compare Jonestown with Masada, where 1000 defenders committed suicide after a 2 year siege rather than be enslaved by the Romans. In the Jonestown case it was pretty clear that the defenders had committed crimes against others so the moral implications are clear, even if their personal judgement was suspended in favour of the ravings of a priest. At Masada the defenders had committed no wrong but made a strategic decision based on the known proclivities of the Romans who had been occupying the country for a couple of hundred years.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 05/05/2020 12:22:20
Heinz is interesting. Theft is clearly against the better interests of society, but so is profiteering. The moral question is therefore one of determining a just return on the provision of essentials. Again, one can turn to a civilised society (i.e. pretty well everywhere except the USA) where essential healthcare is funded by the taxpayer, or take a wholly commercial view that life is a gamble and insurance companies gamble guaranteed regular income against occasional unlimited expenditure, or assume the US posture that "Smith & Wesson beats four aces". As long as people are free to choose the society they live in, there's no moral principle at stake, and Heinz seems to be living in the USA. Every shopkeeper balances the cost of security against the cost of theft, which is why you can't buy the Crown Jewels in Tesco, and you don't need armoured glass and an armed guard over a bin of potatoes.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 06/05/2020 11:25:02
I can't think of better words to represent what you mean because I have no idea what you mean!
I recommend you read Ray Kurzweil's book The Singularity Is Near. You'll get a clear picture of what I mean there. What amazed me is that the book was already written in 2004, which shows how insightful the author is.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 06/05/2020 13:13:47
Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy, and avoiding suicidal behavior.
As long as I don't burden others, I can see no wrong in getting drunk, overeating or killing myself by these or other means. Thus no first-order moral implications: the key is whether or not I burden others by my actions, which would indeed break the protective film of lubricant. In a civilised society these actions are not illegal, though they may exclude you from some aspects of a social contract through "contributory negligence".

You might compare Jonestown with Masada, where 1000 defenders committed suicide after a 2 year siege rather than be enslaved by the Romans. In the Jonestown case it was pretty clear that the defenders had committed crimes against others so the moral implications are clear, even if their personal judgement was suspended in favour of the ravings of a priest. At Masada the defenders had committed no wrong but made a strategic decision based on the known proclivities of the Romans who had been occupying the country for a couple of hundred years.
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings who are not suicidal and who get some benefit from our death.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 06/05/2020 14:18:38
Johan (brother of Heinz) has a painful terminal illness with no hope of recovery. He has spent all his money on failed treatment and is now living on the street.

Wilhelm (their cousin) is stinking rich with no debts, has four adult children with big student loans to repay, and has the same genetic condition as Johan.

According to your ethics, W should top himself ASAP but J must stay in the gutter (and avoid being hit by a bus) until the Good Lord calls him to rest. 

I disagree.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 06/05/2020 16:56:57
There have been already studies similar to the usage of consciousness level to determine morality, such as Lawrence Kohlberg's stages of moral development.
We can find a pattern there: the more developed moral stages show more inclusiveness and longer-term goals. That is unsurprising, since they require more thinking capability.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 06/05/2020 17:09:10
Johan (brother of Heinz) has a painful terminal illness with no hope of recovery. He has spent all his money on failed treatment and is now living on the street.

Wilhelm (their cousin) is stinking rich with no debts, and has four adult children with big student loans to repay, and the same genetic condition as Joachim.

According to your ethics, W should top himself ASAP but J must stay in the gutter (and avoid being hit by a bus) until the Good Lord calls him to rest. 

I disagree.
Can you tell me the reason?
What do you mean by top himself?

My comment on suicide sets the minimum requirement, but additional terms and conditions may apply according to the situation at hand. Imagine what would happen if the minimum requirement were not met. What if you were a character in the world of The Walking Dead, not knowing of any survivor who is not suicidal?
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 06/05/2020 17:34:44
Apologies! "Top himself" is a very colloquial term for "commit suicide".   

My conclusion follows from your requirement:

IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings who are not suicidal and who get some benefit from our death.


Nobody else will benefit from J's death, but W's kids will inherit his fortune.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/05/2020 05:23:32
Nobody else will benefit from J's death, but W's kids will inherit his fortune.
The existence of any human being has its own costs and benefits to society. The loss of one's life means more resources available for the others, but it also means the loss of his/her contributions. In principle, we can calculate the balance and find out which option brings more benefit for achieving the universal goal.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 08/05/2020 08:17:52
There is no universal goal in the case of suicide. The goal is to end or avert personal suffering by the most certain and final means.

Indeed the practical problem with decriminalising assisted suicide is to ensure that nobody is coerced towards death for the benefit of others.  So here's a good moral problem: how do you distinguish between a truly voluntary Will (that includes the costs and reasonable profit of whoever assists - I've always wanted to own a comfortable suicide hostel)  and excessive pressure from potential beneficiaries? 

In my scenario J had nothing, contributed nothing, and simply lived off scraps in dustbins, so would not be permitted to kill himself by your code of ethics, whereas W's death would profit several people and could therefore be permitted or even encouraged by society.  That's all wrong, surely?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/05/2020 08:25:21
https://en.wikipedia.org/wiki/Heinz_dilemma
The Heinz dilemma is a frequently used example in many ethics and morality classes. One well-known version of the dilemma, used in Lawrence Kohlberg's stages of moral development, is stated as follows[1]:
Quote
A woman was on her deathbed. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I'm going to make money from it.” So Heinz got desperate and broke into the man's laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?

Quote
From a theoretical point of view, it is not important what the participant thinks that Heinz should do. Kohlberg's theory holds that the justification the participant offers is what is significant, the form of their response. Below are some of many examples of possible arguments that belong to the six stages:
[attached image: table of example arguments for each of Kohlberg's six stages] (https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=75380.0;attach=30644)

It's unfortunate that Kohlberg's theory doesn't help us make hard moral decisions. It doesn't say what conditions would make one option better than its alternative.
The reason why someone chooses an option is indeed important for judging whether they will be reliable when facing similar problems in the future. But not knowing which option is better in a particular situation makes the theory useless as practical guidance, and prone to abuse by someone with enough knowledge of moral theories to serve their own interests.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 08/05/2020 08:40:18
There is no universal goal in the case of suicide. The goal is to end or avert personal suffering by the most certain and final means.

Indeed the practical problem with decriminalising assisted suicide is to ensure that nobody is coerced towards death for the benefit of others.  So here's a good moral problem: how do you distinguish between a truly voluntary Will (that includes the costs and reasonable profit of whoever assists - I've always wanted to own a comfortable suicide hostel)  and excessive pressure from potential beneficiaries? 

In my scenario J had nothing, contributed nothing, and simply lived off scraps in dustbins, so would not be permitted to kill himself by your code of ethics, whereas W's death would profit several people and could therefore be permitted or even encouraged by society.  That's all wrong, surely?
I think you've misunderstood my statement. Here are the more complete sentences from my post that you've cut.
Like any other rules, moral rules are made to serve some purpose. For example, the rules of a game are set to make the game more interesting for most people, so that the game keeps being played. That's why we get things like the handball and offside rules in football, or castling and en passant in chess.
Likewise for moral rules. I conclude that their purpose is to preserve the existence of consciousness in objective reality. Due to incomplete information and limited resources for performing actions, we need to deal with probability theory. Something is morally good if it can be demonstrated to increase the probability of preserving consciousness, and bad if it can be demonstrated to decrease that probability. Without adequate support, we can't decide whether something is morally good or bad.
The consciousness in my post refers to the existence of known/verified conscious beings in the universe, not a particular subjective conscious agent. Hence, if the trend of technological advancement can be relied upon, my assertion would be:
Something is morally good if it can be demonstrated to increase the probability of the achievement of the singularity, and bad if it can be demonstrated to decrease it.
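A minimal sketch of that decision rule, assuming we could ever estimate such probabilities (the evidence margin is an arbitrary placeholder):
Code:
# Sketch of the stated rule: good if the action demonstrably raises the
# probability of preserving consciousness (or reaching the singularity),
# bad if it lowers it, undecidable without adequate evidence.
def moral_evaluation(p_with_action, p_without_action, margin=0.05):
    delta = p_with_action - p_without_action
    if delta > margin:
        return "morally good"
    if delta < -margin:
        return "morally bad"
    return "undecidable with current evidence"

# Hypothetical numbers only:
print(moral_evaluation(0.92, 0.80))  # morally good
print(moral_evaluation(0.50, 0.51))  # undecidable with current evidence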
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 08/05/2020 12:00:53
My concern was in relation to this

IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings who are not suicidal and who get some benefit from our death.

Nobody apart from J will benefit from his suicide, so you say that is wrong, but W's children might encourage W to commit suicide for their benefit, which you say is right.

I beg to differ - and so does the law!
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/05/2020 10:01:27
My concern was in relation to this

IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings who are not suicidal and who get some benefit from our death.

Nobody apart from J will benefit from his suicide, so you say that is wrong, but W's children might encourage W to commit suicide for their benefit, which you say is right.

I beg to differ - and so does the law!
Whatever J consumed to stay alive would become available to someone else. There would be less waste going into the environment.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 10/05/2020 10:24:16
J was living out of waste bins. His suicide will only benefit the population of urban foxes.

I think your moral code says that no matter how wretched, awful, unremittingly painful and pointless one's existence, suicide is only permitted if it benefits someone else. In my book, that is a disgusting attitude. Whose life is it?

Anyway, let's run with it.  The kamikaze pilot has sworn to die for the greater glory of the Emperor.  He has several choices, including defecting to the enemy, deliberately missing his target and crashing into the sea, killing a thousand enemy sailors, or even turning back to his base and wiping out the rest of the squadron. What would you do, and what would be the greater moral good? You may tackle the simpler problem of the suicide bomber if you wish.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/05/2020 11:24:15
To find the most universal moral rules, we can test candidate rules against various situations and see which ones hold up in all of them. In many ordinary situations, most common moral rules would pass. Fundamental rules must still hold in extreme cases, such as trolley problems and the Heinz dilemma. If an exception can be justified in those extreme cases, that particular rule is not universally applicable.
Here is the most extreme case I can think of: a gamma-ray burst suddenly strikes Earth, killing every known conscious being except you, who happen to be in a spaceship bound for Mars.
You are the last conscious being in the universe. Your most fundamental moral duty is to survive. You'll need to improve yourself to be better at survival. You'll need to improve your knowledge and make better tools to help you survive. You may need to modify yourself, either genetically or by merging with robotics. You may need to create backups/clones to eliminate a single point of failure. You may spread to different places and introduce diversity into the system to prevent common-mode failure.
Once you have backups, your own survival is no longer the highest priority. Backups enable altruism: it is acceptable to sacrifice yourself if doing so improves the chance that your duplicates will continue to survive.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/05/2020 11:40:25
J was living out of waste bins. His suicide will only benefit the population of urban foxes.

I think your moral code says that no matter how wretched, awful, unremittingly painful and pointless one's existence, suicide is only permitted if it benefits someone else. In my book, that is a disgusting attitude. Whose life is it?
In your case, others do get some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2, more space, fewer disease vectors, less excrement and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be the better option, especially when he himself doesn't want to live any more.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 10/05/2020 11:43:35
Anyway, let's run with it.  The kamikaze pilot has sworn to die for the greater glory of the Emperor.  He has several choices, including defecting to the enemy, deliberately missing his target and crashing into the sea, killing a thousand enemy sailors, or even turning back to his base and wiping out the rest of the squadron. What would you do, and what would be the greater moral good? You may tackle the simpler problem of the suicide bomber if you wish.
Given knowledge of what would happen in the future, the choice is obvious: he should defect to the enemy, giving them whatever information he has, to help end the war as quickly as possible.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 10/05/2020 12:26:33
In your case, others do get some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2, more space, fewer disease vectors, less excrement and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be the better option, especially when he himself doesn't want to live any more.
But that would be the case for any suicide. So it's a universally good thing to do. I think we agree.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 10/05/2020 12:34:14
Given knowledge of what would happen in the future, the choice is obvious: he should defect to the enemy, giving them whatever information he has, to help end the war as quickly as possible.

The essence of effective command is that the cannon fodder know nothing of value to the enemy. That way, prisoners become a burden rather than an asset.

Wars end when one side has won. Your solution presumes at least that the moral right is owned by the target.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/05/2020 09:19:50
In your case, others do get some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2, more space, fewer disease vectors, less excrement and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be the better option, especially when he himself doesn't want to live any more.
But that would be the case for any suicide. So it's a universally good thing to do. I think we agree.
IMO, death is a technical problem, which should be solved technically.
I don't think that suicide is a universally good thing to do. Imagine if everyone who thinks that suicide is a universally good thing to do actually committed suicide. That would leave a universe without conscious beings who think that suicide is a universally good thing to do. Thus we see the anthropic principle at play here.
https://en.wikipedia.org/wiki/Anthropic_principle
Quote
The anthropic principle is a philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it.
It should be obvious that suicidal behavior is self-defeating.
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings who are not suicidal and who get some benefit from our death.
Consider an extreme situation that I posted here.
To find the most universal moral rules, we can test candidate rules against various situations and see which ones hold up in all of them. In many ordinary situations, most common moral rules would pass. Fundamental rules must still hold in extreme cases, such as trolley problems and the Heinz dilemma. If an exception can be justified in those extreme cases, that particular rule is not universally applicable.
Here is the most extreme case I can think of: a gamma-ray burst suddenly strikes Earth, killing every known conscious being except you, who happen to be in a spaceship bound for Mars.
You are the last conscious being in the universe. Your most fundamental moral duty is to survive. You'll need to improve yourself to be better at survival. You'll need to improve your knowledge and make better tools to help you survive. You may need to modify yourself, either genetically or by merging with robotics. You may need to create backups/clones to eliminate a single point of failure. You may spread to different places and introduce diversity into the system to prevent common-mode failure.
Once you have backups, your own survival is no longer the highest priority. Backups enable altruism: it is acceptable to sacrifice yourself if doing so improves the chance that your duplicates will continue to survive.


Evaluating a moral action means analysing its costs and benefits. Losing someone's life means losing a computing resource and some actuating capability that could contribute to the achievement of a universal goal. A universal goal is more likely to be achieved by those who acknowledge it, because they can actively strive for it effectively and efficiently.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/05/2020 09:37:54
Given knowledge of what would happen in the future, the choice is obvious: he should defect to the enemy, giving them whatever information he has, to help end the war as quickly as possible.

The essence of effective command is that the cannon fodder know nothing of value to the enemy. That way, prisoners become a burden rather than an asset.

Wars end when one side has won. Your solution presumes at least that the moral right is owned by the target.

My answer above made no presumptions. It was based on known facts about what would happen up to long after the war ended.
Those pilots may not have had complete information, and so could make different decisions based on what was hidden from them. Because of this hidden information, a morally good person can make a decision that eventually gives bad results.
A doctor can give a common prescription to a patient with common symptoms of a common disease. But if the patient has a very rare condition, unknown to the doctor, that makes the prescription lethal, that doesn't make the doctor morally bad, even if the patient dies from taking it.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/05/2020 12:57:53
It should be obvious that suicidal behavior is self-defeating.
Unless your objective in life is to kill others (like a bee, a kamikaze or a suicide bomber) or to avoid an unpleasant future, in which case it can be 100% effective.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/05/2020 13:06:39
My answer above made no presumptions. It was based on known facts about what would happen up to long after the war ended.
You suggested that defection would be the morally correct decision as it would shorten the war.

The Calais garrison was ordered to fight to the last man to protect the retreat to Dunkirk. Obvious suicide. They could have surrendered or even defected to clearly superior forces, allowing the Nazis to reach Dunkirk, wipe out the Allied armies, and thus shorten WWII by about 3 years. In what way would that have been morally correct? 
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/05/2020 13:08:54
Quote
IMO, death is a technical problem, which should be solved technically.
No, it's the non-technical solution to the problem of overcrowding, mass starvation, and loss of capacity for independent survival.

Quote
You are the last conscious being in the universe. Your most fundamental moral duty is to survive.
Duty to whom?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/05/2020 21:41:15
It should be obvious that suicidal behavior is self-defeating.
Unless your objective in life is to kill others (like a bee, a kamikaze or a suicide bomber) or to avoid an unpleasant future, in which case it can be 100% effective.
Bees don't go extinct, because they commit suicide only to protect their duplicates.
To prevent an unpleasant future, we can collectively build a system that represents objective reality accurately and precisely, so that we can distribute available resources to achieve what we desire effectively and efficiently.
The fact that you're still alive to write this post is evidence that you don't really think that suicide is a universally good moral action.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/05/2020 22:41:34
My answer above made no presumptions. It was based on known facts about what would happen up to long after the war ended.
You suggested that defection would be the morally correct decision as it would shorten the war.

The Calais garrison was ordered to fight to the last man to protect the retreat to Dunkirk. Obvious suicide. They could have surrendered or even defected to clearly superior forces, allowing the Nazis to reach Dunkirk, wipe out the Allied armies, and thus shorten WWII by about 3 years. In what way would that have been morally correct? 
It depends on which side you are on. If your side's ultimate goal isn't compatible with universal moral values, you had better leave as soon as possible.
What would have happened if Hirohito hadn't surrendered?
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 11/05/2020 22:44:03
No, it's the non-technical solution to the problem of overcrowding, mass starvation, and loss of capacity for independent survival.
All of those are technical problems that could be solved technically. They stem from a lack of good planning, which prevents available resources from being distributed properly to achieve the desired result effectively and efficiently.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/05/2020 22:56:00
To prevent an unpleasant future, we can collectively build a system that represents objective reality accurately and precisely,
We already have a system that represents reality. It's  called reality. And we don't seem very good at dealing with it.
Quote
The fact that you're still alive to write this post is evidence that you don't really think that suicide is a universally good moral action.
So the fact that I'm not completely penniless is evidence that I don't think it is morally good to donate to charity, eh? Come on, mate, you can do better than that! Moral does not mean compulsory.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 11/05/2020 23:03:34
What would have happened if Hirohito hadn't surrendered?
My father, with around 400,000 others, would have invaded Japan, and after several years of war and millions more deaths one side would have imposed martial law on the other.
Title: Re: Is there a universal moral standard?
Post by: Colin2B on 11/05/2020 23:20:10
Bees don't go extinct, because they commit suicide only to protect their duplicates.
Not a good example. Bees don't expect to commit suicide; their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive. When they (rarely) sting thick-skinned mammals, e.g. humans, the sting gets lodged in the skin and, if torn out, will kill the bee.
Sometimes you will see the bee lodged in your skin; if you allow the bee to spin round, or help it by holding it by the wings, it is possible for the sting to come out and the bee to survive.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/05/2020 05:00:52
Bees don't go extinct, because they commit suicide only to protect their duplicates.
Not a good example. Bees don't expect to commit suicide; their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive. When they (rarely) sting thick-skinned mammals, e.g. humans, the sting gets lodged in the skin and, if torn out, will kill the bee.
Sometimes you will see the bee lodged in your skin; if you allow the bee to spin round, or help it by holding it by the wings, it is possible for the sting to come out and the bee to survive.
So a bee's death after stinging an enemy is an unintended consequence rather than a desired result. A better outcome for them is to repel the enemy without killing themselves.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/05/2020 05:24:39
Duty to whom?
To future conscious beings who will bring the singularity into reality.
Starting from cogito ergo sum, we can observe our surroundings and infer that we came from our parents, who in turn came from their parents, and so forth. They carried out their duty to enable our existence. This can be extrapolated into the future.
With a technological singularity, the transition to future conscious beings does not necessarily involve death. It can be done smoothly, without abruptly terminating an agent's consciousness and starting a new one from scratch. A new copy of a conscious agent can be built already fully equipped with all the knowledge necessary to explore the universe.
Title: Re: Is there a universal moral standard?
Post by: Colin2B on 12/05/2020 06:53:29
So a bee's death after stinging an enemy is an unintended consequence rather than a desired result. A better outcome for them is to repel the enemy without killing themselves.
Much better. When the hive is attacked by a predator such as a wasp, the guard bees will often use a technique called balling. A large number of them surround the wasp, forming a ball with the wasp at the centre; they then use their standard heat-generating technique of dislocating their wings and vibrating the wing muscles, much as we do when shivering. The temperature at the centre of the ball is enough to kill the wasp.

Individual bees will sting a wasp, and it is thought that the alarm pheromone released by stinging brings in other bees to attack, hence the ball. I'm not sure about this, because newly emerged queens will seek out other queens and sting them to death; they also sting unemerged queens, still in their cells, through the cell wall, and this doesn't seem to initiate balling. That said, worker bees will ball and kill spare queens.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/05/2020 09:30:00
We already have a system that represents reality. It's  called reality. And we don't seem very good at dealing with it.
What I mean is something we can use to predict the future and simulate the consequences of possible actions, so that we can choose the options that would eventually bring the desired results. For example, we have already sequenced the complete genome of the coronavirus, but vaccine testing still needs a long time. Such a system could speed up the trial-and-error process so we could get results much faster.
Quote
Research is happening at breakneck speed. About 80 groups around the world are researching vaccines and some are now entering clinical trials.

The first human trial for a vaccine was announced last month by scientists in Seattle. Unusually, they are skipping any animal research to test its safety or effectiveness
In Oxford, the first human trial in Europe has started with more than 800 recruits - half will receive the Covid-19 vaccine and the rest a control vaccine which protects against meningitis but not coronavirus
Pharmaceutical giants Sanofi and GSK have teamed up to develop a vaccine
Australian scientists have begun injecting ferrets with two potential vaccines. It is the first comprehensive pre-clinical trial involving animals, and the researchers hope to test humans by the end of April
However, no-one knows how effective any of these vaccines will be.

When will we have a coronavirus vaccine?
A vaccine would normally take years, if not decades, to develop. Researchers hope to achieve the same amount of work in only a few months.

Most experts think a vaccine is likely to become available by mid-2021, about 12-18 months after the new virus, known officially as Sars-CoV-2, first emerged.

That would be a huge scientific feat and there are no guarantees it will work.

Four coronaviruses already circulate in human beings. They cause common cold symptoms and we don't have vaccines for any of them.
https://www.bbc.com/news/health-51665497
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 12/05/2020 09:33:09
So the fact that I'm not completely penniless is evidence that I don't think it is morally good to donate to charity, eh?
It is evidence that you think there is something more important than donating to charity.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 12/05/2020 11:26:35
their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive.
Off topic, but this is something that has always bothered me.
A barbed sting is not an obvious evolution, as it does not confer any advantage on the first animal to evolve it, since it will die the first time it deploys the weapon (maybe this is relevant - see the discussion on kamikaze!). Only queens reproduce, and their barbs are much smaller - they can sting multiple times - so somehow they have evolved a creature that is significantly different from themselves, but AFAIK the difference is due to the early nutrition of the grubs. Balling is used against other insects attacking the hive, as you say, but the sting is presumably used for single-insect combat where its deployment is pyrrhic, or against the main enemy of the hive, mammals.

So a physical characteristic has evolved that benefits the society at the cost of the individual and destroys those that use it. This beats all the "chicken and egg" questions, and seriously questions the human uniqueness of consciousness, whatever that means.
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 18/05/2020 05:37:47
IMO, any action can be classified morally into three categories (see the sketch below):
- moral actions lead to desired conditions. The desired result can be achieved more reliably with better information.
- immoral actions lead to undesired conditions. The undesired result can be achieved more reliably with better information.
- amoral actions are indifferent to the resulting conditions. The reliability of the result isn't affected by any amount of information.

At a glance, these seem applicable only to consequentialist ethics, not to rule-based ethics. But that's not the case, since rule-based ethics merely elevates "obedience to some arbitrary rules" to the status of the desired condition. Those rules in turn need justification from a more fundamental principle.
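A rough sketch of this three-way classification, where the probabilities of reaching the desired condition at low and high information levels are hypothetical inputs:
Code:
# Classify an action by how better information changes the reliability
# of reaching the desired condition. The tolerance is arbitrary.
def classify_action(p_desired_low_info, p_desired_high_info, tol=0.01):
    gain = p_desired_high_info - p_desired_low_info
    if gain > tol:
        return "moral"    # better information makes the desired result more reliable
    if gain < -tol:
        return "immoral"  # better information makes the undesired result more reliable
    return "amoral"       # the result is indifferent to information

print(classify_action(0.4, 0.9))  # moral
print(classify_action(0.6, 0.2))  # immoral
print(classify_action(0.5, 0.5))  # amoral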
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 18/05/2020 08:38:53
Desired by whom? If you don't class genocide or rape as a moral action, you have led yourself into a circular argument: a moral action must be desired by a moral person, that is, a person whose actions are moral...
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 19/05/2020 11:29:18
Desired by whom? If you don't class genocide or rape as a moral action, you have led yourself into a circular argument: a moral action must be desired by a moral person, that is, a person whose actions are moral...
Desired by the conscious beings evaluating those actions, based on moral standards that they believe to be true. If they turn out to be in conflict with the universal moral standard, then they must have made one or more false assumptions.
A genetic algorithm can help us solve complex problems with many variables and incomplete information. Take several populations in which genocide or rape is believed to bring the desired result, and other populations that think otherwise, and see which populations end up closer to the universal moral standard.
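For illustration only, here is a minimal genetic-algorithm sketch in Python; the bit-string "rule sets" and the fitness function are pure placeholders for whatever would actually measure closeness to the universal moral standard:
Code:
import random

def fitness(rules):
    # Placeholder score standing in for "closeness to the universal
    # moral standard"; defining a real measure is the hard, open problem.
    return sum(rules)

def evolve(pop_size=20, genes=10, generations=50, mutation=0.05):
    # Each individual is a bit string representing a population's rule set.
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]             # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)      # two parents
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]               # single-point crossover
            child = [g ^ 1 if random.random() < mutation else g
                     for g in child]                # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())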
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 19/05/2020 17:04:32
The Nazis had a huge parliamentary majority. "Death to the infidel" is believed by millions, some of whom consider rape to be their prerogative. "Stone the Catholics" is a moral imperative for many Protestants.

You can't claim that any of these offensive groups are in conflict with the Universal Moral Standard until you have defined the UMS, so we are still in a circular argument!
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 20/05/2020 11:17:19
The Nazis had a huge parliamentary majority. "Death to the infidel" is believed by millions, some of whom consider rape to be their prerogative. "Stone the Catholics" is a moral imperative for many Protestants.

You can't claim that any of these offensive groups are in conflict with the Universal Moral Standard until you have defined the UMS, so we are still in a circular argument!
I stated it in the opening of this thread.
I consider this topic as a spinoff of my previous subject
https://www.thenakedscientists.com/forum/index.php?topic=71347.0
It is split up because morality itself is quite complex and can generate a discussion too long to be covered there. 
So, I define the universal moral standard as a moral standard that helps achieve the universal ultimate goal, which I discuss in a separate thread.
I've also opened another thread to discuss how to achieve that universal ultimate goal technically.
https://www.thenakedscientists.com/forum/index.php?topic=77747.0
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/05/2020 08:53:23
It would be useful to distinguish between moral rules and non-moral rules. A moral rule can only be obeyed by conscious beings. If a rule is obeyed by unconscious things, it can't be a moral rule: for example, the right-hand rule for determining electromagnetic force. Likewise, natural laws such as Newton's, Planck's, and Gauss's laws, and the laws of thermodynamics, can't be moral laws.
Another requirement for a moral rule is that its goal is to improve the wellbeing of conscious beings, although this may be implicit due to hidden assumptions. Rules of games don't meet this criterion, hence they are not moral rules.
Some assumptions made while setting a moral rule might be proven false, which makes the rule immoral: for example, human sacrifice to appease gods, the caste system, kamikaze attacks, etc.
Title: Re: Is there a universal moral standard?
Post by: alancalverd on 21/05/2020 13:05:09
Still circular! You have now defined a moral rule as one that is not immoral!

Samuel Johnson's definition of a net as "a reticulated assemblage of holes separated by string" was absurd but at least it was linear. 
Title: Re: Is there a universal moral standard?
Post by: hamdani yusuf on 21/05/2020 16:00:22
Still circular! You have now defined a moral rule as one that is not immoral!

Samuel Johnson's definition of a net as "a reticulated assemblag