Morality (from Latin: moralis, lit. 'manner, character, proper behavior') is the differentiation of intentions, decisions and actions between those that are distinguished as proper and those that are improper.[1] Morality can be a body of standards or principles derived from a code of conduct from a particular philosophy, religion or culture, or it can derive from a standard that a person believes should be universal.[2] Morality may also be specifically synonymous with "goodness" or "rightness".
Ethics or moral philosophy is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct.[1] The field of ethics, along with aesthetics, concerns matters of value; these fields comprise the branch of philosophy called axiology.[2]
Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy also is related to the fields of moral psychology, descriptive ethics, and value theory.
From those specific cases we may be able to infer a general rule behind the decisions made in those cases.

Probably not.
Without delving too deeply into the definition of morality or ethics, I think we can usefully approach the subject through "universal". The test is whether any person considered normal by his peers would make the same choice or judgement as any other in a case requiring subjective evaluation.

Thank you for spending your precious time to join this discussion. I realize that there are many theories on morality and ethics as described in the Wikipedia links, and many of them are incompatible with each other. So far I haven't found a general consensus among modern philosophers on this topic. Maybe that's why we can find those mutual debunking videos from YouTubers who share a similar world view and usually agree with each other on most other topics.
This immediately leads to a sampling question. "Turn the other cheek" would be considered normal and desirable in some peer groups, whilst "an eye for an eye" might be de rigueur for others. Both strategies have evolutionary validity: think rabbits, which outbreed their predators, and lions, where only the strongest male gets to breed.
Homo sapiens is an odd creature. We breed too slowly to survive as prey, and are too weak to be predators, but a very complex collaboration allows us to farm and hunt all we need. That said, although we can see the value of large-scale collaboration (like bees and ants), it takes a long time to acquire the knowledge and skills needed to participate, so the small "family" unit (including communes and kibbutzim) is a prerequisite of survival.
Thus we grow up with at least two loyalties, to the immediate family that supports us, and to the wider community that supports the family. No problem if we have infinite resources and unlimited choice, but the decisions we make in restricted circumstances are what defines our morality, and it is fairly clear from daily accounts of religious wars and magistrates' court proceedings that either there is no universal concept of right and wrong, or that it can be set aside for personal gain.
Probably not.

The example quoted by @alancalverd (eye for eye) shows the problem of trying to decide a universal ethic. While some might go for the lesser evil, Alan is likely to go for population reduction and set the trolley on the five.

We won't find out if we don't even try, do we?
We won't find out if we don't even try, do we?

That depends what you are trying to find out. Your question is asking about a universal ethic/morality, but @alancalverd shows that it doesn't exist.
That depends what you are trying to find out. Your question is asking about a universal ethic/morality, but @alancalverd shows that it doesn't exist. Perhaps you are trying to devise a methodology to determine the ethic/morality that drives a particular individual or group in specific circumstances.

I think Alan's post only shows that morality can be subjective, limited in space and time (in answering the standard questions of who, where and when), but doesn't show that it can't be collective. If some moral standards can be shown to be universally applicable, that will answer the question of the topic.
I think the oft-quoted saying "an eye for an eye" was meant to limit revenge, not to encourage it.

I think it works both ways. It can also be used to discourage the offense in the first place.
To answer the question properly we need to define the boundary of the subject. We need to answer the standard questions: what, where, when, who, why, how.

I find that in real-life experiments there is something significant that is not considered in the thought experiments: uncertainty about the assertions in the narrative of the situation. Is it true that doing nothing will cause something bad to happen? (In the experiment in the video, not really.) Is it true that our action will give us a more desired (or less undesired) result?
We can also explore the subject further using thought experiments and their variations, such as the trolley problem.
https://en.wikipedia.org/wiki/Trolley_problem
From those specific cases we may be able to infer a general rule behind the decisions made in those cases. In my opinion, the trolley problem and its variations ask us what priority is held by the decision maker, and what factors may influence it.
I found a trolley problem experiment in real life in this video:
I think Alan's post only shows that morality can be subjective, but doesn't show that it can't be collective. If some moral standards can be shown to be universally applicable, that will answer the question of the topic.
In this topic, I'm focusing on the search for similar values among different societies, because that is a requirement for something being universal. In your hypothetical case, destroying any life forms on other planets cannot be the universal moral standard, because it only applies once the lifeforms on that particular planet realize that there are other planets and that other lifeforms exist there. Until then, this moral value has no guidance function, hence it is useless as a moral standard.
...perhaps the moral standard of some planet in a galaxy far, far away might be to destroy any life form existing on any other planet in the universe (kind of like destroying potentially dangerous alien life forms).
I think you posted this before I finished editing my post about Bayesian inference that causes subjectivity in real-life judgment of moral actions. The next question is, are there residual subjective factors when Bayesian inference is excluded from the equation?
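As a rough illustration of how Bayesian inference can make moral judgments subjective, here is a minimal sketch with made-up priors and likelihoods (all numbers are assumptions, not data): two observers see the same evidence that "doing nothing will cause harm", but reach different conclusions because they start from different priors.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' rule: P(claim is true | evidence seen).
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

# Both observers see the same warning sign; only their priors differ (hypothetical numbers).
likelihood_if_true, likelihood_if_false = 0.9, 0.2
sceptic = posterior(0.1, likelihood_if_true, likelihood_if_false)    # prior 10%
believer = posterior(0.6, likelihood_if_true, likelihood_if_false)   # prior 60%
print(f"sceptic thinks the harm is real with p = {sceptic:.2f}")     # ~0.33
print(f"believer thinks the harm is real with p = {believer:.2f}")   # ~0.87

With the same evidence, one observer still judges inaction acceptable while the other does not, which is one source of the subjectivity mentioned above.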
I spend some time sitting on medical research ethics committees. The general guidance seems to boil down to whether the balance of risk and benefit has been fully evaluated and presented such that the famous "man on the Clapham omnibus" would be able to make an informed decision to participate. But in making that judgement, we are often aware that even his brother on the Brooklyn omnibus has a slightly different perspective, and we can only guess at what the average Tokyo commuter might consider acceptable.
I'll try to answer the standard questions, starting with "what". In most theories, morality can be seen as a method to distinguish between right and wrong, good and bad, proper and improper. It follows that to get to universal agreement on morality, we first need to agree on what is defined by the words right and wrong, good and bad, proper and improper. This inevitably leads us to the next question: who decides what's right and wrong, good and bad, proper and improper, and why?

I'll refine the answer to the "what" question later. Now I'll try to address the "who" question.
The questions of when and where can be answered more easily. A universal moral standard must be applicable anywhere and anytime.
Universal morality as in universally applied by people/aliens - no. Universal morality as in absolute morality - yes. There is an absolute morality, and most attempts at formulating moral rules are attempts to produce that underlying absolute morality. The reason we find so much in common between different attempts at formulating systems of moral rules is that they are all tapping into an underlying absolute morality which they are struggling to pin down precisely, but it is there.

I realize that there are already diverse moral values followed by humans on earth, even though we know that humanity is just a small portion of the universe in terms of time and space. Finding a moral standard which is applicable universally seems even more improbable.
What is absolute morality? The idea of "do unto others as you'd have them do unto you" captures most of it, but it's not quite right. "Always try your best to minimise harm (if that harm isn't cancelled out by the gains for the one who suffers it)" was one of my attempts to formulate the rule properly, and it does the job a lot better, but I'm not sure it's completely right. The correct solution is more of a method than a rule: it's to imagine that you are all the people (and indeed all the sentient beings) involved in a situation and to make yourself as happy as possible with the result of whatever action is determined to produce that maximum happiness. You must imagine that you will have to live each of their lives in turn, so if one of them kills one of the others, you will be both the killer and the one killed, but that killing will be the most moral action if it minimises your suffering and maximises your pleasure overall.
This is how intelligent machines will attempt to calculate what's moral in any situation, but they will often be incapable of accessing or crunching enough data in the time available to make ideal decisions - they can only ever do the best they can with what is available to them, playing the odds.
(This is a kind of utilitarianism. The strongest objection I've seen to utilitarianism is the Mere Addition Paradox, but there's a major mathematical fault in that paradox and anyone rational should throw it in the bin where it belongs.)
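To make the method above concrete, here is a minimal sketch of that kind of calculation: treat yourself as every affected party, sum the (entirely hypothetical) utilities for each possible action, and pick the action with the highest total. The labels and numbers are illustrative assumptions, not a definitive implementation.

# Each action maps to the utility experienced by every being involved,
# as if the decision maker had to live each of those lives in turn (made-up values).
outcomes = {
    "do nothing": {"one person on side track": +10, "five people on main track": -50},
    "pull the lever": {"one person on side track": -10, "five people on main track": +50},
}

def total_utility(per_person_utilities):
    # The decision maker is imagined to be all of the players, so sum over everyone.
    return sum(per_person_utilities.values())

best_action = max(outcomes, key=lambda action: total_utility(outcomes[action]))
print(best_action)  # "pull the lever" with these assumed numbers

The hard part, as noted above, is not the summation but obtaining trustworthy estimates for those numbers in the time available.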
But there's your problem - there is no universally applicable rule! Witness the ecstatic joy of the Hitler Jugend, and the total misery they wrought on everyone, including, eventually, themselves.

We cannot prove the nonexistence of something. But we can prove that something that is offered is absurd, paradoxical, superfluous or suboptimal for explaining some phenomena or achieving desired results.
Truth will never be decided by opinion polls.

They are merely stepping stones to get closer to the truth. They rely on the assumption that the constituents are mostly rational.
I did say the Golden Rule is faulty. That's why I came up with a better rule (the harm minimisation one) which removes the major problems with it, but I'm not sure it is perfect. What does appear to be perfect is the method of considering yourself to be all the people involved in a scenario. Let's apply it to the Trolley Problem. You are the person lying on one track which the trolley is not supposed to go down. In other lives, you are the ten people lying on another track which the trolley is scheduled to go down. In another life you are the person by the lever who has to make a decision. How many of yourself do you want to kill/save in this situation? Should you save the ten idiot versions of yourself who have lain down on a track which the trolley is scheduled to go down, or should you save the lesser idiot version of yourself who has lain down on the other track in the stupid assumption that the trolley won't go that way? It's a calculation that needs a lot of guessing unless you have access to a lot of information about the eleven people in question so that you can work out whether it's better to die ten times as mega-morons or once as a standard moron, but it's still a judgement that can be made on the basis of self-interest. All scenarios can be converted into calculations about self-interest on the basis that you are all of the players. This doesn't make the calculations easy, but it does provide a means for producing the best answer from the available information.

If we propose minimizing harm as a fundamental moral rule, we need to agree first on its definition. If it's about inflicting pain, then giving painkillers should solve the problem, which is not the case.
The Trolley Problem should never be dismissed as an academic exercise. Churchill's decision not to evacuate the Calais garrison in 1940 is a classic case of balancing the certain death of a few against the possible survival of many by delaying the German advance on Dunkirk. Imagine sending this signal:

Quote: "Every hour you continue to exist is of the greatest help to the B.E.F. Government has therefore decided you must continue to fight. Have greatest possible admiration for your splendid stand. Evacuation will not (repeat not) take place, and craft required for above purposes are to return to Dover. Verity and Windsor to cover Commander Mine-sweeping and his retirement."

I'll cover that in more detail when answering the "how" question.
To answer why keeping the existence of conscient beings is a fundamental moral rule, we can apply reductio ad absurdum to its alternative.

Alternatively, imagine that there are rules more fundamental than the preservation of conscient beings. To make sure that those rules are followed, there must exist conscient beings to follow them. That makes the preservation of conscient beings a prerequisite rule, which takes higher priority.
Imagine a rule that actively seeks to destroy conscient beings. It's basically a meme that self-destructs by destroying its own medium. Likewise, conscient beings that don't follow a rule to actively maintain their existence (or their copies) will likely be outcompeted by those who do, or be struck by random events and cease to exist.
If we propose minimizing harm as a fundamental moral rule, we need to agree first on its definition.
If it's about inflicting pain, then giving painkillers should solve the problem, which is not the case.
If it's about causing death, then death penalty and euthanasia are in direct violation.
Hence there must be a more fundamental reason why this proposed rule works in most cases, but still has some exceptions.
Hence, keeping the existence of conscient beings is one of the most fundamental moral rules, if not the most fundamental.

There seems to be some debate about which are conscient (conscious?) beings to which this moral rule applies...
Does it have any exceptions? Show me one.

Imagine a genius who wants to minimize suffering by creating a virus that makes people sterile. He prevents suffering for countless people in the next generation.
In my previous post answering the "what" question I said that there is a spectrum of consciousness. There are multiple dimensions to the level of consciousness. In data processing capability alone, there are the depth and breadth of the neural networks, as well as processing speed and data storage capacity, plus data validity/robustness and error correction capability. In the input/output system, there can be various levels of accuracy and precision. Those levels apply generally, whether or not the systems are organic/biological.

Quote from: hamdani yusuf: "Hence, keeping the existence of conscient beings is one of the most fundamental moral rules, if not the most fundamental."

There seems to be some debate about which are conscient (conscious?) beings to which this moral rule applies...
- Some apply it to just members of their own family or tribe
- Others apply it to just members of their own country or religion
- Thinking more broadly, are elephants conscious, or dolphins? How should we treat them?
- What about our pet dog or cat?
In the opening of this topic I've said that it's a spin-off from my previous post titled Universal Utopia, which showed that consciousness is a product of natural processes. The evolution of consciousness is a continuation/extension of biological evolution, which in turn is a continuation of chemical and physical evolution. There I said that creating copies is one important strategy to preserve a system's existence. It increases the chance of a system surviving random events in the environment. But it also requires more resources, which must be shared with other strategies to achieve goals effectively and efficiently.

Finally we get to the last question: how. There are some basic strategies to preserve information, which I borrow from the IT business:
Choosing robust media.
Creating multilayer protection.
Creating backups.
Creating diversity to avoid common mode failures.
Does it have any exceptions? Show me one.

Imagine a genius who wants to minimize suffering by creating a virus that makes people sterile. He prevents suffering for countless people in the next generation.
Or the virus makes people not want to have kids.
Or replace the virus with a meme.
There must be a reason why people want to reproduce, feel joy and happiness, and avoid pain, but are also willing to conserve resources, make sacrifices, be altruistic, feel empathy, eradicate unwanted things, create laws, etc. They seem to be unrelated, scattered pieces of a puzzle. Here I want to assemble them into one big picture using a universal moral standard.
That is not a genius, but a selfish bastard who wants less enjoyment for others and more for himself (because he will feel happier if they don't exist). The reality is that people overwhelmingly enjoy existing, and the minority who don't enjoy it (usually because of difficult circumstances) live in the hope of better times to come. There is no valid excuse for eliminating them. They generally want to have children and can be deeply depressed if they are unable to do so. Modifying people not to want to have children is a monumental assault unless they willingly agree to it. You cannot simply convert an immoral action into a moral one by partially killing someone (by changing them to be less than they were before). If you kill someone, they don't mind being dead once they're dead, but that's not an argument that painless murder is acceptable. Modifying people by force not to care about loss of capability is immoral in the extreme (except in extreme cases where it isn't, such as where a population needs to be reduced for environmental reasons, and even then it would need to be a case where some people need to stop breeding altogether in order to keep within sustainable limits - in such a case, you would have to do this to the people of lowest quality, and those should ideally be the ones with the lowest moral standards - there are a lot of rape-and-pillage genes which could do with eradication).
Science is a useful tool to achieve universal goals by improving the accuracy and precision of models of reality, so that conscious beings can make better plans and reduce unexpected results.
This sounds like basic animal instincts without laws or religion, both of which evolve. It may be that at some point in the future science becomes religion, and the laws protect all animals equally. This would of course involve eradicating all religious belief and accepting that all life forms are equal, and food for one another. ?????
Finally we get to the last question: how. There are some basic strategies to preserve information, which I borrow from the IT business:
Choosing robust media.
Creating multilayer protection.
Creating backups.
Creating diversity to avoid common mode failures.

Now I'll try to explain each of those strategies. For choosing robust media, biological evolution has provided brainy organisms. As far as I know, the human species is the most successful one. In conjunction with the other strategies, humans developed written language, books, and computers with various physical media such as magnetic and optical disks, as well as solid-state memories.
Let's start with the most basic version, with the following assumptions:

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:
Do nothing and allow the trolley to kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option?
It all comes down to how you handle the data to be as right as you can be for the available information. Add another fact and the answer can change - it can switch every time another piece of information is provided. Some of the information is prior knowledge of previous situations and the kinds of guesses that might be appropriate as a substitute for hard information. For example, if the only previous case involved a terrorist tying five old people to one line and a child to the other, that could affect the calculations a bit. Might it be a copycat terrorist? Was the previous case widely publicised or was it kept quiet? If the former, then the terrorist this time might have tied five children to one line and one old person to the other, hoping that the person by the lever will think, "I'm not falling for that trick - it'll be five old people and one child again, so I'll save the child," thereby leading to five children being killed.

That's right. That's why we need moral rules in the first place, and we need a moral standard that we all can agree on. And we need to educate people about that, as young as possible, to minimize the damage they could do and maximize their contribution to society.
The moral decision itself isn't hard - it's crunching the data to try to get the best outcome when there are lots of unknown factors that can make it close to random luck whether the less damaging outcome occurs, and if there's enough trickery involved, the best calculation could be guaranteed to result in the worse outcome simply because all the available data has been carefully selected to mislead the person (or machine) making the decision.
morality is a standard established by a ruling class; primarily to benefit themselves.

Your moral rule cannot be a universal standard, because it is limited in time and space. It doesn't apply when and where you don't have influence, such as before you were born, after you die, or in other countries.
i conceive of, and establish my own morality...i am a one man ruling class; and my morality benefits myself and any others i choose to protect.
all others' concepts of morality can kiss my ass.
The universal rule should concern the existence of consciousness in the eventual results, which is required by the timelessness of the rule.

Since a universal moral standard concerns long-term results, it would take a lot of factors to calculate, which might not make it practical. Bad results might come before the decision is made due to the long duration of the calculation, and the factors influencing the calculation might have changed before the calculation is complete.
In chess, the chess piece relative value system conventionally assigns a point value to each piece when assessing its relative strength in potential exchanges. These values help determine how valuable a piece is strategically. They play no formal role in the game but are useful to players and are also used in computer chess to help the computer evaluate positions.
Calculations of the value of pieces provide only a rough idea of the state of play. The exact piece values will depend on the game situation, and can differ considerably from those given here. In some positions, a well-placed piece might be much more valuable than indicated by heuristics, while a badly placed piece may be completely trapped and, thus, almost worthless.
Valuations almost always assign the value 1 point to pawns (typically as the average value of a pawn in the starting position). Computer programs often represent the values of pieces and positions in terms of 'centipawns' (cp), where 100 cp = 1 pawn, which allows strategic features of the position, worth less than a single pawn, to be evaluated without requiring fractions.
Edward Lasker said "It is difficult to compare the relative value of different pieces, as so much depends on the peculiarities of the position...". Nevertheless, he said that bishops and knights (minor pieces) were equal, rooks are worth a minor piece plus one or two pawns, and a queen is worth three minor pieces or two rooks (Lasker 1915:11).
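For reference, the conventional values mentioned above can be written out in centipawns, the unit described two paragraphs back (these are the common textbook heuristics; real engines adjust them according to the position):

# Conventional relative piece values in centipawns (100 cp = 1 pawn).
PIECE_VALUES_CP = {"pawn": 100, "knight": 300, "bishop": 300, "rook": 500, "queen": 900}

def material_balance(white_pieces, black_pieces):
    # Crude material evaluation in centipawns; positive favours White.
    score = sum(PIECE_VALUES_CP[piece] for piece in white_pieces)
    score -= sum(PIECE_VALUES_CP[piece] for piece in black_pieces)
    return score

# Example: White has won a bishop and a pawn in exchange for a rook.
print(material_balance(["bishop", "pawn"], ["rook"]))  # -100, i.e. White is down about a pawn

As the previous paragraphs warn, such a static table only gives a rough idea of the state of play; a trapped piece can be worth far less than its nominal value.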
The preference to save the child over the old people is based on the following assumptions:
1. The old people will die soon anyway, while the child still has a long life to go.
2. The social and physical environment is conducive to raising children.
3. The child can be raised well so he/she can contribute positively to society.
Again, if those assumptions can be proven false, the preference may change.
I'll show another variation of the trolley problem, where the one sacrificed for the five is a relative or romantic partner. Survey data show that respondents are much less likely to be willing to make the sacrifice.
Let's assume that there is no uncertainty about any of those assumptions. At a glance, it seems obvious that the doctor should kill that tourist and provide his healthy organs to those five dying persons and save their lives.
Since a universal moral standard concerns long-term results, it would take a lot of factors to calculate, which might not make it practical. Bad results might come before the decision is made due to the long duration of the calculation, and the factors influencing the calculation might have changed before the calculation is complete.
Hence we need to create shortcuts, rules of thumb, or a hash table to deal with frequently occurring situations. They must be reasonably easy to calculate and work in most cases.
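A minimal sketch of that "hash table" idea (the evaluation function here is only a placeholder assumption): cache the outcome of the slow, long-term calculation the first time a situation is met, so that frequently occurring situations get an instant rule-of-thumb answer afterwards.

rule_of_thumb = {}  # cache of previously evaluated situations: the shortcut table

def evaluate_slowly(situation):
    # Placeholder for the full long-term calculation, which may be too slow
    # to finish before a decision is needed.
    return "acceptable" if "saves more lives" in situation else "needs full analysis"

def moral_judgement(situation):
    # Fast path: reuse the cached answer if this case has been seen before.
    if situation not in rule_of_thumb:
        rule_of_thumb[situation] = evaluate_slowly(situation)
    return rule_of_thumb[situation]

print(moral_judgement("diverting the trolley saves more lives"))  # computed once, then cached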
Thanks for contributing to this discussion. I agree with most of your post above, so I'll try to identify where we split opinions. It's likely that we took different assumptions.

Let's assume that there is no uncertainty about any of those assumptions. At a glance, it seems obvious that the doctor should kill that tourist and provide his healthy organs to those five dying persons and save their lives.
No it doesn't - it is immediately obvious that one of the ill people can be sacrificed instead. However, you can introduce more information to rule that out - the healthy traveller's organs are compatible with all the others, but none of the others are compatible with each other. We now have a restored dilemma in which killing one person saves more. (This ignores organ rejection and decline - most transplanted hearts will fail within a decade, for example, but let's imagine that there's no such problem.)
One of the important factors here is that no one wants to live in a world where they could be killed in such a way to save the lives of ill people (who wouldn't want to be saved in such a way either) - it's bad enough that you could die in accidents caused by factors outside of anyone's control, but you don't want to live in fear that you'll be selected for death to mend other people who may be to blame for their own medical problem or who may have bad genes which really shouldn't be passed on. You also don't want the fact that you've been careful to stay as healthy as possible to turn you into a preferred donor either - that could drive people to live unhealthy lives as it might be safer to risk being someone who needs a transplant than to be a good organ donor. However, if people's own morality is taken into account, it would serve someone right if they were used in this way if they've spent their life abusing others. As with all other moral issues, you have to identify as many factors as possible and then weight them appropriately so that the best outcome is more likely to be produced. A lot of the data needed to make ideal decisions isn't available yet though - it would take a lot of studying to find out how people feel in such situations and afterwards so that the total amount of harm can be counted up.
Since a universal moral standard concerns long-term results, it would take a lot of factors to calculate, which might not make it practical. Bad results might come before the decision is made due to the long duration of the calculation, and the factors influencing the calculation might have changed before the calculation is complete. Hence we need to create shortcuts, rules of thumb, or a hash table to deal with frequently occurring situations. They must be reasonably easy to calculate and work in most cases. Their application should align with the spirit of the universal moral standard. This comparison might be made retrospectively, when the decision has already been made before the calculation based on the universal moral standard is finished. When they are in conflict, some exception should be made to the application of those shortcut rules.

Biological evolution has provided us with a basic and simple shortcut rule, which is to avoid pain. This can be done through reflex, which is very fast since it doesn't involve the central nervous system. Slightly more complex rules are our instincts to seek pleasure and to avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.
The mathematical resolution of the simplest trolley problem assumes that your universal moral standard is to maximise the number of live humans. Since this will inevitably lead to the starvation of our descendants, it is a questionable basis for ethics.
The mathematical resolution of the simplest trolley problem assumes that your universal moral standard is to maximise the number of live humans. Since this will inevitably lead to the starvation of our descendants, it is a questionable basis for ethics.

A lot of disputes may arise if we don't agree on the scope of the subject of discussion. I've stated that the universal moral standard is not limited to something as narrow as the existence of human beings; it applies as long as there are conscious beings. It should have been in place before modern humans existed, and it should still be in place when humans have evolved into other species, as long as there exist conscious beings.
The goal is what is preferred in the long run. The rules used as shortcuts are the tool.

Biological evolution has provided us with a basic and simple shortcut rule, which is to avoid pain. This can be done through reflex, which is very fast since it doesn't involve the central nervous system. Slightly more complex rules are our instincts to seek pleasure and to avoid suffering. I think hedonism and utilitarianism confuse the tool with the goal.
I can't follow that. What's the tool there and what's the goal?
No matter what the species or timescale, if maximisation of the number of living organisms is the prime objective and it has the unfettered capacity to maximise, it will eventually run out of food or poison itself with its own excrement. Never mind humans, you can observe the endpoint with lemmings and yeast (which is why wine never exceeds 20% alcohol).

I think you might want to revisit my answers to the what, who, when, where, why, and how questions about morality in posts #9, #18, #29, #30, #33, #35, #39, #40, #41, #45-#48.
"Do as you would be done by" looks like a more generally applicable motto, but the fact that it can't be applied to the trolley problem suggests that there may not be a single universal moral standard. And here's where my thinking became suddenly heretical and digressive:
In the absence of a universal principle, we often choose an arbitrary standard. "The man on the Clapham omnibus" serves for many legal questions but some people revert to a single figure and ask "what would Jesus do?" Sitting here, my first thought was "well, he wouldn't eat pork" (I've been refereeing a medical experiment that involves eating a standard fatty meal)...and then (apropos lemmings, I suppose) I wondered about the Gadarene swine. Who was herding pigs in Israel?
To answer your question, I need first to continue my assertion about the progress of increasing complexity of shortcut rules provided by biological evolution. With increasing complexity, more factors can be included in the calculation to generate actionable output. More complex rules can accommodate more steps into the future, and at some point they appear as planned actions.
I can't follow that. What's the tool there and what's the goal?
So when you say utilitarianism is confusing the tool with the goal, how is it confusing a shortcut with what's preferred in the long run? Where's the incompatibility between the two?
unfortunately, I must answer the problem with a simple conclusion:
no, there is no "universal moral standard".
there can never be a "universal moral standard" until every sentient species in the universe agrees upon the standard of the combined species.
what do you have against eating pussies?
(edit: crikey - it doesn't like the singular of pussies!)
"I believe in the laws of karma."
I do not...why do they keep following me? :)
The "golden rule" is subject to sampling error.
It is fairly obvious that a "family" group (could be a biological family or a temporary unit like a ship's crew) will function better if its members can trust each other. The military understand this: selection for specialist duties includes checks for honesty and recognition that you are fighting for your mates first, your country second. So the "greatest happiness for the greatest number" (GHGN) metric is fairly easy to determine where N < 50, say.
Brexit provides a fine example of the breakdown of GHGN for very large N. There is no doubt that a customs union is good for business: whether you are an importer or an exporter, N is small and fewer rules and tariffs mean more profit. But if the nation as a whole (N is large) imports more than it exports, increased business flow overall means more loss, hence devaluation and reduced public budgets. At its simplest, you could model a trading nation as consisting of just two businesses of roughly equal size and turnover, Nimp ≈ Nexp. Good news for any sample of size ≤ 2N is bad news for the whole population if Nimp > Nexp by even a small amount, hence the interesting conundrum "EU good for British business, bad for Britain".
an excellent view of the golden rule from the capitalist's perspective.
but you did not give the socialist's view of the golden rule...it seems to me it would be different.
ATMD,
thank you for your well stated comments.
i do not wish to push your patience with a follow up question...but because this topic involves morality, may i ask:
in a just world, in which morality (of the "good" kind) serves the greater good of the people; and socialism follows more of a philosophy of "Due unto others" (IOW those with the most should share evenly with those with the least)...how is it that the richest (i.e. capitalist) countries are thriving, while socialist societies around the world are finding their people fleeing to find help from the wealthiest capitalist societies?
might it be that God's morality dictates that the poor should serve the wealthy?
"God's morality" dictates whatever you want it to dictate. That is the reason for inventing gods.
if religious belief gives someone comfort, I am happy for them.
my major concern is that the religious doctrines handed down to modern day people were written originally by superstitious people and modified/altered over time; so that to my thinking they are pretty much unreliable as a guide.
The golden rule relies on the assumption that both parties are rational agents with compatible preferences. It doesn't work when the assumption isn't fulfilled, such as in one-sided love.

The "golden rule" is subject to sampling error.
It is fairly obvious that a "family" group (could be a biological family or a temporary unit like a ship's crew) will function better if its members can trust each other. The military understand this: selection for specialist duties includes checks for honesty and recognition that you are fighting for your mates first, your country second. So the "greatest happiness for the greatest number" (GHGN) metric is fairly easy to determine where N < 50, say.
Brexit provides a fine example of the breakdown of GHGN for very large N. There is no doubt that a customs union is good for business: whether you are an importer or an exporter, N is small and fewer rules and tariffs means more profit . But if the nation as a whole (N is large) imports more than it exports, increased business flow overall means more loss, hence devaluation and reduced public budgets. At its simplest, you could model a trading nation as consisting of just two businesses of roughly equal size and turnover Nimp ≈ Nexp. Good news for any sample of size ≤ 2N is bad news for the whole population if Nimp > Nexp by even a small amount, hence the interesting conundrum "EU good for British business, bad for Britain".
I see nothing wrong with the Golden Rule. A business operates on profit making rather than morality, if it does not profit, it ceases to be a business over time. It is for everyone's best interest that a business can continue to serve its customers. Otherwise, where are the customers going to get their goods and services? Customers have to be willing to give profits to businesses as incentive to keep them operating.
Premise 1: As a seller I want to maximize profit.
Premise 2: As a buyer, I want to minimize the seller's profit (pay the lowest price).
Let's look at the Golden Rule when applied to business.
If we follow this rule to its full extent, the seller would want to give as much discount to the buyer as possible (because that would be what he would have wanted if he were the buyer). Conversely, the buyer would not ask for a single discount (because that would be what he would have wanted if he were the seller).
When the golden rule is applied, these two actions cancel each other out.
In the sampling error illustration, the nation exporting to Britain receives the surplus profits. Yes Britain incurs a trade deficit, but this trade deficit is exactly offset by the trade surplus of the other country. There is no change in the system, simply an aggregate flow of money from Britain to the exporting nation. The trade deficit is comparable to the profit that we as buyers are willing to give sellers so that they would continue to operate and provide us the goods and services that we need.
We can put some milestones in the continuum of complexity of shortcut rules. The next step from instinct is emotion. Emotion includes the anticipation of near-future events. We can feel sadness/happiness/fear/anger before events which potentially cause pleasure/pain actually happen. The next steps from emotion are thoughtful actions, which require the systems to simulate their environments in their internal memory, and then choose the action based on the most preferred calculated result. More complex systems allow for more reliable results due to better precision and accuracy of the models in their memory, incorporating more factors and a wider range in space and time. They can plan their actions to get the best result further into the future.
The progress of increasing complexity can be seen in the development of a human from fetus to adult. Fetuses only have reflexes. Babies have developed instincts. Toddlers may have shown emotions. Little kids can plan actions for results a few days ahead. Older kids can make longer-term plans, perhaps into the next few years. Adult humans can plan for the next decades. Wise men may have plans for the next centuries or millennia.
the migrants now stranded at the southern USA border came from socialist countries. they saw the failures of the system, and might establish rules that overcome its weaknesses.
I love George Carlin, he is considered one of the best comedians of all time.

I agree. Though some of his material is considered too dark for PC culture people.
Morality tests such as the trolley problem are used to sort the priorities of moral rules based on which action leads to the more preferable conditions.

Unfortunately, most social experiments involving the trolley problem or its variants don't produce a scientifically objective conclusion on which option is considered morally correct. They merely mention which one is chosen by most respondents, which may give different results when asked of different population samples at different times. It's also unclear which moral values are represented by each option.
Moral rules themselves are strategies to protect conscient beings from destructive actions by other conscient beings. They are part of the multilayer protection strategy.
Moral rules can't be applied to fetuses or babies, since they lack the capability for thoughtful action. Any damage caused by their action/inaction is not their fault.
"Religions and cults may arise from that."This video shows the difference between cult and religion with some examples.
I have long held that religions and cults are synonymous. religious beliefs start out as "cults", and if they survive condemnation and persecution, eventually are accepted as "religions".
belief that a man could rise from the dead, getting "clear" by paying large sums, or receiving knowledge from scrolls readable only with magic spectacles were, at the beginning of their creation, considered cults.
now, many accept them as true religions.
I'd like to share this entertaining take on moral rules. I hope you enjoy it: George Carlin - 10 Commandments.

Carlin's first commandment about honesty works most of the time, but it has limiting conditions. We should not be honest when communicating with someone doing immoral things, such as a mass shooter asking about people's hiding places, or how to fix a jammed gun. This means that there are moral rules with higher priority than honesty.
Apart from the exception above, there must be some positive value in honesty that makes it widely accepted as moral guidance. In normal situations, being honest is the simplest way of communicating. Dishonesty requires additional steps of information processing.
Life is preferred to death, health is preferred to sickness, and happiness is preferred to suffering.

Some questions naturally arise from those foundations of well-being. Is there a priority among them? Which one has the highest priority? Which is the lowest? How do we determine that (what is the rule/criterion)? Is there any exception to that rule?
The best way to explore morality is through thought experiments. Create a scenario and then apply moral rules to it to see if they produce outcomes that feel right (because there are no better alternatives). If they obviously fail that test, they're almost always wrong, but you'll be comparing them with some internalised method of judgement whose rules you don't consciously understand, so what feels right could be wrong. Correct morality depends on thinking the scenario through from the point of view of all the players involved in it in order to be fair to all, and if we consciously use that as our way of calculating morality as well as doing this subconsciously (where we generate a feel for what's right), the two things should be the same and will always match up.

Exactly. That's what I'll try to do next in this topic. I'll demonstrate how the universal moral rule that I've proposed previously can be used to answer the questions above.
Thought experiments cut through the waffle, showing which rules fall flat and which remain in play. Once some rules have been rejected in this way, they shouldn't keep being brought back in - they've been debunked already and shouldn't be left on the table.
To me it is a human construct, and therefore even if two people agree on some moral issue or other, how each of them sees it is going to be different. So I think putting a word like universal beside morality etc. is failing to understand what it is in the first place.

If you limit the applicability of the moral rules to humans only, then of course putting the word universal makes it an oxymoron. Besides, you also have to define the boundaries of humanity itself, which separate human from non-human. Is a Homo sapiens fetus considered human? What about other Homo species such as Neanderthals and Denisovans? What about their hybrids with Homo sapiens, like many of us non-African people? What about future descendants of humans who colonize Mars and evolve until their DNA is no longer compatible with present humans?
I'll recap my assertions in the following points:
1. There exists a law of causality. Otherwise everything happens randomly, hence there's no point in making plans or responding to anything. In making a plan, a goal must be set, and some rules must be defined to respond to expected situations while executing it, so the goal can be achieved effectively.
2. Moral rules only apply to conscious beings. Hence keeping the existence of conscious beings is one of the highest priority moral rules, if not the highest. If someone can propose another moral rule with even higher priority, it is still necessary to have at least one conscious being to follow it. Hence keeping the existence of conscious beings comes back as the highest priority.
3. We should evaluate actions/decisions based on their effect on the fulfillment of the ultimate goal. Due to the imperfect information that we have and the uncertainty of the far future, we may not be able to finish a complete calculation in time. That's why we need rules of thumb, shortcuts or simplified calculations to speed up the result while mostly producing correct answers. Hence the calculation output will take the form of a probability or likelihood.
There is no special form of morality for humans - morality, when done correctly, is universal, applying to animals, aliens and to all sentient things. Any attempt to define morality which excludes some sentient things because they don't fit the rules of that system is wrong, as is any attempt that has a bias towards humans.

That's what I'm trying to prove here. Thanks for your contributions to this discussion. Critical thinkers like you are what I need to help me build a convincing argument by pointing out errors, uncovering my blind spots, proposing possible alternatives and providing valuable new information.
It's [Homo sapiens] currently the only known form of conscious being that is self-sustainable.
...nobody on these boards has ever, to my knowledge, offered a useful definition of "conscious" that excluded any other species of plant or animal.
You're right. It turns out to be very hard to point out what makes humans so special among other life forms that it grants them higher priority in moral rules. But still, most people will argue that if a stranger human being and any other life form are on either side of the trolley problem's tracks, they will choose to save the human. Choosing otherwise will get them branded as immoral.

It's [Homo sapiens] currently the only known form of conscious being that is self-sustainable.
I think you are using a very narrow definition of conscious and a very broad definition of self-sustainable. We survive by collaboration and exploitation, and nobody on these boards has ever, to my knowledge, offered a useful definition of "conscious" that excluded any other species of plant or animal.
What happens if aliens turn up and apply our moral standards to us with the roles reversed? If we complain about their insistence that they matter and that we don't, they'll just tell us that we're primitive animals because we were stupid enough to consider ourselves to be superior to them, whereas if we hadn't made that mistake, they'd have recognised us as their equals. Getting morality wrong is to sign your own death warrant.

A high level of consciousness is manifested in the form of wisdom, which includes avoiding unnecessary risks. We should avoid mutual destruction, such as what we feared during the Cold War.
But still, most people will argue that if a stranger human being and any other life form are on either side of the trolley problem's tracks, they will choose to save the human. Choosing otherwise will get them branded as immoral.

The default is to give strangers the benefit of the doubt and, like every other animal, to give preference to our own species in the absence of any other information. But given the choice between Donald Trump and a chicken, I'd save the chicken every time.
What makes humans special is other humans. From the point of view of every other species (except dogs) we are either food, competition for food, or predators. Nothing special. Even dogs have an equivocal attitude: one or two familiar dogs may help you hunt or protect you, but "dog eats baby" is an everyday headline and a hungry pack will happily kill an adult.

Forming packs is nothing unusual. Termites and bees have a hugely structured society that plans ahead. Ants even farm other animals. Warfare between packs is usually rational (wolves defend their hunting territory against other packs) and occasionally irrational (marauding bands of male chimpanzees attack other families for no apparent reason), but only humans kill each other at long range because they think that their chosen enemy worships a different god - or none at all.

The extent to which humans will exert themselves to make poisons like tobacco or methamphetamine, to climb ice-covered rocks, or to jump out of aeroplanes, is unparalleled. The best definition of intelligence is "constructive laziness", and it's a surprisingly rare commodity, whereas its opposite is abundant and even revered as "art" or "philosophy".

Humanity can be seen as the successor of our ancestors. If we trace back far enough, they won't be recognized as human. Similarly, our far-future successors may not be recognized as human. Currently, humans are the biological beings with the most advanced level of consciousness. The gap with the next group is quite significant.
The default is to give strangers the benefit of the doubt and, like every other animal, to give preference to our own species in the absence of any other information. But given the choice between Donald Trump and a chicken, I'd save the chicken every time.

As I mentioned above, currently, humans are our only hope to prevent catastrophic events from eliminating conscious beings. Hence, the preservation of humans is in line with the universal moral rule.
As I mentioned above, currently, humans are our only hope to prevent catastrophic events from eliminating conscious beings.

Far from it.
Currently, humans are the biological beings with the most advanced level of consciousness.
So you think fewer humans is better. How low can you go? Is zero the best? What do you propose to get there? Do you agree with the genius who makes all people stop reproducing, as I mentioned in a previous post in this topic?
If you believe in consensus, then humans are responsible for catastrophic climate change that will be as disastrous as the extinction of the dinosaurs.
If you believe in science, it is clear that the absence of humans from the Chernobyl exclusion zone has allowed every native species of mammal from mice to wolves, to flourish in a garden of robust plants.
If you believe in history, you will have noted the disastrous effect of arable farming in the American dustbowl, deforestation of Easter Island, and gradual loss of freshwater habitat in Bangladesh, all due to the unlimited presence of a relatively new species (hom sap) with no significant predators.
The solution to the preservation of life on earth is fewer humans.
I've mentioned that consciousness is multidimensional. We can compare conscious beings by how far ahead they can make plans or prepare their actions. Other key performance indicators are information processing speed, memory capacity and reliability, which determine how well their mind represents reality, which in turn determines the probability of achieving their goals. Their ability to filter incoming information is also important, to prevent them from making false assumptions which lead to bad decisions and unexpected results.

Currently, humans are the biological beings with the most advanced level of consciousness.
Please define consciousness.
If humans represent the highest level of it, then consciousness appears to be defined by a tendency to self-harm, genocide, irrational belief, or the deliberate destruction of food to support market prices.
You cut both wires at the same time and discover that the rules stated as certainties are actually impossible.
In electronics, you can design the priority between those triggers. In an RS flip-flop the reset command is dominant, while in an SR flip-flop it's the set command. Both are called bistable multivibrators.
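As a rough illustration of that priority idea (my own sketch, not from the thread; the function names and truth-table conventions are assumptions for illustration), the two behaviours can be modelled as next-state functions of a latch:

# Minimal sketch: two latch conventions that resolve the "both inputs asserted"
# conflict differently. Purely illustrative, not a claim about any real device.
def reset_dominant(q, s, r):
    """Next state when the reset input wins if both s and r are asserted."""
    return (s or q) and not r

def set_dominant(q, s, r):
    """Next state when the set input wins if both s and r are asserted."""
    return s or (q and not r)

q_rd = q_sd = False
for s, r in [(True, False), (False, False), (True, True), (False, False)]:
    q_rd = reset_dominant(q_rd, s, r)
    q_sd = set_dominant(q_sd, s, r)
    print(f"S={int(s)} R={int(r)}: reset-dominant Q={int(q_rd)}, set-dominant Q={int(q_sd)}")

With both inputs asserted, the reset-dominant version clears the output while the set-dominant version keeps it high, which is the kind of designed-in priority described above.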
Here are some examples to demonstrate that moral judgment is closely related to knowledge and uncertainty.
You are in a tall and large building, and find a massive time bomb which is impossible to move before disarming it. You can see red and blue wires on the detonator, and a countdown clock showing that there are only 2 minutes left before it explodes. You are an expert in explosives, so you know for certain the following premises:
- If you cut the red wire, the bomb will be disarmed.
- If you cut the blue wire, the bomb will explode immediately, destroying the entire building and killing thousands inside.
- If you do nothing about the bomb, the timer will eventually trigger it.
Which is the most moral decision you can take, which is the least moral, and why?
Why not try and find the wires powering the timer? If that was put out of action, a more detailed examination could be made.
In this thread I don't want to go too deep into technical details. I think it's adequate to describe the cause-and-effect relationships in the situation to determine which action to take to get the most desired possible result.
There are two possibilities: the timer is supplying a signal to the detonator that stops it detonating, or when the time runs out it sends a signal to the detonator to make it explode. If you stop the timer you can check which it is.
There is a worrying possibility that the wires powering the counter also power the "don't detonate" signal generator, so try and find an alternative way to stop the counter!
If the timer is mechanical you could try zapping it with a CO2 fire extinguisher if one is handy. Best of luck.
If I was building this device I would incorporate a small battery in the detonator box, make the signal from the timer "don't explode", and use the other wire to prime the device.
You would only have to provide a "don't explode" signal and cut the signal from the timer.
I am assuming only DC signals are used; if one used AC signals and frequency-sensitive detectors it would be a whole new ball game.
If this topic is about universal morals, then the bomb question doesn't have nearly enough information. Why mess with a device that has a clear purpose? It has not been stated that there is a goal to preserve the building. Maybe the bomb was put there by a demolition crew who was paid to take it down.
Yes, it is about universal morals. And yes, the situation was designed to show that moral judgement is closely related to knowledge and uncertainty.
Suppose the building is full of puppies. It is a universal law that it is bad to damage something cute, correct? If so, you've already begged your answer. If not, how am I to know what to do with the bomb even if it has a simple 'off' switch available? The universe seems to provide no input for the situation at hand.
However, let's assume the higher floors of the building are full of people who can't possibly get out in two minutes (or even be warned within two minutes), that the bomb is on the ground floor, and that the building will collapse as soon as it blows. There is nothing immoral about not risking your own death in order to have a 50:50 chance of saving lots of other people, so you are entitled to run out of there and let it blow. If AGI is making the decision though, it could lock you in with the bomb so that you don't have a choice - that would be its moral decision. You would then cut one of the wires, randomly selected.
To determine what's the universally most moral action in a particular situation, we need first to determine what universal goal we want to achieve, and then calculate and compare the expected results we would get by taking the available actions. We should take the actions expected to get us closest to the universal goal.
There are some other factors though. If the person who has to run or cut a wire is more valuable to humanity than the sum total worth of all the other people in the building, AGI will not lock him/her in the room, but will order him/her to get out of there and let the building blow. If the building is full of Nazis who are attending a conference, that could well happen.
I was brought up as a technician and find designing bombs more interesting than pondering moral questions.
I hope I can entertain you in another thread.
Here are some examples to demonstrate that moral judgment is closely related to knowledge and uncertainty.
Let's say that you are the one who built the bomb, hence you know for certain that the premises above are true. Suppose you designed the detonator as an SR flip-flop, so when both wires are cut, the bomb will explode immediately. As pointed out by David and Halc, to determine the moral judgement for each option, we need to have information about the further consequences they bring. This thread will explore how they can be assessed if all the required information is available.
Yes, it is about universal morals. And yes, the situation was designed to show that moral judgement is closely related to knowledge and uncertainty.
Agree with all of this. Suppose we have full knowledge of the situation. We have the uncertainty if you want it, like an even chance that cutting a wire will halt or blow the bomb.
Unfortunately, cuteness is not a universal value. Something cute to someone might not be cute to someone else.
To determine what's the universally most moral action in a particular situation, we need first to determine what universal goal we want to achieve, and then calculate and compare the expected results we would get by taking the available actions. We should take the actions expected to get us closest to the universal goal.
Agree with this if a universal goal can be found, but I don't think there are objective goals. I absolutely agree that the goals should be considered first. What's good for one goal is not so good for others. The Catholic church's stance on birth control, for example, seems designed to bring about the demise of humanity in the shortest possible time. They don't seem to consider long-term goals at all, or are counting on forcing God's hand, like that's ever worked.
Someone might have good intentions when making a moral decision, but their decision may produce an undesired result if it's based on false information, such as the swapped wires of the time bomb.
That part seems irrelevant since it cannot be helped. A person cannot be faulted for having good intentions and attempting what seemed best. It seems irrelevant twice because if he chooses to cut no wire, everybody in the building still dies, so the wrong choice just takes out our hero, but nobody else that wasn't already doomed. I think he'd not forgive himself if he didn't try, but only if attempting the disarming was the right thing to do in the first place, and we haven't determined that.
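That trade-off can be made concrete with a rough expected-outcome calculation. This is only my sketch of the reasoning above; the occupant count, the escape assumption for the "do nothing" option, and the probability values are all assumptions, not details from the scenario.

# Rough sketch: expected deaths for each option when the rescuer assigns
# probability p to "red is the disarm wire". All numbers are assumptions.
OCCUPANTS = 1000   # assumed number of people who die if the bomb goes off

def expected_deaths(option, p_red_safe):
    if option == "do nothing":
        return OCCUPANTS                           # hero escapes, occupants are lost
    if option == "cut red":
        return (1 - p_red_safe) * (OCCUPANTS + 1)  # a wrong guess also kills the hero
    if option == "cut blue":
        return p_red_safe * (OCCUPANTS + 1)
    raise ValueError(option)

for p in (0.5, 0.9, 1.0):
    scores = {o: expected_deaths(o, p) for o in ("do nothing", "cut red", "cut blue")}
    best = min(scores, key=scores.get)
    print(f"p(red safe)={p}: {scores} -> least expected harm: {best}")

At p = 0.5 either cut roughly halves the expected loss compared with walking away, which is the sense in which attempting the disarming can still be the better gamble even under complete uncertainty about which wire is which.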
Here are some possible scenarios which could bring you to the above situation.
- You are hired by a building contractor to destroy an old building so they can build a new one. You just got the date and month wrong; perhaps you and your client used different formats.
- You are a national secret service agent ordered to destroy the enemy's headquarters. You are discovered by an enemy guard when you try to sneak out.
- You are a mercenary hired by a terrorist organization to destroy its enemy's economic center. You are waiting to get payment confirmation.
- You are a volunteer member of a terrorist organization planning to destroy its enemy's economic center. You are willing to die to execute the job.
In all 4 of these cases, you're taking your orders from your employer. You have a goal, and it isn't a universal one. You do your job. If you work for someone you find immoral, then you know you're helping them do immoral acts. Most terrorists/soldiers don't consider their acts immoral.
If the building is full of Nazis who are attending a conference, that could well happen.
You assumed that the decision maker has the information that Nazis are bad and decided that the universe would be better off without them. Could you show how we could arrive at that conclusion?
Agree with this if a universal goal can be found, but I don't think there are objective goals. I absolutely agree that the goals should be considered first. What's good for one goal is not so good for others.
That's what this thread was started for in the first place. I have tried to find one by simply answering basic questions about morality (what, who, where, when, why, how) in my previous posts.
That's why I prefer the term universal instead of objective, which means that the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable to any conscious beings that might exist in the universe. This restriction gives us a reason to reject nihilism, which can make us struggle to answer the question "why don't you just kill yourself if you think that nothing really matters?"
A universal terminal goal must be something extremely important, such that any conscious being with sufficient information should try to achieve it, to the extent that they are willing to sacrifice any other goal conceivable. For a start, we can compare a proposed terminal goal with another thing that we usually place at high priority, such as our own life. Is there something more important than our own life?
Perhaps the term objective morality is a bit of an oxymoron, because the word objective implies independence from point of view, while morality can only apply to conscious beings who have exceeded a certain consciousness level or mental capacity.
Then I don't know what you're asking in this topic if not for a standard that is independent of any particular point of view.
An action cannot be judged to be morally wrong when the subject doesn't have the adequate mental capacity to differentiate between right and wrong.
So the subject doesn't know if what it's doing is right or wrong. Does this epistemological distinction matter? If some action is wrong, then doing that action is wrong, period, regardless of whether the thing doing it knows it's wrong or not.
You can pee and show your genitals in public without being judged as immoral if you are a baby.
Showing genitals is not a peer-group specific thing? Seems unlikely given the 99% majority of beings that are unconcerned with it, and even humans decorate just about anything with plant genitals (flowers). Sorry to jump on this, but I find it an unlikely candidate for a universal rule.
That's why I prefer the term universal instead of objective, which means that the ultimate goal we should use to evaluate morality is restricted to the point of view of conscious beings, but still applicable to any conscious beings that might exist in the universe.
Is a self-driving car conscious? It certainly has better awareness than a human, and carries moral responsibility for its occupants, and makes real decisions based on such values. But the values are programmed in (not even learned like some AI systems), and are not drawn from 'the universe'.
This restriction gives us a reason to reject nihilism, which can make us struggle to answer the question "why don't you just kill yourself if you think that nothing really matters?"
A nihilist doesn't deny that things matter, only that they matter universally. My life definitely matters to me and mine and those with whom I interact. But I don't think the universe gives a hoot about my existence. Not sure if that makes me a nihilist.
Let's see what the dictionary says about nihilism. I just googled it:
noun
the rejection of all religious and moral principles, in the belief that life is meaningless.
synonyms: negativity, cynicism, pessimism
PHILOSOPHY
extreme skepticism maintaining that nothing in the world has a real existence.
HISTORICAL
the doctrine of an extreme Russian revolutionary party c. 1900 which found nothing to approve of in the established social order.
I'm not one then, since I very much think there are moral principles, and some of them religious. I've already stated that life has meaning for me. I just don't think those principles that obviously exist to me are universal. They're just a product of my parents and other people around me.
Nazis are people who approve of killing others who are of an "impure race". Such people are so highly immoral that it is arguably immoral not to kill them: tolerating them leads to a lot of good people being killed. That's a hard one to weigh up though without a lot of careful checking and statistical analysis, and of course, the Nazis could claim that they were trying to do exactly the same thing by killing people they regarded as dangerous bigots. This is not something that people are fit to judge: it needs to be investigated by AGI which can crunch all the available data instead of small subsets of it which may be greatly biased.
For a conscious being who has perfect knowledge of the relevant circumstances, including an understanding of the universal terminal goal and moral standards, every immoral action and behavior can be identified as misinformation which leads to misplaced priorities. This means that the immoral actors choose actions which consequently deter the effort to achieve the universal terminal goal. Let's try to identify which priorities are misplaced in the following immoral actions:
Morality is completely resolved though: we know how it works. Blowing up a building with 1 good person in it will do magnitudes more harm than blowing up a building with a billion spiders in it. To work out what's moral, all you have to do is reduce a multi-participant system to a single-participant system, and then it's all just a harm:benefit calculation. Let's have two buildings: one with a billion spiders in it and one with one good person in it. Both of them will blow up unless we choose which one to sacrifice and press a button to select that. We treat this system in such a way that we imagine there is only one participant in it who will have to live the lives of all the participants in turn, so he will be the one that experiences all the suffering involved. He is not only the person in one building and the billion spiders in the other, but he is all the spiders on the planet and all the people. If we choose to blow up the building with the spiders in it, none of the other spiders on the planet care at all, and the ones that were fried hardly even noticed. They had no idea how long they could have lived, and they would have died anyway in ways that would likely have involved more suffering, not least because spiders "eat" each other (by paralysing them and then sucking them dry). If we choose to blow up the building with the person in it instead, there's no great gain from saving all those spiders, but we'll have a lot of devastated people about who knew and cared about that person who was blown up instead. Our single participant in this system would experience all that suffering because he will live the lives of all of them, and living longer lives as a billion spiders isn't much compensation.
I know from my Twitter feed that many people are willing to sacrifice a trophy hunter to save their prey. They cheered when a matador was gored by the bull.
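One way to read that "single participant living every life in turn" idea is as a simple sum over everyone affected. The sketch below is only my interpretation of it; the harm scores and group sizes are placeholder numbers, not values proposed in the thread.

# Illustrative sketch of the single-participant reduction described above.
# Harm/benefit scores and counts are placeholders, not values from the thread.
def net_harm(groups):
    """groups: (count, harm_per_individual, benefit_per_individual) tuples,
    summed as if one individual lived every affected life in turn."""
    return sum(n * (h - b) for n, h, b in groups)

spider_building = [(1_000_000_000, 1e-6, 0.0)]           # spiders barely notice
person_building = [(1, 1000.0, 0.0), (20, 50.0, 0.0)]    # victim plus grieving friends

print("net harm if the spider building blows:", net_harm(spider_building))
print("net harm if the person's building blows:", net_harm(person_building))

With these made-up numbers the spider building comes out as the lesser harm, matching the conclusion argued above; the dispute in the thread is, of course, about whether such scores can be assigned non-arbitrarily.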
That's David's quote, not mine. I would not have said that.
I used the "quote selected" command from the action button. I didn't realize that it gives the wrong attribution.
Then I don't know what you're asking in this topic if not for a standard that is independent of any particular point of view.
As I said in the post, I restricted the use of moral rules to conscious beings. You cannot judge some action as immoral from the point of view of viruses, for instance.
As for conscious beings, I'm not sure how you define that, or how it's relevant. The usual definition is 'just like me', meaning it isn't immoral to mistreat the aliens when they show up because they're not just like us.
As for conscious beings, I'm not sure how you define that, or how it's relevant. The usual definition is 'just like me', meaning it isn't immoral to mistreat the aliens when they show up because they're not just like us.
I have answered that question here: https://www.thenakedscientists.com/forum/index.php?topic=75380.msg559662#msg559662
An example of moral beings (without requirement of having consciousness or mental capacity) is the individual cells of any creature's body, which work selflessly as a team for the benefit of the group. There isn't a code that even begins to resemble the usual 10 commandments, but it does resemble the whole 'love thy brother like thyself' going on. Humans, for all their supposed intelligence, cannot see beyond themselves and work for a greater goal, or even name the goal for that matter. I'm just saying that if the aliens come, they'll notice that fact before they notice all our toys.
IMO, they are just automatons which lack the capability to estimate the consequences of their actions. They act/react that way just because it helps them survive, or at least doesn't lead them to extinction. They don't follow moral rules, hence theirs are not moral actions.
I'll recap my assertions into the following points:
1. There exists a law of causality. Otherwise everything happens randomly, hence there's no point in making plans or responding to anything. In making a plan, a goal must be set, and some rules must be defined to respond to expected situations while executing it, so the goal can be achieved effectively.
2. Moral rules only apply to conscious beings. Hence keeping conscious beings in existence is one of the highest priority moral rules, if not the highest. If someone can propose another moral rule with even higher priority, it is necessary to have at least one conscious being to follow it. Hence keeping conscious beings in existence comes back as the highest priority.
3. We should evaluate actions/decisions based on their effect on the fulfillment of the ultimate goal. Due to the imperfect information that we have and the uncertainty of the far future, we may not be able to finish a complete calculation in time. That's why we need rules of thumb, shortcuts or simplified calculations to speed up the result while mostly producing correct answers. Hence the calculation output will take the form of a probability or likelihood.
4. The moral calculation should be done using the scientific method, which is objective, reliable, and self-correcting when new information is available (a minimal sketch of such an update follows this list). Good intentions pursued in the wrong way will give us unintended results.
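The self-correcting step in point 4 can be pictured as a Bayesian update of the probability that an action leads to the desired result. Everything in the sketch below (the prior, the likelihood values, the wire-inspection framing) is an assumption for illustration, not something established in the thread.

# Toy illustration of point 4: revising a probability estimate as new evidence arrives.
# The prior, the likelihoods and the "inspection" framing are assumptions only.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

p_red_is_safe = 0.5   # initial rule-of-thumb guess about the bomb's red wire
# Suppose an inspection result that is 80% likely if red really is the safe wire
# and only 30% likely if it is not.
p_red_is_safe = bayes_update(p_red_is_safe, 0.8, 0.3)
print(f"updated belief that red is the safe wire: {p_red_is_safe:.2f}")  # about 0.73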
If a virus does something against a universal moral code, then it has done something wrong, even if it lacks the ability to know about it. Consciousness seems to play no role. A frog for instance seems conscious of water and flies and such, but like the virus, it probably has little perception of universal right and wrong. The addition of consciousness seems not to have helped it with this perception.
Without a universal terminal goal, we cannot set up universal moral rules.
Sounds reasonable.
This will lead us to moral relativism. In its most extreme form, you cannot judge any action as immoral, because they are always right, at least from the standpoint of the actor.
I beg to differ. I've done things I know are not right, even from my own standpoint. I feel free to judge myself and my peers, but not according to universal rules, because I am not aware of any, just as I am not aware of any universal terminal goals.
You define it there as a spectrum (and I agree with that), but above you make it a binary thing where some critical threshold needs to be crossed. Where is that threshold? Just above a virus? No? Just humans? If so, how then is your definition not the usual one I mentioned?
As for conscious beings, I'm not sure how you define that. The usual definition is 'just like me'...
I have answered that question [in post 38]. I think that the definition you mentioned is not as usual as you think.
IMO, [cells of a body] are just automatons which lack the capability to estimate the consequences of their actions.
The consequence of immoral action is impairment/death of the group, so I think they're quite aware of the moral code, the need to work as a team. Yes, they're automatons, as is any physical construct. I'm just a more complex one than a cell, but one far less in tune with any terminal goals of the larger group. I'm far less moral than are my cells.
They act/react that way just because it helps them survive.
What are moral rules except rules that help the survival rate of the group that defines the morals? That's not universal, that's morals of the group. Cells follow the morals of the body and not anything larger than that.
I beg to differ. I've done things I know are not right, even from my own standpoint. I feel free to judge myself and my peers, but not according to universal rules, because I am not aware of any, just as I am not aware of any universal terminal goals.
How do you judge if an action is morally right or wrong? What is your highest priority? Is there something more important than your own life that you are willing to sacrifice for it?
You define it there as a spectrum (and I agree with that), but above you make it a binary thing where some critical threshold needs to be crossed. Where is that threshold? Just above a virus? No? Just humans? If so, how then is your definition not the usual one I mentioned?
Not all moral rules have the same level of complexity. Some moral rules are simple enough to be followed by kids. We can't expect a moral agent to follow moral rules whose complexities are beyond their capability to comprehend.
What are moral rules except rules that help the survival rate of the group that defines the morals? That's not universal, that's morals of the group. Cells follow the morals of the body and not anything larger than that.
Have you tried to expand the group that defines the moral rules? Can you find a moral rule that's applicable to all human beings? I have proposed to expand the group to all conscious beings if we want to find universal moral rules. I have also excluded non-conscious beings from the group that defines moral rules so that the rules don't fall back to just "anything goes".
I'm not trying to be contradictory, just trying to illustrate the lack of difference between a human and anything else, and the complete lack of a code that comes from anywhere else except the group with which you relate. Yes, I'm a relativist, in far more ways than just moral relativism.
How do you judge if an action is morally right or wrong?
I've been taught them by parents, community, employer, etc.
Is there something more important than your own life that you are willing to sacrifice for it?
Of course. I'm a parent, for one thing.
Have you tried to expand the group that defines the moral rules?
More than most do, yes.
Can you find a moral rule that's applicable to all human beings?
One that they'd all agree on, probably not. One that they should, yes. But it's still applicable only to humans or something sufficiently similar. I've tried to expand the group past the limited 'just humans'. There are higher goals than human goals. Interesting to explore them.
I have proposed to expand the group to all conscious beings.
Why the word 'being'? What distinguishes a being from a non-being? Sure, it seems pretty straightforward with the sample of one that we have (it's a being if you're related to it), but that falls apart once we discover a new thing on some planet and have to decide if it's a being or not.
Historically, the highest level of conscious beings has been increasing with time.
The Fermi paradox wouldn't be there if that were true. Yes, it appears nothing on earth has been as sentient as us. Can't say 'highest conscious', because we've no measure of that. There are plenty of species with larger brains or better senses, either of which arguably make them more conscious.
Who knows what humans will evolve into in the distant future.
If we survive the Holocene extinction event, who knows indeed. Intelligence is currently trending downward, but that may reverse if it once again carries an advantage.
By being a relativist, do you think that the perpetrators of 9/11 were moral in their own respect because they followed the moral rules of their group?
Yes, they considered their acts as the ultimate moral act, as did those that taught them it. They laid down their lives for this greater goal.
What about human sacrifice by the Aztecs? The Holocaust by the Nazis? Slavery by the Confederacy? Human cannibalism in some cultures?
I am not very familiar with the teachings of all these cultures, but one culture oppressing some other culture has been in the moral teachings of most groups I can think of, especially the religious ones. My mother witnessed the Holocaust and currently votes for it happening again. It only looks ugly in hindsight, and only if you lose. Notice everyone vilifies Hitler, but Lenin and Stalin get honored tombs, despite killing far more Jews and others they felt were undesirables. Translation: It is immoral to lose.
Why the word 'being'? What distinguishes a being from a non-being? Sure, it seems pretty straightforward with the sample of one that we have (it's a being if you're related to it), but that falls apart once we discover a new thing on some planet and have to decide if it's a being or not.
You can use other words such as 'things' if you'd like to. The main criterion is that they exist in objective reality, which can be verified by other intelligent things, not just in imagination. Hence if you discover a new thing on some planet, you can be sure that it is a thing, whether or not it is intelligent.
I've been taught them by parents, community, employer, etc.
How do you resolve it when some of their teachings contradict each other?
What if the ebola virus were as sentient as us? What would the moral code for such a species be like? Would it be wrong for them to infect and kill a creature? Only if it's a human? I read a book that included a sentient virus, and also an R-strategist intelligence and more. Much of the storytelling concerned the conflicts in the morals each group found obvious.
I've said that consciousness is multidimensional. But one of the most important factors is the capability to make plans for the future. This requires the agents to make a simulation of objective reality in their minds, which means they have body parts dedicated to making arrangements in such a way as to represent their environment, including other agents. Agents with self-awareness have the capability to conceive a representation of themselves in their minds.
So the subject doesn't know if what it's doing is right or wrong. Does this epistemological distinction matter? If some action is wrong, then doing that action is wrong, period, regardless of whether the thing doing it knows it's wrong or not.
Actions with bad consequences are wrong. Actions known to have bad consequences, but done anyway, are immoral.
What does wrong mean, anyway? Suppose I do something wrong, but don't know it. What does it mean that I've done a wrong thing? Sure, if there is some kind of consequence to be laid on me due to the action, then there's a distinction. I take the wrong turn in the maze and don't get the cheese. That makes turning left immoral, but only if there's a cheese one way and not the other? Just trying to get a bit of clarity on 'right/wrong/ought-to'.
I am not very familiar with the teachings of all these cultures, but one culture oppressing some other culture has been in the moral teachings of most groups I can think of, especially the religious ones. My mother witnessed the Holocaust and currently votes for it happening again. It only looks ugly in hindsight, and only if you lose. Notice everyone vilifies Hitler, but Lenin and Stalin get honored tombs, despite killing far more jews and other undesirables. Translation: It is immoral to lose.
Morality would indeed look clearer in retrospect. But it is possible to make a moral judgment in advance, provided that we have a sufficient amount of information, so we can predict with sufficient accuracy and precision what would happen if an action is done. A conscious being at the level of Laplace's demon could judge moral actions universally.
You can use other words such as 'things' if you'd like to.
I think 'agent' is a good word. A rock has no particular agency. It needs the ability to make a choice and act on it. A slave arguably has no agency. If it does exactly as it is instructed, its moral responsibility rests on the instructor, not on the slave.
How do you resolve it when some of their teachings contradict each other?
By concluding that morals are not universal. For one, a higher goal takes priority over a lower one when they indicate contradictory choices to be made. Even simple devices work that way.
Actions with bad consequences are wrong. Actions known to have bad consequences, but done anyway, are immoral.
In the case above, the high priority goal makes one choose an action that violates the lower priority goal, hence an action that is bad (for a greater good). Your statement above asserts that such actions are immoral. For instance, I injure a child (bad consequence) as a surgeon to prevent that child from dying of appendicitis. Your statement at face value says this is an immoral action. Better to do nothing and let the child die (worse consequence, but not due to explicit action on your part), leaving you morally intact, except doing nothing is also a choice. Maybe get a different surgeon to do the immoral thing of saving this kid's life.
If someday it can be demonstrated that some viruses can reach that level of complexity, then so be it.
I'm not asserting that this is the case (although some use the facilities of the infected host, as does rabies). You're missing the point of the question. Suppose a species has all these facilities, and knows that it is effectively a parasitical pestilence. Should that knowledge affect its choices, taking priority over its inherent nature?
But if they show a tendency to destroy other conscious agents, especially those with a higher level of consciousness, they must be fought back.
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us because we're not as conscious as they are. There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement. If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Actions with bad consequences are wrong.
Yes, by definition, actions with bad consequences are wrong. How in any way is this relevant to the discussion? If a consequence is deemed bad only by some group, then it is wrong only relative to that group. If it is bad, period, then it's universal, but you've made no argument for that case with the statement here. I'm trying to get the discussion on track.
"despite killing far more jews and other undesirables." I certainly agree that the number of Jews who died as a result of the actions of Lenin and Stalin was as great as the number whose deaths were caused by Hitler and the Nazi regime, but you seem to have labelled them as "undesirables". I think an edit might be appropriate.
Changed it to "others they felt were undesirables", which is how I meant it.
By concluding that morals are not universal. For one, a higher goal takes priority over a lower one when they indicate contradictory choices to be made. Even simple devices work that way.
How do you determine which priority is the higher one? Have you found the highest one?
In the case above, the high priority goal makes one choose an action that violates the lower priority goal, hence an action that is bad (for a greater good). Your statement above asserts that such actions are immoral. For instance, I injure a child (bad consequence) as a surgeon to prevent that child from dying of appendicitis. Your statement at face value says this is an immoral action. Better to do nothing and let the child die (worse consequence, but not due to explicit action on your part), leaving you morally intact, except doing nothing is also a choice. Maybe get a different surgeon to do the immoral thing of saving this kid's life.
I have said in my previous posts that universal morality is based on the eventual result. Some actions are morally better than others, and we should not fall into a false dichotomy. Performing surgery on the child is morally better than letting them die. It would be morally better still if you could perform a medical procedure which does not injure the child.
How do you determine which priority is the higher one?
Your reply below seems to assume an obvious priority, but I love putting assumptions to the test.
Performing surgery on the child is morally better than letting them die.
While I agree, how do you know this is true? I can argue that it is better to let the kid die if there is a higher goal to breed humans resistant to appendix infections, like the Nepalese have done. I can think of other goals as well that lead to that decision. There seems to be no guidance at all from some universal moral code. I don't think there is one, of course.
We'll be able to correct defects by gene editing in the future, so there's no need for any approach like eugenics to improve the species.
Gene editing is currently considered very unethical, but then not as much as the passive eugenics I suggested, so point taken.
As for a universal moral code, I've already provided it several times in this thread without anyone appearing to notice. Morality is mathematics applied to harm management, and it's all about calculating the harm:benefit balance.
OK, that's at least an attempt to word things in some universal manner.
It only applies to sentient things, but it applies to all of them, fleas and intelligent aliens all included.
You list a flea as sentient, which is a refreshing contrast to the usual 'just like me' definition. Why? Perhaps since it has a rudimentary mechanism to make choices. That's why I've used the word 'agent' in prior posts. A rock is not considered an agent of choice. A tree might be, but it gets difficult to justify it. How about a self-driving car? It meets the definition of slave. Does a true slave carry any moral responsibility? I almost say no.
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
I haven't read the entire thread. How has the response to this been? It's a good attempt. It's just that harm seems subjective. What's good for X is not necessarily good for Y, so its measure seems context-dependent.
Is there a way to compute harm without being relative to a peer group? Humans seem to be causing a lot more harm than benefit, with an estimated genocide of 80% of the species on the planet in the Holocene extinction event. Any harm to a species like that would probably be viewed as a total benefit by all these other species.
Does the species need to consider the harm done to the environment/other species, or only harm done to its own kind? What if it has no concept of 'species' or 'kind', or possibly not even 'individual' or 'agent'?
Yes, by definition, actions with bad consequences are wrong. How in any way is this relevant to the discussion? If a consequence is deemed bad only by some group, then it is wrong only relative to that group. If it is bad, period, then it's universal, but you've made no argument for that case with the statement here. I'm trying to get the discussion on track.
I tried to make a distinction between wrong and immoral. If you take only the first half of the statements, it is no surprise that it doesn't look relevant to the discussion.
Your question above has been answered by David. I just want to add that actions are valued by their effectiveness and efficiency. Actions are considered effective if they can achieve the goal, and more efficient if they use fewer resources.
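As a small sketch of how that effectiveness-then-efficiency valuation could be applied (my own framing; the candidate actions and the scores are placeholders, not from the thread):

# Rough sketch: rank candidate actions by effectiveness first, then by efficiency.
# The actions and scores below are illustrative placeholders only.
actions = [
    {"name": "do nothing",        "p_goal_achieved": 0.0, "resources_used": 0.0},
    {"name": "attempt disarming", "p_goal_achieved": 0.5, "resources_used": 1.0},
    {"name": "warn one floor",    "p_goal_achieved": 0.1, "resources_used": 0.5},
]

# Effective actions achieve the goal with high probability; among equally
# effective options, the one using fewer resources (more efficient) is preferred.
ranked = sorted(actions, key=lambda a: (-a["p_goal_achieved"], a["resources_used"]))
for a in ranked:
    print(f'{a["name"]}: effectiveness {a["p_goal_achieved"]}, resources {a["resources_used"]}')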
I personally have died 3.5 times, or at least would have were it not for the intervention of modern medicine. My wife would have survived until the birth of our first child. The human race is quite a wreck since we no longer allow defects to be eliminated, and we're not nearly as 'finished' as most species that have had time to perfect themselves to their niche.
The point of the thread seems to be to argue why an action might be bad in all cases, and there has been little to back up this position. The examples all seem to have had counter-examples. All the examples of evil have been losers, never something that your people are doing right now, like say employing sweatshop child labor for the clothes you wear. It's almost impossible to avoid since so much is produced via various methods that a typical person would find inhumane, and hard to see since you're paying somebody else to do (and conceal from you) the actual act. At least that is an example of something done by the winner.
I guess I can't expect anyone who newly joined this discussion to follow all the conversations from the start. As the title might suggest, this thread is meant to look for a universal standard to evaluate moral actions in as diverse situations as possible. I want to answer why an action can be considered moral in some situations but immoral/less moral in other situations.
You also need to decide if consciousness is relevant in a continuous or binary way. If it's relative, then it isn't immoral for an adult to harm a child, since you've said a child (or an elderly person) has a lower level of consciousness than the adult. If it's a threshold thing (do what you want to anything below the threshold, but not above it), then it needs a definition. A human crosses the threshold at some point, and until he does, it isn't immoral to do bad things to him.
I have said several times already that universal morality is evaluated from the eventual result, with complete relevant information available. Otherwise, we must deal with probability based on available information.
For instance, a human embryo obviously has far less consciousness than does a pig, so eating pork is more wrong than abortion by this level-of-consciousness argument, be it a spectrum thing or binary threshold.
Similarly, it's OK to kill a person under anesthesia because they're not conscious at the time, and will not suffer for it. These are some of the reasons the whole 'conscious' argument seems to fall apart.
But the expansion is restricted by the consciousness level of the group members, because only conscious beings can follow moral rules. Otherwise, it would be immoral for humans to eat animals as well as vegetables, since this action is bad for them.
Morality applies to all sentiences and it should be applied by all intelligences that are capable of calculating it. Many humans are not good at calculating it, and some are little better at it than other animals, but their inadequacy doesn't make it right to kill and eat them. It might be just as bad to torture a fly as to torture a human because it isn't about intelligence, but sentience: the pain may feel the same to both. It's all about how much suffering is involved. If you're comparing the killing of a fly versus the killing of a human though, there's inordinately more suffering caused by the latter due to all the other people who are upset by that, and by the loss of potential life.
When someone suggests that you should follow a rule X, a natural response would be: what is the expected consequence if we follow X, and why is it good for you? What if we ignore it, and why is it bad?
The three strategies used during detailed design to prevent, control or mitigate hazards are:
Passive strategy: Minimise the hazard via process and equipment design features that reduce hazard frequency or consequence;
Active strategy: Engineering controls and process automation to detect and correct process deviations; and
Procedural strategy: Administrative controls to prevent incidents or minimise the effects of an incident.
So if aliens with higher consciousness (as you put it) come down to Earth, it would not be immoral for them to harvest humans for food or perform painful procedures on us because we're not as conscious as they are. There's no shortage of fictional stories that depict this scenario, except somehow the aliens are portrayed as evil. You would perhaps differ, given the above statement. If they're higher on the ladder of consciousness, then it isn't wrong for them to do to us as they wish.
Any aliens with the ability to perform interstellar travel are very unlikely to have developed the required technology as individuals. They are most likely the product of a society, which had its own struggles in the past, competitions and conflicts among its members. They might have experienced devastating wars, famines, and natural disasters. They might also have developed weapons of mass destruction such as nuclear and chemical weapons. They must have survived all of those, otherwise they wouldn't be here in the first place. They must have developed their own moral rules, and might have even figured out universal morality by expanding the scope and applicability of their rules. They might have their own version of PETA or vegan activists, and genetically modified bacteria to produce their food, or, even better, 3D-print their food using nanotechnology. They might have modified their own bodies so that they don't depend on external biological systems just to survive.
Evaluation of a moral action is based on the eventual result, not just the immediate consequence. For example, killing every plant can eventually lead to the extinction of macroscopic animals, including humans. Hence it is morally worse than directly killing one individual human being.
Here is another example to emphasize the need to evaluate morality from the eventual result, rather than direct consequences. Most of us agree that the sun is not a conscious being. But it would be immoral to turn the sun into a black hole just for fun, while knowing that this action would lead to the death of all currently known conscious beings.
A rock, tree or self-driving car is not a sentience.
There is a lot to discuss in your long post, but this one stood out. Why is a flea a sentience but an AI car not one? Surely the car is entrusted with moral decisions that nobody would ever entrust to a flea. The only thing the flea has that the car doesn't is that you and the flea share a common ancestor, and even that doesn't explain why 'tree' is on the other side of the line. The car is a reasonable example of an alien, something with which you don't share an ancestry, and right off you assert that it isn't a sentience, seemingly because it isn't just like you.
They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings...
Welcome to our discussion.
Being a meme, the universal moral standard shares space in the memetic pool with other memes. They will have a higher chance to survive if they can optimize the distribution of resources to preserve conscious beings.
Efforts to discover the universal goal can be made using a top-down or bottom-up approach. Your statement above seems to lean more toward the bottom-up approach, similar to my original attempts in another thread: https://www.thenakedscientists.com/forum/index.php?topic=71347.0
To answer why keeping conscious beings in existence is a fundamental moral rule, we can apply a method called reductio ad absurdum to its alternative.
I'll try to summarize the discussion here in a more deductive form of reasoning and then compile it in a Euclidean style of writing.
Imagine a rule that actively seeks to destroy conscious beings. It's basically a meme that self-destructs by destroying its own medium. And conscious beings that don't follow the rule to actively keep themselves (or their copies) in existence will likely be outcompeted by those who do, or be struck by random events and cease to exist.
Why is a flea a sentience but an AI car not one?
First, let's start with a rock. A rock may be sentient in that every fundamental particle in it may be sentient. Can we torture the rock? We could maybe throw it into a lava lake to torture it with high heat, but there's a lot of rock in that state all the time deep in the Earth. Maybe it's all in agony all the time. We should maybe throw all material into a black hole as that might stop the suffering by slowing its functionality to a halt. Maybe that's the best way to end all the extreme suffering that might, for all we know, be going on in the universe wherever there is matter.
The self-driving car may be sentient in the same way as the rock. Every particle in us could be sentient in the same way too, and most of it could be in extreme agony all the time without us knowing - we can't measure how it feels. The only sentient thing that we think we can measure is somewhere in our own brain. We have an information system in there which generates data that makes assertions about what that sentience is feeling. We don't know what evidence that information system is using when it makes its measurements, but it looks impossible for its assertions about sentience to be competent - it should not have any way of measuring feelings and knowing that they are feelings. It should be unable to tell whether they are pleasant feelings or unpleasant ones. Its assertions about feelings cannot be trusted to be anything more than fiction. However, we must also err on the side of caution and consider the possibility that the assertions may somehow be true. We will find out for certain when we can trace back the assertions about feelings in the brain to see how that data was put together and what evidence it was based on. In doing that, we might find some magical quantum mechanism which does the job.
It will most likely be in most creatures that have a brain and a response to damage with any kind of response that makes it look as if it might be in pain.
So you want it to writhe in a familiar way in response to harm. I agree that the self-driving car does not writhe in a familiar way. I watched a damaged fly, and it seemed more intent on repairing itself than on gestures of agony.
A self-driving car's brain is a computer which works in the same way as the computer on a desk. There is no sentience involved in its processing.
That's just an assertion. How do you know this? Because it doesn't writhe in a familiar way when you hit it with a hammer? You just finished suggesting that fundamental particles are sentient, and yet a computer on my desk (which has moral responsibility, and not primarily to me) does not.
If such a machine generates claims that it is sentient and that it's feeling pain
A rock can do that. I just need a sharpie. How does a person demonstrate his claim of sentience (a thing you've yet to define)? A computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.
or that it feels the greenness of green, then it has been programmed to tell lies.
How do you convince the alien that you're not just programmed to say 'ouch' when you hammer your finger, assuming quite unreasonably that they'd consider "ouch" to be the correct response?
Are you arguing that rock or car protons are different from the ones in fleas? If not, I don't know why you brought up the prospect of suffering of fundamental particles, especially since those particles move fairly freely into and out of biological things like the flea.
As for all these comments concerning suffering, you act like it is a bad thing. If there was a pill that removed all my pain and suffering (there is), I'd not take it, because it's there for a reason. It would be like voluntarily removing my physical conscience, relying instead on rational reasoning to not do things that are wrong. I still have all my fingers because I have pain and suffering (and not for lack of trying otherwise).
Thus it is not wrong for an alien to injure us since we don't react to the injury in a way that is familiar to them.
The rules only apply to things that are 'sufficiently just like me'.
Similarly, if a person commits some crime, then creates an exact replica of himself and destroys the original person, the replica is still guilty of the crime despite the fact that the actual body that performed the crime is gone. The information is preserved and the information is what is guilty. So a thing that processes/retains information seems capable of doing things that can be classified as right or wrong. Just my observation.
You seem to define a computer to be not sentient because it does a poor job of mimicking a person. By that standard, I'm not as sentient as a squirrel because I've yet to convince one that I am of their own kind. I fail the squirrel Turing test. It can be done with a duck. I apparently pass the duck Turing test.
If suffering happens, and if a compound object can suffer, that cannot happen without at least one of the components of that compound object suffering. A suffering compound object with none of the components feeling anything at all is not possible.
By reductio ad absurdum, that indeed implies that a proton can suffer, and only because at least one of its quarks isn't contented. I see no way to relieve the suffering of a quark since I've no idea what needs it has that aren't getting met.
Torture is universally recognised as immoral.
It is not. I see nothing in the universe that recognizes any moral rule at all. Not saying there isn't one. That said, there are human cultures that don't find torture immoral. Most are satisfied if they get the benefit of the torture without the direct evidence that it's going on. Immoral to kill your neighbor, but not immoral to hire a hitman to do it, so long as you don't watch.
Then you think it's moral for aliens to torture people?
A moral code is not likely to assert that one is obligated to torture something, but that's the way you word the question. So no. I was commenting that by the rules you are giving me, it wouldn't be immoral for them to torture us.
All the particles of the machine could be sentient, but they may be suffering while the machine generates claims about being happy, or they may all be content while the machine generates claims about being in agony.
Maybe your protons are also in a different state than the one you claim, so it seems that the state of the protons is in fact irrelevant to how I treat the object composed of said protons.
The claims generated by an information system have no connection to the sentient state of the material of the machine.
Ah, there's the distinction I asked for. You claim a thing is 'sentient' if it has a connection with the feelings of its protons, and a computer doesn't. How do you justify this claim, and how do you know that the protons are suffering because there's, say, too much pressure on them? The same pressure applied to different protons of mine seems not to cause those particular protons any discomfort. That's evidence that it's not the protons that are suffering.
It is not "just" an assertion. It is an assertion which I can demonstrate to be correct. A good starting point though would be for you to read up on the Chinese Room experiment so that you get an understanding of the disconnect between processing and sentience.Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
QuoteA computer already has demonstrated that it bears moral responsibility, so if it isn't sentient, then sentience isn't required for what a thing does to do right or wrong.Correct. Sentience is not needed by something that makes moral decisions.You brought up sentience in a discussion of universal morals. If it isn't needed, then why bring it up?
A rock is made of the same particles, and you say it isn't capable of suffering...
Chinese Room experiment has different interpretations, and has nothing to do with the suffering of particles.
Anyway, in some tellings, the guy in the room has a lookup table of correct responses to any input. If this is the algorithm, the room will very much be distinguishable from talking to a real Chinese speaker. It fails the Turing test.
If it doesn't fail the Turing test, then it passes the test and is indistinguishable from a real person, which makes it sentient (common definition, not yours).
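To make the lookup-table version of the room concrete, here is a rough Python sketch (my own illustration; the table entries and the fallback are invented). A pure lookup answers the phrases it was given and gives itself away on anything else, which is why it fails the Turing test:

# Toy lookup-table "room": canned replies only, no understanding.
CANNED_REPLIES = {
    "ni hao": "ni hao!",                 # greeting -> greeting
    "do you play chess?": "a little.",
}

def room_reply(message: str) -> str:
    # No conversation memory and no rule that tracks meaning:
    # anything outside the table exposes the scheme.
    return CANNED_REPLIES.get(message.lower().strip(), "...")

print(room_reply("Ni hao"))                              # looks convincing
print(room_reply("What did we talk about yesterday?"))   # gives itself away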
QuoteThe sentience is not to blame because it is not in control: there is no such thing as free will.Ah. The sentience definition comes out. As you've been reluctant to say, you're working with a dualistic model, and I'm not. My sentience (the physical collection of particles) is to blame because it is in control of itself (has free will). Your gob of matter is not to blame because it is instead controlled by an outside agent which assumes blame for the actions it causes. The agent is to blame, not the collection of matter.
Anyway, the self-driving car is then not sentient because it hasn't been assigned one of these immaterial external agents. My question is, what is the test for having this external control or not? How might the alien come down and know that you have one of these connections and the object to your left does not? The answer to this is obvious. The sentient object violates physics, because if it didn't, its actions would be a function of physics, and not a reaction to an input without a physical cause. Show me such a sensory mechanism in any sentient thing then.
In fact, there is none since a living thing is engineered entirely wrong for an avatar setup like that. If I want to efficiently move my arm, I should command the muscle directly and not bother with the indirection from a remote location. Nerves would be superfluous. So would senses since the immaterial entity could measure the environment directly, as is demonstrably done by out-of-body/near-death experiences.
Anyway, I had not intended this to be a debate on philosophy of mind. Yes, the dualistic model has a completely different (and untestable) set of assumptions about what the concept of right and wrong means. Morals don't come from the universe at all. They come from this other realm where the gods and other assertions are safely hidden from empirical inquiry.
Science has no model that can make sense of sentience - it looks as if there can be no such thing. If we decide that that's the case, then there can be no such thing as suffering and there is no role for morality.
Protecting sentient things is the purpose of morality. Calculating morality does not require the calculator to be sentient.That requires sentience to be defined objectively.
How do you define fundamental things? When you reach them, their definitions are always circular. All you have is how they relate to other things.You can compare fundamental things of one object to another. For example, which rock has more mass or volume.
A rock may be sentient in that every fundamental particle in it may be sentient.But we know the rock isn't sentient since none of its particles exhibits free will. If any particle was suffering, it could put itself in a situation where this was not the case. Since it isn't doing that, either it isn't sentient or the thing is completely contented.
How do you compare and relate sentience to other things?
Likewise, the motion of the particles in my body can be described by the laws of physics. Not a single proton seems to be exerting free will. Hence I cannot be sentient (your definition).
What prevents me from flying like superman? I will that, yet cannot bring it about. My free will does not seem to have any ability to override physics, yet you claim otherwise when contrasting yourself to the actions of computers that, lacking said sentience, are confined to the laws of physics.
I did not intend to debate morality from a dualist perspective. The perspective is religious (inherently non-empirical) and that typically has morality pretty much built in. I don't deny that. I just find your particular flavor of it self-contradictory.
Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). (http://en.wikipedia.org/wiki/Sentience)
Definition of sentience (https://www.merriam-webster.com/dictionary/sentience)
1: a sentient quality or state
2: feeling or sensation as distinguished from perception and thought
Definition of consciousness (https://www.merriam-webster.com/dictionary/consciousness)
1a: the quality or state of being aware especially of something within oneself
b: the state or fact of being conscious of an external object, state, or fact
c: AWARENESS
especially : concern for some social or political cause
The organization aims to raise the political consciousness of teenagers.
2: the state of being characterized by sensation, emotion, volition, and thought : MIND
3: the totality of conscious states of an individual
4: the normal state of conscious life
regained consciousness
5: the upper level of mental life of which the person is aware as contrasted with unconscious processes
Nothing exhibits free will.I think I have misread your position. You say nothing has free will, but haven't defined it.
It seems that you consider sentience to be a passive experiencer, lacking any agency in the physical world. Morals are there as obligations to these external experiencers, to keep your movie audience contented so to speak.
Perhaps I am wrong about this epiphenomenal stance. Kindly correct me if I've again got it wrong.
QuoteYou say nothing has free will, but haven't defined it.Free will depends on an injection of magic somewhere to get round the problem of everything that happens having a cause.OK, you define free will as 1) having this external thing (what you call a sentience), and 2) it having a will and being able to exert that. This actually pretty much sums up the concept from a typical dualist, yes.
I on the other hand would describe that situation as possession, where my will is overridden by a stronger agent, and its freedom taken away. You don't describe possession. The body retains its physical will and this 'sentience' gets its jollies by being along for the ride.
That said, you seem aware of the 'magic' that needs to happen. Most are in stark denial of it, or posit it in some inaccessible place like the pineal gland, despite the complete lack of neurons having their shots called by it.
You're an epiphenomenalist, a less mainstream stance.
Why do you posit it then? Seems like the equivalent of positing the invisible pink unicorn that's always in the room. If there's no distinction between the presence or absence of a thing, why posit it?
Why might you not have many of them, a whole cinema full all taking the same ride?
Most people don't define sentience as an epiphenomenal passenger, so most don't base their moral decisions on how it will make the unicorn feel.
When I said it "depends on an injection of magic", I was ruling out free will on that basis - not endorsing it.Understood, but this is only true given a free-will definition that involves this kind of magic going on, instead of somebody else that considers free will to be not remote controlled.
I just see a whole lot of causation from the outside interacting with causation from the set up of whatever's on the inside, and every part of it is dictated by physics.That sounds like a description of semi-deterministic physics.
If sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.This seems to be a contradictory statement. If I feel the warmth of green, I cannot cause the body to discuss said warmth without performing said magic on the physical body which supposedly is incapable of such feelings. If it has any causal role, there's magic going on.
Agree. So I find your definitions rather implausible for this reason. My view doesn't have this external passenger. The physical being is all there is and is sentient in itself (yes, a different definition of sentience), has free will because nothing else is overriding its physical will, and morality has a purpose because there are obligations to the physical thing.QuoteWhy do you posit it then? Seem like the equivalent of positing the invisible pink unicorn that's always in the room. If there's no distinction between the presence or absence of a thing, why posit it?If there's no sentience, then torture is impossible and morality has no purpose.
Most people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.Nonsense. I don't think I need the unicorn to feel my own pain for me. That you propose this indicates that the idea is beyond your comprehension, and not just an interpretation with which you don't agree.
If they believe in sentience, the sentient thing that feels is what morality is there to protect.Almost nobody believes in the sort of sentence you describe. Typically it's a separate experiencer capable of said magic (think Chalmers), or in my case, a sentience composed of a physical process (Dennett, or whoever that hero is supposed to be).
When I said it "depends on an injection of magic", I was ruling out free will on that basis - not endorsing it.Understood, but this is only true given a free-will definition that involves this kind of magic going on, instead of somebody else that considers free will to be not remote controlled.
QuoteI just see a whole lot of causation from the outside interacting with causation from the set up of whatever's on the inside, and every part of it is dictated by physics.That sounds like a description of semi-deterministic physics.
QuoteIf sentience is real, it has a causal role: without that, it cannot possibly cause claims about feelings being felt to be generated. It is still just a passenger though in that what it does is forced by the inputs.This seems to be a contradictory statement.
If I feel the warmth of green, I cannot cause the body to discuss said warmth without performing said magic on the physical body which supposedly is incapable of such feelings. If it has any causal role, there's magic going on.
QuoteIf there's no sentience, then torture is impossible and morality has no purpose.Agree. So I find your definitions rather implausible for this reason. My view doesn't have this external passenger.
The physical being is all there is and is sentient in itself (yes, a different definition of sentience), has free will because nothing else is overriding its physical will, and morality has a purpose because there are obligations to the physical thing.
I also don't think morality is about pain and suffering. Everybody that says that makes it sound like life is some kind of horrible thing to have to experience. Pleasure and pain are means to an end. If pleasure and pain were the end (the point of morality), then we should just put everybody on heroin. Problem solved. Recognizing the greater purpose isn't a trivial task.
QuoteMost people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.Nonsense. I don't think I need the unicorn to feel my own pain for me. That you propose this indicates that the idea is beyond your comprehension, and not just an interpretation with which you don't agree.
QuoteIf they believe in sentience, the sentient thing that feels is what morality is there to protect.Almost nobody believes in the sort of sentience you describe. Typically it's a separate experiencer capable of said magic (think Chalmers), or in my case, a sentience composed of a physical process (Dennett, or whoever that hero is supposed to be).
The same magic is required for it regardless where it's controlled from.Blatantly false. A roomba is controlled from within itself and it requires no magic to do so. It just requires magic if the control is to come from outside the physical realm.
Whatever causes something is itself caused and is forced to cause what it causes.Indeed. You make it sound like a bad thing. I thought about what it would be like if choices were not caused by input and prior state. I'd be dead in a day.
There is no such thing as choice in that whatever is chosen in the end was actually forced.If that were true, mammals would not have evolved better brains to make better choices, or to make say moral choices. We are ultimately responsible for our choices, as evidenced by what happens to those that make poor ones. Not sure what choice is if you don't think that's going on.
QuoteIt isn't. X causes Y, then Y causes Z --> X causes Z. Y causes Z but is forced to by X.Which one (X, Y, or Z) is the sentience (your definition)? I thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.
If you don't have something experiencing the feelings, you have no sentience there and the feelings don't exist either.Only true in your interpretation. I for instance never said there wasn't something experiencing my feelings. I just don't think it's a separate entity, passenger or otherwise. I'm fine with you disagreeing with it, but do you find inconsistency with it, without begging your own interpretation?
QuoteI also don't think morality is about pain and suffering. Everybody that says that makes it sound like life is some kind of horrible thing to have to experience. Pleasure and pain are means to an end. If pleasure and pain were the end (the point of morality), then we should just put everybody on heroin. Problem solved. Recognizing the greater purpose isn't a trivial task.Morality is about suffering AND the opposite.I find that thinking shallow. Heroin it is then, the most moral thing you can do to others. It minimizes suffering and maximizes pleasure, resulting in the optimum quality of life.
I wasn't commenting on your position. Your statement above concerned the camp that I'm in, implying that pain cannot be felt given a different interpretation of mind.Quote from: Halc... Don't attribute nonsense to me that comes out of your misreading of my position.Quote from: CooperMost people believe that pain is real and that they strongly dislike it. If you are in that camp, then you're a unicornist yourself.Nonsense. I don't think I need the unicorn to feel my own pain for me.
The only thing that actually matters here is that for feelings to be real, something real has to experience them, and that is a sentience. No sentient thing --> no feelings can be felt --> no role for morality --> you can try to torture anyone as much as you like and no harm can be done.Totally agree. That's all that matters for the purpose of this topic. I'm not the one that drove this discussion down into assertions about the interpretation of mind. Only a moral nihilist denies that feelings matter to anything, and I'm not in with that crowd.
Dennett appears to be a nihilistA word you seem to use for any monist position. You're begging your interpretation to draw this conclusion.
Mind you, I agree that if the physics of the universe is deterministic, then my choices are determined. I'm just saying that they're still choices.
Answer my question. How do you know about your passenger if it cannot make itself known to you?
The inputs are the X (inputs) that cause Y in the sentient thing, and Y then causes Z (outputs). In the course of Y happening, feelings are supposedly generated, and some of the outputs document that in some way.OK. Is Y the experiencing of the feelings, or is Y the physical feelings which are noticed by the sentient experiencer? I'm trying to figure out whether the physical feelings or the sentient experience of those feelings is what causes Z, the output.
QuoteI thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.I told you before that it has a causal role: the generation of data documenting the experience of sentience cannot be triggered without outputs from the sentience to inform the system that the experience happened.Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
I call it a passenger when referring to its lack of any useful causal role that can be produced just by going straight from X to Z. Here again you seem to deny the 'passenger' having a causal role, yet above you say it causes data about the feelings. If I avoid standing in the rain because it gives me discomfort, then the discomfort definitely plays a causal role in my choosing to seek shelter. There's not a direct link from rain to choice of seeking shelter if I don't know if the sentient experiencer prefers a wet environment or not. Some things clearly have a preference for it, like say robins.
if the physics of the universe is deterministicIn quantum theory, physics is not deterministic (or at least, not determinable by us).
Morality is ... a harm:benefit calculation in which the harm is ideally minimised and the benefit (all kinds of pleasure) maximisedAs I understand it, people with certain brain structures have psychopathic tendencies
Quote from: Halcif the physics of the universe is deterministicIn quantum theory, physics is not deterministic (or at least, not determinable by us).His assertion, not mine. And deterministic doesn't mean determinable.
However, in mammals, I think moral decisions arise at a higher level than the quantum level - it is encoded in the strengths of synapses.Agree that it's not a quantum thing at all. Quantum stuff always comes up because dualism needs a way to allow a non-physical will to effect changes in a physical world, and QM is where the argument lies as to whether such external interference is feasible.
From the start I said that sentience could be a property of all particles (stuff, energy), so in that sense it needn't be a passenger as it is the essential nature of that stuff.It seems to me that you used the Eastern philosophical definition of sentience.
Sentience is the capacity to feel, perceive, or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that require respect and care. The concept is central to the philosophy of animal rights because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.I had some issues regarding this view, as I stated in my previous post. What is the ultimate/terminal goal of moral rules derived from this view? What will happen if we ignore them? Why are they bad?
- So the synapses driving their morality are formed with various inputs from DNA, and development before and after birth. These synaptic weights can be modified by the individual based on their teaching, experiences, and deductions.I think this view is consistent with my thought experiment posted here
The next step for cooperating more effectively is splitting duties among colony members: some are responsible for defense, some for digesting food, etc. Though the cells are genetically identical, they can develop differently due to gene activation by their surroundings.
This requires longer and more complex genetic material in each of the organism's cells.
From neuroscience, we know that pain and pleasure are electrochemical processes in the nervous system. Hence seeking pleasure and avoiding pain should be treated as instrumental goals only, not as the terminal goals themselves; otherwise they become the inevitable victims of reward hacking, as in drug abuse. So they need the ability to distinguish objects in their surroundings and categorize them, so that they can choose appropriate actions. Some organisms develop a pain and pleasure system to tell whether circumstances are good or bad for their survival. They try to avoid pain and seek pleasure, which is basically assuming that pain is bad while pleasure is good.
Though there are times when it can be a mistake to seek pleasure and avoid pain, this rule of thumb mostly brings overall benefit to the organisms.
Avoiding pain can prevent organisms from suffering further damage which may threaten their lives, while seeking pleasure can help them get the basic needs for survival, such as food and sex.
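A minimal sketch of that reward-hacking point, with invented numbers: an agent that treats the pleasure signal as the terminal goal picks the high-pleasure action even though it erodes its own survival, while an agent that treats pleasure as instrumental stays alive:

# Two policies over the same toy world; the numbers are illustrative only.
ACTIONS = {
    # action: (pleasure gained, health change)
    "eat":  (1, 1),
    "drug": (10, -3),
}

def run(policy, steps=20):
    health, pleasure = 10, 0
    for _ in range(steps):
        pleasure_gain, health_change = ACTIONS[policy()]
        pleasure += pleasure_gain
        health += health_change
        if health <= 0:
            return pleasure, "dead"
    return pleasure, "alive"

pleasure_maximiser = lambda: max(ACTIONS, key=lambda a: ACTIONS[a][0])
survival_seeker = lambda: max(ACTIONS, key=lambda a: ACTIONS[a][1])

print(run(pleasure_maximiser))   # (40, 'dead') - the reward proxy gets hacked
print(run(survival_seeker))      # (20, 'alive')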
OK. Is Y the experiencing of the feelings, or is Y the physical feelings which are noticed by the sentient experiencer? I'm trying to figure out whether the physical feelings or the sentient experience of those feelings is what causes Z, the output.
I ask because of this:QuoteI thought it was a passenger and has no arrow pointing from it. If so, it has no causal role. If it has one, then there's magic going on.QuoteI told you before that it has a causal role: the generation of data documenting the experience of sentience cannot be triggered without outputs from the sentience to inform the system that the experience happened.QuoteHere you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
You say that physics is entirely deterministic, which means that output from something external to the physical system cannot cause any effects in said determined system. In your quote just above, you assert the opposite, that the system is being informed of data from non-physical sources, which would make it non-deterministic, or which makes the sentience part of the deterministic physical system, in which case it isn't two systems, but just one.
Your calculation of harm:benefit here has nothing to do with feelings. Moral rules based on pleasure and suffering as their ultimate goals are vulnerable to reward hacking (such as drugs) and exploitation by utility monsters.Neuroscience has shown that we can manipulate neurotransmitters to temporarily disable a human's ability to feel. Hence it is possible to kill a living organism, including a human, without involving any feeling on the part of the subject (see Coup de grâce), and thus without violating moral rules whose ultimate goal is to minimize pain and suffering while maximizing pleasure and happiness.
You can kill everyone humanely without them feeling anything, but that's clearly immoral if you're producing inferior harm:benefit figures, and you would be doing so if you tried that. Imagine that you are going to live the lives of everyone in the system, going round and round through time to do so. There are a thousand people on an island and one of them decides that he can have a better life if he kills all the others, and by doing it humanely he imagines that it's not immoral. He doesn't know that he will also live the lives of all those other people and that he will be killing himself 999 times. If he knew, he would not do it because he'd realise that he's going to lose out heavily rather than gain.
Of course, in the real world we don't believe that we're going to live all those lives in turn, but the method for calculating morality is right regardless: this is the way that AGI should calculate it. Morality isn't about rewarding one selfish person at the expense of all the others, but about maximising pleasure (though not by force - we don't all want to be drugged for it) and minimising suffering.
Also, we're setting things up for future generations. We care about our children's children's children's children's children, and we don't want to set up a system that picks one of them to give the Earth to while the rest are humanely killed. Morality isn't about biasing things in favour of one individual or group, but about rewarding all.
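Putting rough, assumed numbers on the island example shows how the "live every life in turn" calculation comes out:

# Illustrative scores only, not part of the original argument.
population = 1000
coexist_score = 50         # assumed quality-of-life score for each life
killer_alone_score = 80    # the lone survivor's improved life
cut_short_score = 0        # a life ended "humanely"

coexist_total = population * coexist_score                                 # 50000
massacre_total = killer_alone_score + (population - 1) * cut_short_score  # 80

print(coexist_total, massacre_total)
# Summed over every life he would have to live in turn, the "humane"
# massacre is a huge net loss, so the harm:benefit calculation rejects it.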
standing in the rain ... we could simply connect the input wire to both output wires and remove the black box and the exact same functionality is produced, including the generation of claims about feelings being experienced in the black box even though the black box no longer exists in the system.If you grew up with Scottish winters, standing in the rain is likely to give you hypothermia.
Sometimes morality is just applied to "my family", "my tribe"...And sometimes morality is just applied to "my species"
We know that killing a random person is immoral, even if we can make sure that the person doesn't feel any pain while dying. There must be a more fundamental reason to reach that conclusion, other than minimising suffering, because no suffering is involved here.
If you grew up with Scottish winters, standing in the rain is likely to give you hypothermia.
If you grew up in Darwin (Australia), standing in the rain cools you down a bit, and the water will evaporate fairly soon anyway.
With that I agree, but you are not consistent with this model.Quote from: HalcHere you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.If you don't have output from the sentience, it has no role in the system.
QuoteI also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.OK, this is different. If it is part of the physical system, why can't it play a role in the system? What prevents it from having an output?
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'. I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
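A small sketch of that causal chain (the details are my own invention): a stored pain trace feeds back into later choices, so the feeling has an output role rather than being a disconnected passenger:

class Carpenter:
    def __init__(self):
        self.caution = 0.0           # raised by past painful outcomes

    def swing(self, slips: bool) -> str:
        if slips and self.caution < 1.0:
            self.caution += 1.0      # the experience leaves a trace
            return "ouch"
        return "careful tap" if self.caution >= 1.0 else "confident swing"

me = Carpenter()
print(me.swing(slips=True))    # "ouch" - the painful experience happens
print(me.swing(slips=False))   # "careful tap" - behaviour changed by it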
And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there.I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings. Your stance seem to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.
QuoteI avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,Cannot competent? That seems a typo, but I cannot guess as to what you meant there.
unless there's something spectacular going on in the physics which science has not yet uncovered.A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science. In reality, there's more than one, but a serial line would do in a pinch. Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game.The set of all possible chess states does not represent a game being played. It wouldn't be an eternal structure if it did.
The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.In some cases we can, especially when the possible moves ahead are limited. That's why in high-level games, grandmasters often resign while they still have several moves left before inevitably falling into a checkmate position.
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts to approximate a more fundamental moral rule. The real fundamental moral rule must apply universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.
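A quick sketch of the chess analogy, with an assumed position: the material heuristic and the actual rule of the game can point in opposite directions:

# Material count is only a heuristic; the real rule is checkmate.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material(pieces: str) -> int:
    return sum(PIECE_VALUES[p] for p in pieces)

side_a = "KQQRRBBNNPPPPPPPP"     # overwhelming material
side_b = "KR"
a_is_checkmated = True           # assumed position: A's king is mated this move

print(material(side_a), material(side_b))             # heuristic says A is winning
print("B wins" if a_is_checkmated else "undecided")   # the real rule says otherwise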
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings.
Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.
On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection. Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.
The more fundamental rule is the one that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king and he will be just as happy as if he annihilated the other side without losing a piece of his own.Yes. It's written in the rules of the game. People tend to be more emotional when they are dealing with anthropomorphized objects, such as chess pieces. I don't see something like that in other games like Go, where the pieces are not anthropomorphized.
If you think my method for calculating morality doesn't work, show me an example of it failing.
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.
The thought experiment (https://en.wikipedia.org/wiki/Utility_monster)
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:
Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.[2]
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]
The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.
For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.
It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]
History
Robert Nozick, a twentieth century American philosopher, coined the term "utility monster" in response to Jeremy Bentham's philosophy of utilitarianism. Nozick proposed that accepting the theory of utilitarianism causes the necessary acceptance of the condition that some people would use this to justify exploitation of others. An individual (or specific group) would claim their entitlement to more "happy units" than they claim others deserve, and the others would consequently be left to receive fewer "happy units".
Nozick deems these exploiters "utility monsters" (and for ease of understanding, they might also be thought of as happiness hogs). Nozick poses utility monsters justify their greediness with the notion that, compared to others, they experience greater inequality or sadness in the world, and deserve more happy units to bridge this gap. People not part of the utility monster group (or not the utility monster individual themselves) are left with less happy units to be split among the members. Utility monsters state that the others are happier in the world to begin with, so they would not need those extra happy units to which they lay claim anyway.
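To put rough figures on the two "monsters" described above (the utilities below are invented, purely for illustration): total utilitarianism hands everything to the happy monster, while maximin hands everything to the unhappy one:

def total_utility(alloc, gain):
    # sum of everyone's gains
    return sum(gain[name] * alloc[name] for name in alloc)

def maximin_utility(alloc, gain, base):
    # utility of whoever ends up worst off
    return min(base[name] + gain[name] * alloc[name] for name in alloc)

gain = {"ordinary": 1, "happy_monster": 100, "unhappy_monster": 0.001}
base = {"ordinary": 10, "happy_monster": 10, "unhappy_monster": -100}
cookies = 10

for target in gain:
    alloc = {name: (cookies if name == target else 0) for name in gain}
    print(target, total_utility(alloc, gain), maximin_utility(alloc, gain, base))
# Total utility peaks when every cookie goes to the happy monster;
# maximin improves only when every cookie goes to the unhappy monster,
# even though it barely registers them. Each rule has its own monster.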
Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of ability to imitate human intelligence, not feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.If the inputs and outputs are identical, the box can be implemented as a pass-through box, which is indeed superfluous unless bypass is not an option. The phone lines in my street work that way, propagating signals from here to there with the output being ideally the same as the input.
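A minimal sketch of that pass-through point, assuming a box whose output simply equals its input: from outside, nothing distinguishes the system with the box from the system without it:

def with_box(signal, box=lambda x: x):    # identity "box"
    return box(signal)

def without_box(signal):
    return signal                         # input wired straight to output

for s in ["rain", "hammer blow", 42]:
    assert with_box(s) == without_box(s)
print("no external test tells the two systems apart")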
it would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box.The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.
http://magicschoolbook.com/consciousness (http://magicschoolbook.com/consciousness) - this illustrates the problem, and I've been trying to find an error in this for many years.The site lists 19 premises. Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples of many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot [be] competent,OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims? I don't think you meant to say that either, but that's how it comes out now. The claims (the posts on this site) are output by the information system, right? What else produces them? Maybe you actually mean it.
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?You don't know where the whole thing is?
If we run that information on a Chinese Room processor, we find that there's no place for feelings in it.The Chinese room models a text-only I/O. A real human is not confined to a text-only stream of input. It makes no attempt to model a human. If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.
With computation as we know it, there is no way to make such a model. We're missing something big.Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.How do you know what the output from the box means?I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for a system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and vice versa. It violates the rules of the data system, leaving the person blind and deaf.
How does the data system attribute meaning to that signal?Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.Look up a file? My, you sure know a lot more about how it works than I do.
In reality, there's more than one, but a serial line would do in a pinch.Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?You tell me. You're the one that compartmentalizes it into an isolated box like that. Not my model at all.
There's an information processing system in the black boxThen it isn't a black box.
and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them to it and creates an assertion about something which it cannot know anything about.Again, your model, not mine. I have no separation of information system and the not-information-system.
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?There's no reading of something outside the information system. My model only has the system, which does its own feeling.
How does it construct the data that documents this experience of feelingSounds like you're asking how memory works. I don't know. Not a neurologist.
where does it ever see the evidence that the feeling is in any way real?I (the information system) have subjective evidence of my feelings.
No it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think, we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.
I think we need to be clear about our definitions of the terms we use in this discussion, since subtle differences may lead to frustrating disagreements. I want to avoid implicit assumptions and taking for granted that our understanding of a term is the same as other participants'.
Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of ability to imitate human intelligence, not feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
Secondly, the mind of no living thing works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.
Concerning the way we've been using the term 'black box': you are describing a white box, since you are placing the feelings of the sentience in the box. A black box has no description of what is in the box, only a description of inputs and outputs. A black box with no outputs can be implemented with an empty box.
Those lines are not superfluous because my phone would not work if you took them away. You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs. If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.
If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.
The Chinese room models a text-only I/O. A real human is not confined to a text-only stream of input. It makes no attempt to model a human. If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.
QuoteHow do you know what the output from the box means?I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and vice versa. It violates the rules of the data system, leaving the person blind and deaf.
QuoteHow does the data system attribute meaning to that signal?Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.
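As a rough illustration of meaning coming from a shared interface rather than from understanding, here is a toy decoder loosely modelled on a classic 3-byte mouse report (buttons, x-movement, y-movement). The exact packet layout is a simplification for illustration, not a protocol specification.

```python
# Illustrative decoder: the computer attributes meaning to the mouse's bytes
# only because both sides follow the same agreed-upon report format.
# The layout here (buttons, dx, dy as signed bytes) is a simplification of a
# classic 3-byte mouse report, used purely for illustration.

def to_signed(byte):
    return byte - 256 if byte > 127 else byte

def decode_report(report):
    buttons, dx, dy = report
    return {
        "left": bool(buttons & 0x1),
        "right": bool(buttons & 0x2),
        "dx": to_signed(dx),
        "dy": to_signed(dy),
    }

if __name__ == "__main__":
    print(decode_report(bytes([0x01, 0x05, 0xFB])))
    # {'left': True, 'right': False, 'dx': 5, 'dy': -5}
```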
QuoteIf we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.Look up a file? My, you sure know a lot more about how it works than I do.
QuoteLet's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?You tell me. You're the one that compartmentalizes it into an isolated box like that. Not my model at all.
QuoteThere's an information processing system in the black boxThen it isn't a black box.
Again, your model, not mine. I have no separation of information system and the not-information-system.
QuoteDraw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?There's no reading of something outside the information system. My model only has the system, which does its own feeling.
QuoteHow does it construct the data that documents this experience of feelingSounds like you're asking how memory works. I don't know. Not a neurologist.
Quotewhere does it ever see the evidence that the feeling is in any way real?I (the information system) have subjective evidence of my feelings.
Whom do you mean by 'anyone'? Humans? What about animals and plants?
Why is pleasure good while pain is bad?
What about inability or reduced ability to feel pain or pleasure?
How many fewer children would be considered acceptable?
QuoteThe site lists 19 premises. Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.
That is predicated on the idea that the brain works like a computer, processing data in ways that science understands.Science does not posit the brain to operate like a computer. There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions. Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.Yes, it's the whole thing. It isn't a special piece of material or anything.
It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.Doesn't work that way. An eye arguably 'makes data', yet it isn't a device that 'knows' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this differently formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.
Didn't say otherwise, but all it does is run code. The processor doesn't know Chinese. But the system (the whole thing) does. There is no black box where the Chinese part is. There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.QuoteAll the [Chinese room] experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete.
We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience?This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.
Waving at something complex isn't good enough. You have no model of sentience.Pretty much how you're presenting your views, yes. My model is pretty simple actually. I don't claim to know how it works. Neither do you; you add more details than I do, but you still hide your complex part in a black box, as if you had an understanding of how the data-processing part worked.
but we do have models of neural nets which are equivalent to running algorithms on conventional computers.That we do.
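For what it's worth, here is a minimal sketch of that equivalence: a tiny hand-wired feed-forward net simulated with ordinary arithmetic on a conventional computer. The weights are chosen by hand so the net computes XOR; it is not a model of a brain, only a demonstration that a neural net reduces to an algorithm a normal processor can run.

```python
# Minimal sketch: a hand-wired 2-2-1 feed-forward net simulated with ordinary
# arithmetic. The weights are chosen by hand so the net computes XOR; this is
# not a brain model, it only shows that a neural net reduces to an algorithm.

def step(x):
    return 1 if x > 0 else 0

def tiny_net(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires if either input is on
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires only if both inputs are on
    return step(1.0 * h1 - 2.0 * h2 - 0.5)  # XOR of the inputs

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", tiny_net(a, b))
```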
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.This makes no sense to me since I don't model the sentience as a separate thing. There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.The first guess is closer. Somebody put out a standard interface and both computer and mouse adhere to that interface. Sensory organs and brains don't work that way, being evolved rather than designed. Turns out the sensory organ pretty much defines the data format, and the IS is really good at extracting meaning from any data. So we could in theory affix a 6th sense to detect vibrations of passing creatures, like the lateral line in fish. Run some nerves from that up the spine and the IS would quickly have a new sense to add to its qualia. Some people see a 4th color, and some only 2.
If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it it has to use a language.And even then, the computer only knows about the claim, not the feelings. You don't seem to be inclined to believe a computer mouse if it told you it had feelings.
If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.This again assumes feelings separate from the thing that reads it. Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.
I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.The part of the system putting that data together experiences the subjective feelings directly since it's the same system. No magic is needed for a system to have access to itself. The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings. You seem to ask how the hands know about the feelings. They don't. They do what they're told via the one puppet language they understand: Move thus. They have no idea that they're documenting feelings, and such documentation can be produced by anything (like a copy machine), so it's hardly proof of a particular documented claim.
And that's how you fool yourself into thinking you have a working model, but it runs on magic.I'm only fooling myself if I'm wrong, and that hasn't been demonstrated. My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.
The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it?Your model, not mine. You need magic because you're trying to squeeze your model into mine. Your statement above mixes layers of understanding and is thus word salad, like describing a system using classic and quantum physics intermixed.
QuoteThere's no reading of something outside the information system. My model only has the system, which does its own feeling.And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?For one, it already is data, so no conversion. I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.
I'm asking for a theoretical model. Science doesn't have one for this.A model of how memory works? I think they have some, but I'm not personally aware of them. It's just not my field. I mean, I'm a computer guy, and yet I'd have to look it up if I were to provide an answer as to how exactly the various kinds of computer memory work. For my purposes, I just assume it does.
QuoteI (the information system) have subjective evidence of my feelings.Show me the model.That is the model. One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.
QuoteFirst, the outputs are not the same as the inputsDidn't you say otherwise?
Quotethe inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.OK, this statement says the inputs can be fed into the outputs, but are not necessarily. It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.
Quotethere's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.This contradicts your prior statement.
1) How do you know about these lines? The answer seems awfully like something you just now made up.
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.
Where does the output of your black box go? To what is it connected? This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all. If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.
QuoteThe whole point of the black box is to draw your attention to the problem.More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.
QuoteIf the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.So you're admitting you don't have a proper white box model? Does anybody claim they have one?
Quotethey always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".I'm unaware of this wording. There are no 'routines' for one thing. They very much do have evidence as to mapping where much of this functionality goes on, but that isn't a model of how it works. It is a pretty good way to say which creatures 'feel' the various sorts of this to which humans can relate.
Some small nits. The information system processes only data (1). 3 says the non-data must first be converted to data before being given to the information system (IS), but 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data. As I said, that's just a nit.
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.
A counterexample comes up with 10, which says that data which is not covered by the rules of the IS cannot be considered by the IS. Not sure what they mean by 'considered' ...
... but take a digital signal processor (DSP) or just a simple amplifier. It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream. This is similar to the guy in the Chinese room. He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.
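A toy version of that counterexample, with invented values: the 'amplifier' below transforms a byte stream it has no way to interpret, and it works the same whether the bytes happen to encode Chinese text, audio samples, or noise.

```python
# Toy 'amplifier': processes a stream whose meaning it cannot interpret.
# Whether the bytes encode Chinese text, audio samples, or noise makes no
# difference to the processing, which is the point being illustrated.

def amplify(samples, gain=2, limit=255):
    # scale each value and clip, without ever 'knowing' what the values mean
    return bytes(min(limit, s * gain) for s in samples)

if __name__ == "__main__":
    chinese_text = "你好".encode("utf-8")   # meaningless to the amplifier
    audio_like = bytes([10, 20, 30, 200])   # equally meaningless to it
    print(amplify(chinese_text))
    print(amplify(audio_like))
```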
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that discover loopholes in it. Discovering those loopholes is what philosophers do.
No. Utilitarian theory applied correctly does not allow that because it actually results in a hellish life of fear for the utility monsters.
When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.
That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy because most philosophers are shoddy thinkers who fail to take all factors into account).
I don't know what that is, but it isn't utilitarianism because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.
If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
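To show the kind of aggregate comparison being appealed to here, the following sketch uses entirely invented figures (they are not anyone's measured "happiness numbers"): total pleasure under an abrupt, fear-inducing replacement versus a gradual, voluntary transition.

```python
# Invented numbers, purely to illustrate the kind of aggregate comparison
# described above: total pleasure under an abrupt, fear-inducing replacement
# versus a gradual, voluntary transition. Nothing here is a real measurement.

def total_utility(humans, monsters, fear_penalty):
    human_utility = humans * 1.0          # baseline pleasure per human
    monster_utility = monsters * 100.0    # monsters enjoy life 100x more
    return human_utility + monster_utility * (1.0 - fear_penalty)

# Abrupt replacement: humans wiped out, monsters live in fear of being next.
abrupt = total_utility(humans=0, monsters=1000, fear_penalty=0.8)

# Gradual voluntary transition: fewer humans remain, monsters feel no fear.
gradual = total_utility(humans=200, monsters=800, fear_penalty=0.0)

print(abrupt, gradual)  # 20000.0 vs 80200.0 under these invented figures
```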
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).You need to draw a line between sentient and non-sentient. Or assign numbers to allow us to measure and describe sentience, including partial sentience. The next step would be some methods to use those numbers to decide which options to take in morally conflicting situations.
They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned).[2] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards.[2] Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward.[2] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.[2]
https://en.wikipedia.org/wiki/Pleasure#Neuropsychology
The reward system contains pleasure centers or hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. As of October 2017, hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex.[3][4][5] The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. Microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking in these hotspots.[3] The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC.[5] On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists.[3]
Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot.[3][5] Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria.[6]
What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.A person gets brain damage that makes him unable to feel pain and pleasure, while still being capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?
Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.If we acknowledge that, currently, humans are not the most optimal form to achieve the universal moral goal, we also acknowledge that there are some things that must be changed. But we must be careful, since many changes lead to a worse outcome than the existing condition.
Rawls's version is widely recognized as one form of utilitarianism.
I don't think that a fundamental principle of morality should be based on symptoms.
Irrational is what they are. It means there's no point in engaging with an irrational data system, as you label it. Your whole moral code is based on a lie about feeling for which you claim no evidence exists.QuoteIt says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.
Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science.But you're pointing in there and saying sentience is not there, which is equally not science. Science is not saying "I don't know how it works, so it's in there". I in particular reference my subjective experience in making my claim, despite my inability to present that evidence to another.
We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect.Doesn't work. You can look at them all you want and understand exactly how it works, and still not see the sentience because the understanding is not subjective. The lack of understanding is not the problem.
Quote13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.QuoteVariables are data, but they are not ideas.I made no mention of variables. I said ideas seem to be data. You assert otherwise, but have not demonstrated it.
If sentience is a form of data, what does that sentience look like in the Chinese Room?Chinese room is not a model of a human, or if it is, it is a model of a paralyzed person with ESP in a sensory deprivation chamber. Any output from it that attempts to pass a Turing test is deceit.
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.You didn't really reply to this. You posted some text after it, but that text (above) was related to sentience being the processing of data and not to point 7, which implicitly assumes a premise of separation of 'conscious thing' and 'information system'.
If a multi-component system feels a feeling without any of the components feeling anything, that's magic.I was wondering where you thought the magic was needed. Now I know. I deny that it is magic. Combustion of a gas can occur without any of the electrons and protons (the components) being combusted. A computer can read a web page without any transistor actually reading the web page. Kindly justify your assertion.
We don't have any model for sentience being part of the systemDon't say 'we'. You don't have a model maybe.
The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage.Given damage data, what's the point of suppressing pleasure if the system that is in charge of minimizing the damage is unaware of either the pain or pleasure? This makes no sense given the model you've described.
And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it.You told me the animal cannot know the food tastes good. It just concludes it should eat it, I don't know, due to logical deduction or something.
QuoteMy model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.This presumes that 'feeling' and 'normal signal' are different things. I'll partially agree since I don't think any feeling is reducible to one signal, but signals involved with feelings are quite normal.
Nevertheless, the thing (the Chinese room system as a whole) is capable of its own sentience. The sentience is in the processing of the data, of course. It is not the data itself. Data can be shelved. Process cannot.
There are creatures that feel (in a crude manner) and yet lack the complexity (or the motivation) to document it, so they've no memory of past feelings.
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).That's the very problem identified by philosophers criticizing utilitarianism. How can you expect anyone else to agree with your thoughts when you don't clearly define what you mean by sentience, which you claimed to be the core idea of universal morality? At least you have to define a criterion to determine which agent is more sentient when compared to another agent. It would be better if you could assign a number to represent each agent's sentience, so they can be ranked at once. You can't calculate something that can't be quantified. Until you have a method to quantify the sentience of moral agents, your AGI is useless for calculating the best option in a moral problem.
The neutral feelings contribute nothing to total utility, hence the resources should be used optimally, which is to maximize positive feelings and minimize negative feelings.A person gets brain damage that makes him unable to feel pain and pleasure, while still being capable of doing normal activities. Is he still considered sentient? Does he still have the right to be treated as a sentient being? Why so?
If you've removed all of that from him, there could still be neutral feelings like colour qualia, in which case he would still be sentient. You could thus have a species which is sentient but only has such neutral feelings and they would not care about existing or anything else that happens to them, so they have no need of protection from morality. They might be programmed to struggle to survive when under attack, but in their minds they would be calmly observing everything throughout and would be indifferent to the outcome.
In the case of your brain-damaged human though, there are the relatives, friends and other caring people to consider. They will be upset if he is not protected by morality even if he doesn't need that himself.So if the brain-damaged human has no relative or friend who cares, e.g. an unwanted baby left by the parents, there would be no utilitarian moral reason to save him/her.
AFAIK, neuroscience has demonstrated that pain, pleasure, sadness, and happiness are electrochemical states of nervous systems, and humans already have a basic understanding of how to manipulate them at will. I think we can be quite confident in saying that rocks feel nothing, and are thus not sentient.
I'm not required to spell out what is sentient and in what ways it is sentient. That task is part of the calculation: what are the odds that species A is sentient, and how much does it suffer in cases where it suffers, and how much pleasure does it experience in cases where it enjoys things. AGI will make the best judgements it can about those things and then act on the basis of those numbers. It will look at rocks and determine that there is no known way to affect how any sentience that might be in any rock is feeling, so anything goes when it comes to interactions with rocks.
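A sketch of the sort of calculation described above, with placeholder species, probabilities and intensities: each affected being's estimated pleasure or suffering is weighted by the probability that it is sentient at all, and the option with the best expected total is preferred.

```python
# Sketch of the sort of calculation described above: each affected being's
# estimated suffering or pleasure is weighted by the probability that it is
# sentient at all, and the option with the best expected total is preferred.
# Species names, probabilities, and intensities are placeholder values.

def expected_welfare(impacts):
    # impacts: list of (probability_of_sentience, pleasure_minus_suffering)
    return sum(p * value for p, value in impacts)

option_a = [(1.00, -5.0),   # a human suffers moderately
            (0.80, +2.0)]   # a dog probably enjoys the outcome

option_b = [(1.00, +1.0),   # the human mildly benefits
            (0.05, -3.0)]   # an insect might suffer, but sentience is doubtful
            # a rock would contribute nothing either way (probability ~0)

best = max([("A", option_a), ("B", option_b)],
           key=lambda pair: expected_welfare(pair[1]))
print(best[0], expected_welfare(option_a), expected_welfare(option_b))
```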
It's AGI's job to work out those numbers as best as they can be worked out.Do you know how artificial intelligence works? Its creators need to define what its ultimate/terminal goal is. An advanced version of AI may find instrumental goals beyond the expectation of its creators, but they won't change the ultimate/terminal goal. I have posted several videos discussing this. You'd better check them out.
Neuroscience has demonstrated nothing of the kind. It merely makes assumptions equivalent to listening to the radio waves coming off a processor and making connections with patterns in that and the (false) claims about sentience being generated by a program.Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not conscious. It can determine if someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition. If you want to expand the scope of the term, that's fine. You just need to clearly state its new boundary conditions so everyone else can understand what you mean. Does your calculation include emotional states such as happiness, sadness, love, passion, anger, anxiety, lust, etc.?
You have claimed that the ultimate goal of morality is maximizing X while minimizing Y. But so far you haven't clearly defined what they are and their boundary conditions, so it's impossible for anyone else to definitively agree or disagree with you.
Neuroscience has demonstrated what brain activity looks like when someone is conscious and when someone is not conscious. It can determine if someone is feeling pain or not, pleasure or not. At least it can demonstrate sentience in the standard definition.
All it has demonstrated is correlation with something that may or may not be real. If you pull the plug on a machine that's generating false claims about being conscious, the false claims stop. The link between the claims being generated and particular patterns of activity in a processor do not determine that the claimed feelings in the system are real.
I wasn't talking about artificially intelligent machines here. It was experiments on living humans using medical instrumentation such as fMRI and brainwave sensors that can determine when someone is conscious or not, when they are feeling pain or not. We can compare the readings of the instrumentation and the experience of the human subjects to draw general patterns about what brain conditions constitute consciousness and feelings.
You are talking about biological machines which generate claims about consciousness which may not be true, just as a computer can generate claims about experiencing feelings (including one of awareness) without those claims being true. When you disrupt the functionality of the hardware in some way, whether it's a CPU or a brain, you stop the generation of those claims. You do not get any proof from that that you are narrowing down the place where actual feelings might be being experienced.Any instrumentation system has a non-zero error rate. There will always be a chance of either a false positive or a false negative. But as long as the error rate can be maintained below an acceptable limit (based on risk evaluation considering the probability of the error occurrence and the severity of the effects), the method can be legitimately used.
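A minimal sketch of that risk evaluation, using placeholder figures rather than real instrument data: an error mode is treated as acceptable when its expected harm (probability of the error times severity of its effect) stays below a chosen limit.

```python
# Minimal sketch of the risk evaluation mentioned above: an instrument's error
# mode is treated as acceptable when its expected harm (probability of the
# error times severity of its effect) stays below a chosen limit.
# All figures are placeholders, not real instrument data.

def acceptable(probability, severity, limit):
    return probability * severity <= limit

false_positive = {"probability": 0.02, "severity": 1.0}   # reports pain where there is none
false_negative = {"probability": 0.01, "severity": 10.0}  # misses real pain

for name, err in (("false positive", false_positive), ("false negative", false_negative)):
    print(name, "acceptable:", acceptable(err["probability"], err["severity"], limit=0.05))
```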
Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:
https://en.wikipedia.org/wiki/Intelligent_agent#Classes
1. simple reflex agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/9/91/Simple_reflex_agent.png/408px-Simple_reflex_agent.png)
2. model-based reflex agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/8/8d/Model_based_reflex_agent.png/408px-Model_based_reflex_agent.png)
3. goal-based agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/4/4f/Model_based_goal_based_agent.png/408px-Model_based_goal_based_agent.png)
4. utility-based agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Model_based_utility_based.png/408px-Model_based_utility_based.png)
5. learning agents
(https://upload.wikimedia.org/wikipedia/commons/thumb/0/09/IntelligentAgent-Learning.png/408px-IntelligentAgent-Learning.png)
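To ground that taxonomy, here is a small, invented Python sketch contrasting the first and fourth classes listed above: a simple reflex agent maps the current percept straight to an action through fixed rules, while a utility-based agent scores the predicted outcomes of candidate actions against a fixed (terminal) utility and picks the best one. The percepts, rules and utility function are made up for illustration.

```python
# Invented illustration of two of the agent classes listed above: a simple
# reflex agent maps the current percept straight to an action via fixed rules,
# while a utility-based agent scores the predicted outcomes of candidate
# actions and picks the one with the highest utility.

def simple_reflex_agent(percept):
    rules = {"obstacle": "turn", "clear": "forward"}
    return rules.get(percept, "wait")

def utility_based_agent(percept, candidate_actions, predict, utility):
    return max(candidate_actions, key=lambda a: utility(predict(percept, a)))

if __name__ == "__main__":
    # toy world: the state is just distance-to-goal; moving forward reduces it
    def predict(distance, action):
        return distance - 1 if action == "forward" else distance

    def utility(distance):
        return -distance  # the terminal goal stays fixed: minimise distance

    print(simple_reflex_agent("obstacle"))                                # turn
    print(utility_based_agent(5, ["forward", "wait"], predict, utility))  # forward
```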
I think it's good that you and others are still exploring this. We'll soon be able to put all the different approaches to the test by running them in AGI systems to see how they perform when applied consistently to all thought experiments. Many approaches will be shown to be wrong by their clear failure to account for some scenarios which reveal serious defects. Many others may do a half-decent job in all cases. Some may do the job perfectly. I'm confident that my approach will produce the best performance in all cases despite it being extremely simple, because I think I've found the actual logical basis for morality. I think other approaches are guided by a subconscious understanding of this too, but instead of uncovering the method that I found, people tend to create rules at a higher level which fail to account for everything that's covered at the base level, so they end up with partially correct moral systems which fail in some circumstances. Whatever your ideas evolve into, it will be possible to let AGI take your rules and apply them to test them to destruction, so I'm going to stop commenting in this thread in order not to lose any time that's better spent on building the tool that will enable that testing to be done.Thank you for your contribution to this topic. It's sad that you've decided to stop, but it's certainly your right.
In 2012, Oliver Scott Curry was an anthropology lecturer at the University of Oxford. One day, he organized a debate among his students about whether morality was innate or acquired. One side argued passionately that morality was the same everywhere; the other, that morals were different everywhere.
“I realized that, obviously, no one really knew, and so decided to find out for myself,” Curry says.
Seven years later, Curry, now a senior researcher at Oxford’s Institute for Cognitive and Evolutionary Anthropology, can offer up an answer to the seemingly ginormous question of what morality is and how it does—or doesn’t—vary around the world.
Morality, he says, is meant to promote cooperation. “People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them,” he says as lead author of a paper recently published in Current Anthropology. “Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do.”
For the study, Curry’s group studied ethnographic accounts of ethics from 60 societies, across over 600 sources. The universal rules of morality are:
Help your family
Help your group
Return favors
Be brave
Defer to superiors
Divide resources fairly
Respect others’ property
The authors reviewed seven “well-established” types of cooperation to test the idea that morality evolved to promote cooperation, including family values, or why we allocate resources to family; group loyalty, or why we form groups, conform to local norms, and promote unity and solidarity; social exchange or reciprocity, or why we trust others, return favors, seek revenge, express gratitude, feel guilt, and make up after fights; resolving conflicts through contests which entail “hawkish displays of dominance” such as bravery or “dovish displays of submission,” such as humility or deference; fairness, or how to divide disputed resources equally or compromise; and property rights, that is, not stealing.
The team found that these seven cooperative behaviors were considered morally good in 99.9% of cases across cultures. Curry is careful to note that people around the world differ hugely in how they prioritize different cooperative behaviors. But he said the evidence was overwhelming in widespread adherence to those moral values.
“I was surprised by how unsurprising it all was,” he says. “I expected there would be lots of ‘be brave,’ ‘don’t steal from others,’ and ‘return favors,’ but I also expected a lot of strange, bizarre moral rules.” They did find the occasional departure from the norm. For example, among the Chuukese, the largest ethnic group in the Federated States of Micronesia, “to steal openly from others is admirable in that it shows a person’s dominance and demonstrates that he is not intimidated by the aggressive powers of others.” That said, researchers who studied the group concluded that the seven universal moral rules still apply to this behavior: “it appears to be a case in which one form of cooperation (respect for property) has been trumped by another (respect for a hawkish trait, although not explicitly bravery),” they wrote.
Plenty of studies have looked at some rules of morality in some places, but none have attempted to examine the rules of morality in such a large sample of societies. Indeed, when Curry was trying to get funding, his idea was repeatedly rejected as either too obvious or too impossible to prove.
The question of whether morality is universal or relative is an age-old one. In the 17th century, John Locke wrote that if you look around the world, “you could be sure that there is scarce that principle of morality to be named, or rule of virtue to be thought on …. which is not, somewhere or other, slighted and condemned by the general fashion of whole societies of men.”
Philosopher David Hume disagreed. He wrote that moral judgments depend on an “internal sense or feeling, which nature has made universal in the whole species,” noting that certain qualities, including “truth, justice, courage, temperance, constancy, dignity of mind . . . friendship, sympathy, mutual attachment, and fidelity” were pretty universal.
In a critique of Curry’s paper, Paul Bloom, a professor of psychology and cognitive science at Yale University, says that we are far from consensus on a definition of morality. Is it about fairness and justice, or about “maximizing the welfare of sentient beings?” Is it about delaying gratification for long-term gain, otherwise known as intertemporal choice—or maybe altruism?
Bloom also says that the authors of the Current Anthropology study do not sufficiently explain the way we come to moral judgements—that is, the roles that reason, emotions, brain structures, social forces, and development may play in shaping our ideas of morality. While the paper claims that moral judgments are universal because of “collection of instincts, intuitions, inventions, and institutions,” Bloom writes, the authors make “no specific claims about what’s innate, what’s learned, and what arises from personal choice.”
So perhaps the seven universal rules may not be the ultimate list. But at a time when it often feels like we don’t have much in common, Curry offers a framework to consider how we might.
“Humans are a very tribal species,” Curry says. “We are quick to divide into us and them.”
Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, popularizers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios, in order to spark students' interest and get them to think about deep problems.
Surely one of the most popular thought experiments is the so-called "Trolley Problem", widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.
As it happens, when presented with these questions, most people agree it is not ethical for the surgeon to kill the patient and distribute her organs thus saving the other five, but it is indeed ethical for the driver to divert the trolley, thus killing one and saving the five. Foot was intrigued what the difference would be between both cases.
She reasoned that, in the first case, the dilemma is between killing one and letting five die, whereas in the second case, the dilemma is between killing one and killing five. Foot argued that there is a big moral difference between killing and letting die. She considered negative duties (duties not to harm others) should have precedence over positive duties (duties to help others), and that is why letting five die is better than killing one.
This was a standard argument for many years, until another philosopher, Judith Jarvis Thomson, took over the discussion and considered new variants of the trolley scenario. Thomson considered a trolley going down its path about to run over five people, and the possibility of diverting it towards another track where only one person would be run over. But, in this case, the decision to do so would not come from the driver, but rather, from a bystander who pulls a lever in order to divert the trolley.
The bystander could simply do nothing, and let the five die. But, when presented with this scenario, most people believe that the bystander has the moral obligation to pull the lever. This is strange, as now, the dilemma is not between killing one and killing five, but instead, killing one and letting five die. Why can the bystander pull the lever, but the surgeon cannot kill the healthy person?
Thomson believed that the answer was to be found in the doctrine of double effect, widely discussed by Thomas Aquinas and Catholic moral philosophers. Some actions may serve an ultimately good purpose, and yet, have harmful side effects. Those actions would be morally acceptable as long as the harmful side effects are merely foreseen, but not intended. The surgeon would save the five patients by distributing the healthy person’s organs, but in so doing, he would intend the harmful effect (the death of the donor). The bystander would also save the five persons by diverting the trolley, but killing the one person on the alternate track is not an intrinsic part of the plan, and in that sense, the bystander would merely foresee, but not intend, the death of that one person.
Thomson considered another trolley scenario that seemed to support her point. Suppose the trolley is going down its path to run over five people, and it is about to go underneath a bridge. On that bridge, there is a fat man. If thrown onto the tracks, the fat man’s weight would stop the trolley, and thus save the five people. Again, this would be killing one person in order to save five. However, the fat man’s death would not only be foreseen but also intended. According to the doctrine of double effect, this action would be immoral. And indeed, when presented with this scenario, most people disapprove of throwing down the fat man.
However, Thomson herself came up with yet another trolley scenario, in which an action is widely approved by people who consider it, yet it is at odds with the doctrine of double effect. Suppose this time that the trolley is on its path to run over five people, and there is a looping track in which the fat man is now standing. If the trolley is diverted onto that track, the fat man’s body will stop the trolley, and it will prevent the trolley from making it back to the track where the five people will be run over. Most people believe that a bystander should pull the lever to divert the trolley, and thus kill the fat man to save the five.
Yet, by doing so, the fat man’s death is not merely foreseen, but intended. If the fat man were somehow able to escape from the tracks, the five would not be saved. The fat man needs to die, and yet, most people do not seem to have a problem with that.
Thomson wondered why people would object to the fat man being thrown from the bridge, but would not object to running over the fat man in the looping track, when in fact, in both scenarios the doctrine of double effect is violated. To this day, this question remains unanswered.
Some philosophers have made the case that too much has been written about the Trolley Problem, and too little has been achieved with it. Some argue that the examples are unrealistic to the point of being comical and irrelevant. Others argue that intuitions are not reliable and that moral decisions should be based on reasoned analysis, not just on feeling “right” or “wrong” when presented with scenarios.
It is true that all these scenarios are highly unrealistic and that intuitions can be wrong. The morality of actions cannot just be decided by public votes. Yet, despite all its shortcomings, the Trolley Problem remains an exciting and useful approach. It is extremely unlikely someone will ever encounter a situation where a fat man could be thrown from a bridge in order to save five people. But the thought of that situation can elicit thinking about situations with structural similarities, such as whether or not civilians can be bombed in wars, or whether or not doctors should practice euthanasia. The Trolley Problem will not provide definite answers, but it will certainly help in thinking more clearly.
Different persons may have different preferences about the same feeling or sensation. In extreme cases, some kinds of pain might even be preferred by some kinds of persons, such as sadomasochists. Hence I concluded that there must be a deeper meaning than feeling on which we should base our morality.
Here is another reading on the trolley problem to check our ideas on universal morality. When I first encountered the trolley problem, I kept thinking about why the number 5 was chosen to trade against 1 to determine the morality of action or inaction. Then I sketched a basic version of the trolley problem where the numbers vary, as I've shown in my previous post here:
https://qz.com/1562585/the-seven-moral-rules-that-supposedly-unite-humanity/
And here is how the trolley problem has evolved over time.
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
Here is an example to emphasize that sometimes a moral decision is based on efficiency. We will use some variations of the trolley problem with the following assumptions. One of the notable conclusions I got from this analysis is emphasized in bold.
- the case is evaluated retrospectively by a perfect artificial intelligence, hence there is no room for uncertainty of cause and effect regarding the actions or inactions.
- a train is moving at high speed on the left track.
- a lever can be used to switch the train to the right track.
- if the train goes to the left track, every person on the left track will be killed. Likewise for the right track.
- all the people involved are average persons who make a positive contribution to society. There is no preference for any one person over the others.
The table below shows the possible combinations of how many persons are on the left and right tracks, ranging from 0 to 5.
The left column in the table below shows how many persons are on the left track, while the top row shows how many persons are on the right track.
\ 0 1 2 3 4 5
0 o o o o o o
1 x o o o o o
2 x ? o o o o
3 x ? ? o o o
4 x ? ? ? o o
5 x ? ? ? ? o
When there are 0 persons on the left track, moral persons must leave the lever as it is, no matter how many persons are on the right track. This is indicated by the letter o in every cell next to the number 0 in the left column.
When there are 0 persons on the right track, moral persons must switch the lever if there is at least 1 person on the left track. This is indicated by the letter x in every cell below the number 0 in the top row, except when there are 0 persons on the left track.
When there are non-zero persons on each track and more persons on the right track than on the left track, moral persons must leave the lever as it is to reduce casualties. This is indicated by the letter o in every cell on the top-right side of the diagonal cells.
When there are the same number of persons on the left and right tracks, moral persons should leave the lever to conserve resources (the energy to switch the track) and avoid being accused of playing god. This is indicated by the letter o in every diagonal cell.
When there are non-zero persons on each track and more persons on the left track, the answer might vary (based on previous studies). If you choose to do nothing in these situations, it effectively shows how much you value your action of switching the lever, measured in units of the difference in the number of persons between the left and right tracks. This is indicated by question marks in every cell on the bottom-left side of the diagonal cells.
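As a minimal sketch (my own illustration, not part of the original posts), the decision table above can be encoded as a small function of the number of persons on each track; the function name and layout are assumptions made only for this example.

# Encode the decision table: 'o' = leave the lever, 'x' = switch,
# '?' = answers vary between respondents in the studies cited above.
def lever_decision(left: int, right: int) -> str:
    if left == 0:
        return "o"      # nobody on the current track: never switch
    if right == 0:
        return "x"      # switching saves everyone on the left at no cost
    if left <= right:
        return "o"      # switching would kill at least as many people
    return "?"          # more people on the left: answers vary

# Reproduce the table for 0 to 5 persons on each track.
for left in range(6):
    print(left, " ".join(lever_decision(left, right) for right in range(6)))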
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage".[1] In medical diagnosis, pain is regarded as a symptom of an underlying condition.
https://en.wikipedia.org/wiki/Pain
https://www.prindlepost.org/2018/05/just-how-useful-is-the-trolley-problem/
In the case of the surgeon version of the trolley problem, I think many people would make the following assumptions, which make them reluctant to make the sacrifice:
In the case of the surgeon version of the trolley problem, I think many people would make the following assumptions, which make them reluctant to make the sacrifice:
Foot was correct in noticing that people don't really hold to the beliefs they claim. A hypothetical situation (the trolley) yields a different answer than a real one (such as the surgery policy described actually being implemented as policy).
- there is some non-zero chance that the surgery would fail.
- the five patients' conditions are somehow the consequence of their own fault, such as not living a healthy life, thus making them deserve their failing organs.
- on the other hand, the healthy person to be sacrificed is given credit for living a healthy life.
- many people would likely see the situation from that healthy person's perspective.
Your objections seem to just be trying to avoid the issue. Let's assume the surgery carries no risks. The one dies, the others go on to live full lives. This is like assuming no friction in simple physics questions, or assuming the trolley will not overturn when it hits the switch at speed, explode and kill 20. Adding variables like this just detracts from the question being asked.
It's the opposite. I'm trying to identify the reason why people change their minds when the situation is slightly changed, one parameter at a time.
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?
There is another solution: You have these 5 people each in need of a different organ from the one healthy person. So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him. Win win, and yet even this isn't done in practice. Why not? What is the actual moral code which typically drives practical policy?
In practice, that is a very rare circumstance.
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case, and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.
Quote from: Halc
It is considered (rightly so) unethical to harvest a healthy condemned criminal in order to save the lives of all these innocents in need. Now why is that?
I have some possible reasons to think about.
- Perhaps the crime isn't considered severe enough for the death penalty.
"Condemned criminal" means it is severe enough. The death sentence has been made.
- Fear of revenge from the victim's relatives. There's always a non-zero chance the secret will be revealed.
There's a secret involved? I was suggesting this be above board. Not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than, say, death by lethal injection (which renders the organs unusable for transplants).
- Hope that there might be better options without sacrificing anyone, such as technological advancement.
People in need of transplants often have a short life expectancy, certainly shorter than the pace of technological advancement. OK, they've made I think a few mechanical hearts, and the world is covered with mechanical kidneys (not implantable ones though). A dialysis machine does not fit in a torso. No mechanical livers. It's transplant or die. Not sure what other organs are life-saving. There are eye transplants, but that just restores sight, not life.
- The loss of those five lives is not that big a deal. Life can still go on as usual.
With that reasoning, murder shouldn't even be illegal.
Millions of people have died due to accidents, natural disasters, epidemics, famine, etc., without anyone getting their hands dirty with homicide.
Ah, there's the standard. Because putting the trolley on the track with one is an act of homicide (involves the dirtying of someone's hands), but the non-act of not saving 5 (or 4) people who could be saved is not similarly labeled a homicide. Negligent homicide is effectively death caused by failing to take action, so letting the trolley go straight is still homicide.
In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.
Quote from: Halc
There is another solution: You have these 5 people each in need of a different organ from the one healthy person. So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him. Win win, and yet even this isn't done in practice.
In practice, that is a very rare circumstance.
The cost/resource required could be high, especially if . Who will pay for the operation?
Same person who pays when there is a donor found. It costs this money in both circumstances. The high cost of the procedure actually is an incentive to do it. The hospitals make plenty of money over these sorts of things, so you'd think the solution I proposed would be found more attractive.
The uncertainty of cost and benefit would make surgeons avert risks by simply doing nothing, and no one would blame them.
Surgeons always take risks, and sometimes people blame them. They say to watch out for surgeons who have too low a failure rate for a risky procedure, because either they cook the books or they are too incompetent to take on the higher-risk patients. But people very much do blame surgeons who refuse to save lives when it is within their capability.
Again, you seem to be searching for loopholes rather than focusing on the fundamental reasons why we choose to divert the trolley on a paper philosophy test but not in practice. I think there is a reason, but the best way to see it is to consider the most favorable case, and wonder why it is still rejected. You seem to be looking for the less favorable cases, which is looking in the wrong direction.
The social experiments show that different people give different answers for different reasons. They also change their minds on different occasions, even when presented with exactly the same situation. It might even be the case that some of them just performed a coin toss to choose the answer.
"Condemned criminal" means it is severe enough. The death sentence has been made.Sorry for my limitations in English. It's not my native language. The dictionaries have several definitions for the word "condemn". Some says it can mean life imprisonment.
There's a secret involved? I was suggesting this be above board. Not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than say death by lethal injection (which renders the organs unusable for transplants).In the surgeon version of the trolley problem, the secrecy is part of the scenario. Sorry for the mixed up.
There is another solution: You have these 5 people each in need of a different organ from the one healthy person. So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him. Win win, and yet even this isn't done in practice.Sacrificing one to get parts required to save many is routinely done in industry. But it's only done with machines/equipments, not human. Though it's often called cannibalizing.
In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.
In the past, it wasn't. Ask the Aztecs, who sacrificed humans, or the Europeans colonizing the Americas and killing the natives.
A specialty doctor could just decide to stay home one day to watch TV for once, without informing his hospital employer. As a result, 3 patients die. His hands are not 'dirty with homicide', and people die every day anyway, so there's nothing wrong with his choosing to blow the day off like that.
I don't know if all hospitals apply the same rules. But their employees have rights such as annual leave. The duty to provide adequate resources for their operation includes having backup doctors. So don't put so much pressure on the doctors.
Sorry, I find this an immoral choice on the doctor's part.
Indeed, are happiness and misery mathematical entities that can be added or subtracted in the first place? Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love? (Homo Deus, Yuval Noah Harari)
In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts ‘moral’:
https://charlescrawford.biz/2018/05/17/philosophy-trolley-problem-torture/
Deontology: Acts are moral (or not) in themselves: it’s just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant.
Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it.
Then there is Particularism: the idea that there are no clear moral principles as such.
Let's go back to deliberate killing. It is apparently OK for a soldier to kill a uniformed opponent at a distance, or even hand-to-hand, but not to execute a wounded opponent.
It may depend on the wound and the circumstances. If it is so severe that there is no possibility of saving them in time (e.g. a hole through the lung), and letting them live only causes them to endure prolonged, meaningless pain, then executing them might be the best option.
But it is a moral imperative to execute a wounded animal of any other species. Or he could kill a plain-clothes spy, but arbitrarily butchering other civilians is a war crime. Except if said civilians happen to be in the vicinity of a legitimate (or reasonably suspected) bombing target...... Surely, of all the possible human interactions, acts of war should be cut and dried by now? But they aren't.
Cooperation is formed by the common interests of the parties involved. It is more reliable when they have common goals instead of spontaneous interests. It can be permanent when they share common terminal goals.
The example above was meant as a counterexample to the classical method of utilitarian morality. Another prominent criticism is the utility monster, as discussed in my previous posts.
Quote
Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
Not a universal example, by any means. There are some people who choose to eat to excess (say outside the 3σ region of the normal distribution) and end up with no friends. Some people are socially anhedonic and prefer any amount of ice cream to even a hint of love. Some people (me included) don't much like ice cream.
You can base your moral standard on an arithmetic mean, or some other statistic, but the definition of immorality requires an arbitrary limit on deviation.
Other interesting reading around AI problems.
Goodhart's Curse and meta-utility functions
An obvious next question is "Why not just define the AI such that the AI itself regards U as an estimate of V, causing the AI's U to more closely align with V as the AI gets a more accurate empirical picture of the world?"
Reply: Of course this is the obvious thing that we'd want to do. But what if we make an error in exactly how we define "treat U as an estimate of V"? Goodhart's Curse will magnify and blow up any error in this definition as well.
We must distinguish:
V, the true value function that is in our hearts.
T, the external target that we formally told the AI to align on, where we are hoping that T really means V.
U, the AI's current estimate of T or probability distribution over possible T.
U will converge toward T as the AI becomes more advanced. The AI's epistemic improvements and learned experience will tend over time to eliminate a subclass of Goodhart's Curse where the current estimate of U-value has diverged upward from T-value, cases where the uncertain U-estimate was selected to be erroneously above the correct formal value T.
However, Goodhart's Curse will still apply to any potential regions where T diverges upward from V, where the formal target diverges from the true value function that is in our hearts. We'd be placing immense pressure toward seeking out what we would retrospectively regard as human errors in defining the meta-rule for determining utilities.
Goodhart's Curse and 'moral uncertainty'
"Moral uncertainty" is sometimes offered as a solution source in AI alignment; if the AI has a probability distribution over utility functions, it can be risk-averse about things that might be bad. Would this not be safer than having the AI be very sure about what it ought to do?
Translating this idea into the V-T-U story, we want to give the AI a formal external target T to which the AI does not currently have full access and knowledge. We are then hoping that the AI's uncertainty about T, the AI's estimate of the variance between T and U, will warn the AI away from regions where from our perspective U would be a high-variance estimate of V. In other words, we're hoping that estimated U-T uncertainty correlates well with, and is a good proxy for, actual U-V divergence.
The idea would be that T is something like a supervised learning procedure from labeled examples, and the places where the current U diverges from V are things we 'forgot to tell the AI'; so the AI should notice that in these cases it has little information about T.
Goodhart's Curse would then seek out any flaws or loopholes in this hoped-for correlation between estimated U-T uncertainty and real U-V divergence. Searching a very wide space of options would be liable to select on:
Regions where the AI has made an epistemic error and poorly estimated the variance between U and T;
Regions where the formal target T is solidly estimable to the AI, but from our own perspective the divergence from T to V is high (that is, the U-T uncertainty fails to perfectly cover all T-V divergences).
The second case seems especially likely to occur in future phases where the AI is smarter and has more empirical information, and has correctly reduced its uncertainty about its formal target T. So moral uncertainty and risk aversion may not scale well to superintelligence as a means of warning the AI away from regions where we'd retrospectively judge that U/T and V had diverged.
Goodhart's Law is named after the economist Charles Goodhart. A standard formulation is "When a measure becomes a target, it ceases to be a good measure." Goodhart's original formulation is "Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes."
For example, suppose we require banks to have '3% capital reserves' as defined some particular way. 'Capital reserves' measured that particular exact way will rapidly become a much less good indicator of the stability of a bank, as accountants fiddle with balance sheets to make them legally correspond to the highest possible level of 'capital reserves'.
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
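The statistical core of Goodhart's Curse, the optimizer's curse, can be seen in a minimal sketch (my own illustration, with made-up numbers): each option has a true value V, the agent only sees a noisy proxy U = V + noise and picks the option with the highest U, and the winner's proxy score systematically overstates its true value, more so the harder the search.

# Optimizer's/winner's curse: selecting on a noisy proxy U of the true value V
# makes the selected option's U overestimate its V, and the gap grows with the
# number of options searched.
import random

def average_gap(n_options, trials=20000):
    gap = 0.0
    for _ in range(trials):
        v = [random.gauss(0, 1) for _ in range(n_options)]   # true values
        u = [vi + random.gauss(0, 1) for vi in v]             # noisy proxy estimates
        best = max(range(n_options), key=lambda i: u[i])      # optimize the proxy
        gap += u[best] - v[best]                              # proxy minus true value of the winner
    return gap / trials

for n in (1, 10, 100, 1000):
    print(f"options searched: {n:4d}   average U - V of the winner: {average_gap(n):.2f}")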
Touch underlies the functioning of almost every tissue and cell type, says Patapoutian. Organisms interpret forces to understand their world, to enjoy a caress and to avoid painful stimuli. In the body, cells sense blood flowing past, air inflating the lungs and the fullness of the stomach or bladder. Hearing is based on cells in the inner ear detecting the force of sound waves.
It shows why morality based on pain and pleasure is susceptible to problems identified as winner's, optimizer's and Goodhart's curses. https://arbital.com/p/goodharts_curse/
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
A fine example. Slightly off topic from universal morality, but I've always distinguished between production and management. Production workers should get paid per unit product since they have no other choice or control. The function of management is to optimise, so managers should be paid only from a profit share. The IBM example is interesting since a line of code is not a product but a component: if you can achieve the same result with less code, you have a more efficient product: the program or subroutine is the product.
This example emphasizes the discrepancy between long-term goals and short-term goals. Just as the name suggests, long-term goals have measurable results only after a long time has passed since the goal was set; hence, without other tools, we might not know whether or not they are going to be achieved, or even whether we are heading in the right direction. That's why we need short-term goals, to help us evaluate our actions and see if they are aligned with our long-term goals. In process control systems, we can use a Smith predictor, which is a predictive controller designed to control systems with a significant feedback time delay. We must carefully choose the design of the predictor to be as accurate as possible to minimize process fluctuation.
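As a minimal sketch (my own illustration, with assumed plant parameters, not from the original post), a discrete-time Smith predictor for a first-order plant whose output is only observed after a dead time of d samples could look like this: the controller acts on the undelayed model output, corrected by the mismatch between the measured plant output and the delayed model output.

# Smith predictor sketch for a plant y[k+1] = a*y[k] + b*u[k-d],
# assuming the dead time d is known and the internal model is perfect.
def simulate(steps=300, a=0.95, b=0.05, d=20, kp=2.0, ki=0.05, setpoint=1.0):
    y = 0.0                 # actual plant output
    ym = 0.0                # model output without dead time
    u_hist = [0.0] * d      # inputs still "in the pipeline" toward the plant
    ym_hist = [0.0] * d     # past model outputs, to reconstruct the delayed model
    integral = 0.0
    outputs = []
    for _ in range(steps):
        # Feedback = fast model prediction + (measured output - delayed model output).
        feedback = ym + (y - ym_hist[0])
        error = setpoint - feedback
        integral += error
        u = kp * error + ki * integral      # simple PI controller
        y = a * y + b * u_hist[0]           # plant sees the input d samples late
        u_hist = u_hist[1:] + [u]
        ym = a * ym + b * u                 # internal model runs without the delay
        ym_hist = ym_hist[1:] + [ym]
        outputs.append(y)
    return outputs

print(round(simulate()[-1], 3))   # settles near the setpoint despite the dead time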
Glyn Williams, Answered Aug 11, 2014
I personally define intelligence as the ability to solve problems.
And while we often attempt to solve problems using conscious methods. (Visualize a problem, visualize potential solutions etc) - it is clear from nature that problems can be solved without intent of any sort.
Evolutionary biology has solved the problem of flight at least 4 times. Without a single conscious-style thought in its non-head.
Chess playing computers can solve chess problems, by iterating though all possible moves. Again without a sense of self.
Consciousness, as it is usually defined, is a type of intelligence that is associated with the problems of agency. If you are a being and have to do stuff, then that might be called awareness or consciousness.
IQ Range ("ratio IQ") IQ Classification
175 and over Precocious
150–174 Very superior
125–149 Superior
115–124 Very bright
105–114 Bright
95–104 Average
85–94 Dull
75–84 Borderline
50–74 Morons
25–49 Imbeciles
0–24 Idiots
Moral rules are set to achieve some desired states in a reliable manner, i.e. they produce more desired results in the long run.
Here is another objection to deontological morality. There are circumstances where following one moral rule will inevitably violate other moral rules. Which rules must we keep following then, and which can be abandoned? How do we set priorities for those rules? Is the priority fixed, or might it still depend on the circumstances?
Quote
In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts ‘moral’:
https://charlescrawford.biz/2018/05/17/philosophy-trolley-problem-torture/
Deontology: Acts are moral (or not) in themselves: it’s just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant.
Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it.
Then there is Particularism: the idea that there are no clear moral principles as such.
Even someone who embraces Deontology recognizes that there are exceptions to their judgement of some actions, as seen in the use of the word "most" instead of "all" circumstances. It shows that moral value is not inherently attached to the actions themselves. It still depends on the circumstances, and the consequences are part of those.
All objections or criticisms of Consequentialism that I've seen so far make their point by emphasizing short-term consequences which are in contrast to the long-term overall consequences. If anybody knows of some counterexamples, please let me know.
prominent moral authorities such as prophets, who presumably had higher moral standards than their peers.
Illegitimate presumption! Priests, politicians, philosophers, prophets, and perverts in general, all profess to have higher moral standards than the rest of us, but so did Hitler and Trump. "By their deeds shall ye know them" (Matthew 7:16) is probably the least questionable line in the entire Bible.
presumption
/prɪˈzʌm(p)ʃ(ə)n/
noun
1. an idea that is taken to be true on the basis of probability.
"underlying presumptions about human nature"
That definition seems to be using Bayesian inference, hence there is still a chance that it turns out to be false.
"By their deeds shall ye know them" (Matthew 7:16) is probably the least questionable line in the entire Bible.
I think that we can safely presume that many of their peers have a lower moral standard.
Lower than Hitler and Trump? Really?
Priests, politicians, and other parasites assert their moral authority. "Proof by assertion" is not valid.
I was talking about moral authority instead of formal authority, which you seem to use as counterexamples.
I think even many followers of Hitler and Trump who view them as legitimate formal authorities don't view them as moral authorities. Many people are more morally bankrupt, but they don't come to prominence due to lack of power or influence.
It's also worth noting that being conscious doesn't necessarily mean having high intelligence.
At least two things are required for the consciousness or self-awareness of an agent.
Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem#Illustrations
Another key feature of the human brain is the ability to make predictions, including predictions about the results
of its own decisions and actions. Some scientists believe that prediction is the primary function of the cerebral cortex,
although the cerebellum also plays a major role in the prediction of movement.
Interestingly, we are able to predict or anticipate our own decisions. Work by physiology professor Benjamin
Libet at the University of California at Davis shows that neural activity to initiate an action actually occurs about a
third of a second before the brain has made the decision to take the action. The implication, according to Libet, is that
the decision is really an illusion, that "consciousness is out of the loop." The cognitive scientist and philosopher Daniel
Dennett describes the phenomenon as follows: "The action is originally precipitated in some part of the brain, and off
fly the signals to muscles, pausing en route to tell you, the conscious agent, what is going on (but like all good officials
letting you, the bumbling president, maintain the illusion that you started it all)." [114]
A related experiment was conducted recently in which neurophysiologists electronically stimulated points in the
brain to induce particular emotional feelings. The subjects immediately came up with a rationale for experiencing
those emotions. It has been known for many years that in patients whose left and right brains are no longer connected,
one side of the brain (usually the more verbal left side) will create elaborate explanations ("confabulations") for
actions initiated by the other side, as if the left side were the public-relations agent for the right side.
The most complex capability of the human brain—what I would regard as its cutting edge—is our emotional
intelligence. Sitting uneasily at the top of our brain's complex and interconnected hierarchy is our ability to perceive
and respond appropriately to emotion, to interact in social situations, to have a moral sense, to get the joke, and to
respond emotionally to art and music, among other high-level functions. Obviously, lower-level functions of
perception and analysis feed into our brain's emotional processing, but we are beginning to understand the regions of
the brain and even to model the specific types of neurons that handle such issues.
These recent insights have been the result of our attempts to understand how human brains differ from those of
other mammals. The answer is that the differences are slight but critical, and they help us discern how the brain
processes emotion and related feelings. One difference is that humans have a larger cortex, reflecting our stronger
capability for planning, decision making, and other forms of analytic thinking. Another key distinguishing feature is
that emotionally charged situations appear to be handled by special cells called spindle cells, which are found only in
humans and some great apes. These neural cells are large, with long neural filaments called apical dendrites that
connect extensive signals from many other brain regions. This type of "deep" interconnectedness, in which certain
neurons provide connections across numerous regions, is a feature that occurs increasingly as we go up the
evolutionary ladder. It is not surprising that the spindle cells, involved as they are in handling emotion and moral
judgment, would have this form of deep interconnectedness, given the complexity of our emotional reactions.
What is startling, however, is how few spindle cells there are in this tiny region: only about 80,000 in the human
brain (about 45,000 in the right hemisphere and 35,000 in the left hemisphere). This disparity appears to account for
the perception that emotional intelligence is the province of the right brain, although the disproportion is modest.
Gorillas have about 16,000 of these cells, bonobos about 2,100, and chimpanzees about 1,800. Other mammals lack
them completely.
Dr. Arthur Craig of the Barrow Neurological Institute in Phoenix has recently provided a description of the
architecture of the spindle cells. [115] Inputs from the body (estimated at hundreds of megabits per second), including
nerves from the skin, muscles, organs, and other areas, stream into the upper spinal cord. These carry messages about
touch, temperature, acid levels (for example, lactic acid in muscles), the movement of food through the gastrointestinal
tract, and many other types of information. This data is processed through the brain stem and midbrain. Key cells
called Lamina 1 neurons create a map of the body representing its current state, not unlike the displays used by flight
controllers to track airplanes.
The information then flows through a nut-size region called the posterior ventromedial nucleus (VMpo), which
apparently computes complex reactions to bodily states such as "this tastes terrible," "what a stench," or "that light
touch is stimulating." The increasingly sophisticated information ends up at two regions of the cortex called the insula.
These structures, the size of small fingers, are located on the left and right sides of the cortex. Craig describes the
VMpo and the two insula regions as "a system that represents the material me."
Although the mechanisms are not yet understood, these regions are critical to self-awareness and complicated
emotions. They are also much smaller in other animals. For example, the VMpo is about the size of a grain of sand in
macaque monkeys and even smaller in lower-level animals. These findings are consistent with a growing consensus
that our emotions are closely linked to areas of the brain that contain maps of the body, a view promoted by Dr.
Antonio Damasio at the University of Iowa. [116] They are also consistent with the view that a great deal of our thinking
is directed toward our bodies: protecting and enhancing them, as well as attending to their myriad needs and desires.
Very recently yet another level of processing of what started out as sensory information from the body has been
discovered. Data from the two insula regions goes on to a tiny area at the front of the right insula called the
frontoinsular cortex. This is the region containing the spindle cells, and fMRI scans have revealed that it is particularly
active when a person is dealing with high-level emotions such as love, anger, sadness, and sexual desire. Situations
that strongly activate the spindle cells include when a subject looks at her romantic partner or hears her child crying.
Anthropologists believe that spindle cells made their first appearance ten to fifteen million years ago in the as-yet
undiscovered common ancestor to apes and early hominids (the family of humans) and rapidly increased in numbers
around one hundred thousand years ago. Interestingly, spindle cells do not exist in newborn humans but begin to
appear only at around the age of four months and increase significantly from ages one to three. Children's ability to
deal with moral issues and perceive such higher-level emotions as love develop during this same time period.
The spindle cells gain their power from the deep interconnectedness of their long apical dendrites with many other
brain regions. The high-level emotions that the spindle cells process are affected, thereby, by all of our perceptual and
cognitive regions. It will be difficult, therefore, to reverse engineer the exact methods of the spindle cells until we have
better models of the many other regions to which they connect. However, it is remarkable how few neurons appear to
be exclusively involved with these emotions. We have fifty billion neurons in the cerebellum that deal with skill
formation, billions in the cortex that perform the transformations for perception and rational planning, but only about
eighty thousand spindle cells dealing with high-level emotions. It is important to point out that the spindle cells are not
doing rational problem solving, which is why we don't have rational control over our responses to music or over
falling in love. The rest of the brain is heavily engaged, however, in trying to make sense of our mysterious high-level
emotions.
In the Wikipedia article, this cell type is also found in cetaceans and elephants.
Spindle neurons, also called von Economo neurons (VENs), are a specific class of mammalian cortical neurons characterized by a large spindle-shaped soma (or body) gradually tapering into a single apical axon (the ramification that transmits signals) in one direction, with only a single dendrite (the ramification that receives signals) facing opposite. Other cortical neurons tend to have many dendrites, and the bipolar-shaped morphology of spindle neurons is unique here.
https://en.wikipedia.org/wiki/Spindle_neuron
Spindle neurons are found in two very restricted regions in the brains of hominids (humans and other great apes): the anterior cingulate cortex (ACC) and the fronto-insular cortex (FI), but recently they have been discovered in the dorsolateral prefrontal cortex of humans.[1] Spindle cells are also found in the brains of a number of cetaceans,[2][3][4] African and Asian elephants,[5] and to a lesser extent in macaque monkeys[6] and raccoons.[7] The appearance of spindle neurons in distantly related clades suggests that they represent convergent evolution—specifically, as an adaptation to accommodate the increasing size of these distantly-related animals' brains.
Spindle neuron concentrations
ACC
The largest number of ACC spindle neurons are found in humans, fewer in the gracile great apes, and fewest in the robust great apes. In both humans and bonobos they are often found in clusters of 3 to 6 neurons. They are found in humans, bonobos, chimpanzees, gorillas, orangutans, some cetaceans, and elephants.[16]:245 While total quantities of ACC spindle neurons were not reported by Allman in his seminal research report (as they were in a later report describing their presence in the frontoinsular cortex, below), his team's initial analysis of the ACC layer V in hominids revealed an average of ~9 spindle neurons per section for orangutans (rare, 0.6% of section cells), ~22 for gorillas (frequent, 2.3%), ~37 for chimpanzees (abundant, 3.8%), ~68 for bonobos (abundant/clusters, 4.8%), ~89 for humans (abundant/clusters, 5.6%).[17]
Fronto-insula
All of the primates examined had more spindle cells in the fronto-insula of the right hemisphere than in the left. In contrast to the higher number of spindle cells found in the ACC of the gracile bonobos and chimpanzees, the number of fronto-insular spindle cells was far higher in the cortex of robust gorillas (no data for Orangutans was given). An adult human had 82,855 such cells, a gorilla had 16,710, a bonobo had 2,159, and a chimpanzee had a mere 1,808 – despite the fact that chimpanzees and bonobos are great apes most closely related to humans.
Dorsolateral PFC
Von Economo neurons have been located in the Dorsolateral prefrontal cortex of humans[1] and elephants.[5] In humans they have been observed in higher concentration in Brodmann area 9 (BA9) – mostly isolated or in clusters of 2, while in Brodmann area 24 (BA24) they have been found mostly in clusters of 2-4.[1]
Clinical significance
Abnormal spindle neuron development may be linked to several psychotic disorders, typically those characterized by distortions of reality, disturbances of thought, disturbances of language, and withdrawal from social contact[citation needed]. Altered spindle neuron states have been implicated in both schizophrenia and autism, but research into these correlations remains at a very early stage. Frontotemporal dementia involves loss of mostly spindle neurons.[18] An initial study suggested that Alzheimer's disease specifically targeted von Economo neurons; this study was performed with end-stage Alzheimer brains in which cell destruction was widespread, but later it was found that Alzheimer's disease doesn't affect the spindle neurons.
The research results mentioned above support the assertion that humans have a higher consciousness level than other animals. They also provide some ways to rank other animals based on their capacity to experience emotions.
In the context of morality, I've tried to give the proper description here.
con·scious·ness
/ˈkän(t)SHəsnəs/
noun
the state of being awake and aware of one's surroundings.
"she failed to regain consciousness and died two days later"
Similar:
awareness
wakefulness
alertness
responsiveness
sentience
Opposite:
unconsciousness
the awareness or perception of something by a person.
plural noun: consciousnesses
"her acute consciousness of Mike's presence"
Similar:
awareness of
knowledge of the existence of
alertness to
sensitivity to
realization of
cognizance of
mindfulness of
perception of
apprehension of
recognition of
the fact of awareness by the mind of itself and the world.
"consciousness emerges from the operations of the brain"
It's also worth noting that being conscious doesn't necessarily mean having high intelligence.
At least two things are required for the consciousness or self-awareness of an agent.
First is the ability to represent itself in its internal model of its environment. As an illustration, if you put a map of your country on the floor, there will be a point on the map that touches the actual point it refers to.
Quote
Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem#Illustrations
In a real conscious agent, some part of the agent's data storage must represent some property of the agent itself.
The next is the existence of a preference for one state over others. One of the most common examples in the animal world is pleasure over pain. Thus a map, even a dynamic one, is not conscious due to its lack of preference.
2. Please show how you measured it in at least three species (a mammal, an insect, a fish)
Mammals, insects, and fishes are large groups with large in-group variance. But I think we can still use the method I described above to measure their individual level of consciousness. We must also be aware of the distinction between the effective and the potential level of consciousness. A hunting shark is effectively more conscious than a human under general anesthetic. On the other hand, a human baby has a higher level of potential consciousness than a smart dog.
3. Is religious intolerance indicative of rank in the same sense as altruism? Please list some non-human species that exhibit religious intolerance.
Interestingly, none of the dictionary definitions has anything to do with intelligence or self-awareness. It's all about responding to, or being capable of responding to, a stimulus. Which is the characteristic of all living things.
As I said, consciousness is a multidimensional parameter. Perhaps some species of sharks have higher sensitivity to certain chemicals in water compared to humans. But they are not conscious of what happens on land, nor are they aware of killer asteroids coming toward the earth.
A shark can respond to a drop of blood in a swimming pool, which makes it billions of times more conscious than you or me.
I detect a fellow skeptic in the area of rights and privileges!
If that's the case, I think you'll enjoy this performance of George Carlin describing rights and privileges.
Even in common usage, consciousness has some levels.
All of which are easily observed in all animals and even have analogs in the plant world. They do not distinguish between species.
Very much so. You don't have to be literate to be a narcissist (Donald Trump struggles with words in lower case and has the style and vocabulary of a 6-year-old) or anorexic - two extremes of self-awareness. After 70 hours without sleep, few junior doctors are aware of anything, never mind themselves.
As for consciousness, you seem now to be defining it as "something bigger than its definition". An amusing take on Russell's Paradox but not very helpful.
Consciousness is the state or quality of awareness.
https://en.wikipedia.org/wiki/Consciousness_(disambiguation)
Having an inaccurate model of reality reduces the agent's consciousness level, since it would render their plan's execution less effective.
Though barely literate and with no concept of reality, Trump is extremely effective in executing his plan to build a big wall and get re-elected. Who cares about reality when you can shout into a microphone?
It only happens with the help of Trump enablers who seek personal gain. But it won't last long if things continue that way. Objective reality has limited tolerance. When the long-term damage becomes more apparent, more people will start to realize it and try to make a change.
Scientists have continuously improved their understanding of consciousness. Here is one of the newest results.
Most people agree that consciousness plays a central role in morality. Hence understanding consciousness is necessary to discuss morality productively. IMO anyone who claims that consciousness cannot be understood scientifically has committed some kind of arrogance, namely "if I can't understand something, no one else can."
Quote
In a wild new experiment conducted on monkeys, scientists discovered that a tiny, but powerful area of the brain may enable consciousness: the central lateral thalamus. Activation of the central lateral thalamus and deep layers of the cerebral cortex drives pathways in the brain that carry information between the parietal and frontal lobe in the brain, the study suggests.
https://www.inverse.com/mind-body/3d-brain-models-crucial-stage-of-human-development
This brain circuit works as a sort-of “engine for consciousness,” the researchers say, enabling conscious thought and feeling in primates.
To zero in on this brain circuit, a scientific team put macaque monkeys under anesthesia, then stimulated different parts of their brain with electrodes at a frequency of 50 Hertz. Essentially, they zapped different areas of the brain and observed how the monkeys responded. When the central lateral thalamus was stimulated, the monkeys woke up and their brain function resumed — even though they were STILL UNDER ANESTHESIA. Seconds after the scientists switched off the stimulation, the monkeys went right back to sleep.
This research was published Wednesday in the journal Neuron.
“Science doesn’t often leave opportunity for exhilaration, but that’s what that moment was like for those of us who were in the room,” co-author Michelle Redinbaugh, a researcher at the University of Wisconsin, Madison, tells Inverse.
https://www.cell.com/neuron/fulltext/S0896-6273(20)30005-2
But what we have here is an evil man pretending to be naïve. The damage done by his heroes lasted for decades.
It takes a closer look to determine if he is indeed an inherently evil man. It's possible that he suffers from some mental illness which makes him believe his own lies.
It would be a lot easier to understand something if you could define it. So far you have rejected the clinical, dictionary definition and asserted that the word means some abstract characteristic of living things that cannot be defined or measured, but can be used to rank the things that possess it. Not a fruitful starting point.
If you carefully read my posts in this thread, I have tried to provide a useful definition of consciousness for discussing morality several times already. I also showed that it is an extended version of the clinical definition manifested in the Glasgow scale. If the levels in that scale are likened to the handful of colors of the rainbow, then the concept of consciousness required to be useful in building moral rules is like the whole spectrum of electromagnetic waves.
It isn't defined as multidimensional but anydimensional. A thing is either conscious or not.
Your definition above makes consciousness less relevant to building moral rules.
It is also worth remembering that we do not live in a static, perfect world. There will always be hard cases and exceptions, which need to be dealt with as such and not necessarily to impact the general framework. Simple case: you should pay your taxes. But if your house has just burnt down, your overriding imperative is to shelter your family, not to give the government money to squander on railway consultants. Simpler still: you shouldn't kill civilians, but there's no point in coming second in a fight.
A legitimate exception means that we acknowledge a higher-priority moral rule than the one we are going to break. A mature society should provide a list of its highest-priority moral rules in a hierarchical structure to help its members make quick decisions when facing hard cases. Autonomous vehicles and other AI with significant impact on society must also have that hierarchy incorporated into their algorithms.
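As a minimal sketch (my own illustration, with made-up rules and priorities, not a proposal from the original post), such a hierarchy could be encoded so that an agent breaks a lower-priority rule only when a higher-priority rule demands it, as in the tax-versus-shelter example above.

# Hypothetical rule hierarchy: lower number = higher priority.
RULES = {
    "do not harm people": 1,
    "protect your family": 2,
    "obey the law": 3,
    "pay your taxes on time": 4,
}

def choose(violated_if_acting, violated_if_not_acting):
    """Pick the option whose worst violated rule is the least important one."""
    worst_act = min((RULES[r] for r in violated_if_acting), default=float("inf"))
    worst_wait = min((RULES[r] for r in violated_if_not_acting), default=float("inf"))
    return "act" if worst_act > worst_wait else "do nothing"

# House burnt down: spending on shelter breaks only the lowest-priority rule,
# while doing nothing breaks the higher-priority rule about protecting family.
print(choose({"pay your taxes on time"}, {"protect your family"}))   # -> act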
The final assessment thus depends on the formula or algorithm used to combine those parameters into a single value useful to compare intelligence, at least in relative scale.
In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
A legitimate exception means that we acknowledge a higher priority moral rule than the one we are going to break
I think we can all agree that a good moral rule is a useful one. But a follow-up question naturally comes up: useful according to whom?
The concept of IQ has been around for more than a century without my involvement.
The final assessment thus depends on the formula or algorithm used to combine those parameters into a single value useful to compare intelligence, at least in relative scale.
In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
An intelligence quotient (IQ) is a total score derived from a set of standardized tests designed to assess human intelligence.[1] The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests at University of Breslau he advocated in a 1912 book.[2]
https://en.m.wikipedia.org/wiki/Intelligence_quotient
The concept of IQ has been around for more than a century without my involvement.The concept has, but its only definition is "something to do with quizzes, with a normal distribution and a mean score of 100". The results you get for any particular test vary according to the language and culture within which you apply it!
The concept has, but its only definition is "something to do with quizzes, with a normal distribution and a mean score of 100". The results you get for any particular test vary according to the language and culture within which you apply it!
An IQ test is specifically designed to measure human intelligence. An average human can be modeled as hardware plus software that take inputs, process the data, and generate output. People are assumed to already have some commonly used software for data processing, such as the concepts of numbers, letters, grammar, and basic geometry. Without the proper software, even the best computer hardware can't solve many problems; a toy sketch of this analogy follows below.
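Here is a toy sketch of that hardware/software analogy (the skill names, speeds, and work units are invented for illustration and carry no psychometric meaning):

Code (Python):

# Toy sketch of the "hardware + software" view of problem solving.
# An agent can only solve problems whose prerequisite "software" (skills) it has,
# no matter how fast its "hardware" is. All names and numbers are illustrative.
from typing import Optional, Set

class Agent:
    def __init__(self, hardware_speed: float, skills: Set[str]):
        self.hardware_speed = hardware_speed   # abstract processing speed
        self.skills = skills                   # installed "software"

    def time_to_solve(self, requirements: Set[str], work_units: float) -> Optional[float]:
        if not requirements <= self.skills:
            return None                        # missing software: fast hardware doesn't help
        return work_units / self.hardware_speed

fast_but_untrained = Agent(hardware_speed=10.0, skills={"arithmetic"})
slow_but_trained = Agent(hardware_speed=1.0, skills={"arithmetic", "geometry"})
geometry_puzzle = {"arithmetic", "geometry"}
print(fast_but_untrained.time_to_solve(geometry_puzzle, work_units=5.0))  # None
print(slow_but_trained.time_to_solve(geometry_puzzle, work_units=5.0))    # 5.0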
And anyway, we aren't talking about intelligence, but asking for your definition of consciousness. A decent computer can probably score 200+ on the best IQ tests. Would that signify consciousness, or even intelligence?
Historically, IQ was a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in terms of years and months. The resulting fraction (quotient) is multiplied by 100 to obtain the IQ score.[3] For modern IQ tests, the median raw score of the norming sample is defined as IQ 100 and scores each standard deviation (SD) up or down are defined as 15 IQ points greater or less.[4] By this definition, approximately two-thirds of the population scores are between IQ 85 and IQ 115. About 2.5 percent of the population scores above 130, and 2.5 percent below 70.[5][6]
https://en.wikipedia.org/wiki/Intelligence_quotient
Scores from intelligence tests are estimates of intelligence. Unlike, for example, distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence".[7] IQ scores have been shown to be associated with such factors as morbidity and mortality,[8][9] parental social status,[10] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[11][12] and the mechanisms of inheritance.[13]
IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention and speed.[14] In research contexts, they have been studied as predictors of job performance[15] and income.[16] They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate that scales to three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.
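As a quick worked illustration of the two definitions quoted above, the historical ratio score and the modern deviation score can each be written as a one-line formula; the raw-score numbers below are made up and do not come from any real test norm.

Code (Python):

# Two ways of expressing an IQ score, following the definitions quoted above.
# The example inputs are invented numbers for illustration only.

def ratio_iq(mental_age_months: float, chronological_age_months: float) -> float:
    """Historical 'ratio IQ': mental age divided by chronological age, times 100."""
    return 100.0 * mental_age_months / chronological_age_months

def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Modern 'deviation IQ': 100 plus 15 points per standard deviation above the norming mean."""
    return 100.0 + 15.0 * (raw_score - norm_mean) / norm_sd

print(ratio_iq(mental_age_months=120, chronological_age_months=100))  # 120.0
print(deviation_iq(raw_score=65, norm_mean=50, norm_sd=10))           # 122.5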
In other words, the measure of consciousness is whatever Hamdani Yusuf says it is, unless it's measured by someone else, since there is no universal arbiter of the formula. Not sure how that advances our discussion.
The formula of the test can be fine-tuned to approach the desired result. The arbiter for the IQ test is job performance, which is useful for hiring managers.
Job performance
According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[15] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[122] The correlations were higher when the unreliability of measurement methods was controlled for.[10] While IQ is more strongly correlated with reasoning and less so with motor function,[123] IQ-test scores predict performance ratings in all occupations.[15] That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally-skilled activities, athletic strength (manual strength, speed, stamina, and coordination) are more likely to influence performance.[15] The prevailing view among academics is that it is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance. This view has been challenged by Byington & Felps (2010), who argued that "the current applications of IQ-reflective tests allow individuals with high IQ scores to receive greater access to developmental resources, enabling them to acquire additional capabilities over time, and ultimately perform their jobs better."[124]
In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[125] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[126]
The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs.
All that the quotes seem to suggest is that if you select people with a relevant test, they will perform better than average, or better than those who fail the test. But the key is relevance. A blind man with an IQ of 130 probably won't make a good pilot. Bench pressing 100 kilos is quite a feat, but a footballer needs quite different feet.
An unaided blind man has reduced awareness compared to otherwise normal men. Advanced technology can provide ways to compensate for the handicap, or even give an advantage, such as infrared, ultraviolet, and radar vision unavailable to the unaided normal human. An average man aided by a powerful AI directly connected to his brain may easily beat the smartest people at many tasks requiring high intelligence.
So you assert that we should select lawmakers on the grounds of consciousness, but the only definition you have given seems to be "IQ plus self-awareness". Every animal I have encountered is self-aware. The extreme seems to be narcissism, which is obviously undesirable.
Empirical studies
https://en.wikipedia.org/wiki/Narcissism#Empirical_studies
Within the field of psychology, there are two main branches of research into narcissism: (1) clinical and (2) social psychology.
These two approaches differ in their view of narcissism, with the former treating it as a disorder, thus as discrete, and the latter treating it as a personality trait, thus as a continuum. These two strands of research tend loosely to stand in a divergent relation to one another, although they converge in places.
Campbell and Foster (2007)[23] review the literature on narcissism. They argue that narcissists possess the following "basic ingredients":
Positive: Narcissists think they are better than others.[26]
Inflated: Narcissists' views tend to be contrary to reality. In measures that compare self-report to objective measures, narcissists' self-views tend to be greatly exaggerated.[27]
Agentic: Narcissists' views tend to be most exaggerated in the agentic domain, relative to the communion domain.[clarification needed][26][27]
Special: Narcissists perceive themselves to be unique and special people.[28]
Selfish: Research upon narcissists' behaviour in resource dilemmas supports the case for narcissists as being selfish.[29]
Oriented toward success: Narcissists are oriented towards success by being, for example, approach oriented.[clarification needed][30]
In the case of narcissism, the agent has an inaccurate model of reality, which significantly reduces its measure of general consciousness.
Sadly, Donald Trump has a more accurate model of reality, and grasp of the controls, than his morally superior opponents. It's much easier to manipulate the machinery of politics and the gullibility of the electorate if you really understand what you are doing, in the current context. He's not the first or the last self-centered demagogue to succeed in politics, even if he loses money in business.
Positive: Narcissists think they are better than others.
Speculation. What we know is that they act as though they are better than others.
When you wrote 'the planet', can I assume that you meant the collective conscious agents living on it? As far as I know, planets are not conscious agents. They don't have an internal model of objective reality representing themselves in their environments. They don't have preferences either. We can't say whether the earth prefers its current condition over the Hadean period. Jupiter didn't seem to mind being hit by the Shoemaker-Levy 9 comet.
I think we can all agree that a good moral rule is a useful one. But a follow-up question naturally comes up: useful according to whom?
Depends on context. The planet, Society, British society, Yorkshiremen, family and friends, family only, or oneself? Or how about some Good Samaritan altruism? As long as you don't invoke any deities, the answer is usually fairly straightforward since the consequences of any action tend to diminish with distance from the source.
Radical skepticism or radical scepticism is the philosophical position that knowledge is most likely impossible.[1] Radical skeptics hold that doubt exists as to the veracity of every belief and that certainty is therefore never justified. To determine the extent to which it is possible to respond to radical skeptical challenges is the task of epistemology or "the theory of knowledge".[2]
Several Ancient Greek philosophers, including Plato, Cratylus, Carneades, Arcesilaus, Aenesidemus, Pyrrho, and Sextus Empiricus have been viewed as having expounded theories of radical skepticism.
In modern philosophy, two representatives of radical skepticism are Michel de Montaigne (most famously known for his skeptical remark, Que sçay-je ?, 'What do I know?' in Middle French; modern French Que sais-je ?) and David Hume (particularly as set out in A Treatise of Human Nature, Book 1: "Of the Understanding").
As radical skepticism can be used as an objection for most or all beliefs, many philosophers have attempted to refute it. For example, Bertrand Russell wrote “Skepticism, while logically impeccable, is psychologically impossible, and there is an element of frivolous insincerity in any philosophy which pretends to accept it.”
When you wrote 'the planet', can I assume that you meant the collective conscious agents living on it? As far as I know, planets are not conscious agents. They don't have an internal model of objective reality representing themselves in their environments. They don't have preferences either. We can't say whether the earth prefers its current condition over the Hadean period. Jupiter didn't seem to mind being hit by the Shoemaker-Levy 9 comet.
Reward and punishment as tools to enforce moral rules can only be applied to conscious agents, especially those with clear preferences. Otherwise, we need other ways to make an agent behave in good manners.
When you wrote 'the planet', can I assume that you meant the collective conscious agents living on it?
Of course not! Since we haven't come up with a useful definition of consciousness, I couldn't possibly mean that! The planet is the physical context in which we act.
Otherwise, we need other ways to make an agent behave in good manners.
The characteristic of many animals, especially humans, is their realisation that you can usually achieve more by collaboration than by competition. Thus we appreciate a sort of long-term integrated reward, and most of us value that above immediate self-gratification. We use punishment and reward to bring into line those who don't.
If you don't want to call the extended consciousness I described previously 'consciousness', that's fine. You can call it extended consciousness instead. I've explained why consciousness can be useful in setting moral rules only if it is extended beyond the clinical sense. A baby can be fully conscious clinically, but we can't expect it to follow moral rules intended for adults.
When you wrote 'the planet', can I assume that you meant the collective conscious agents living on it?
Of course not! Since we haven't come up with a useful definition of consciousness, I couldn't possibly mean that! The planet is the physical context in which we act.
The "two common dangers" are actually one - philosophy. Like alcohol, it can be amusing in small doses but utterly destructive if you let it rule your life. Religion/relativism, whisky/beer, just different flavors, same poison.
How can we punish the earth for creating earthquakes that kill millions directly and indirectly? Or asteroids for hitting the earth?
Otherwise, we need other ways to make an agent behave in good manners.
The characteristic of many animals, especially humans, is their realisation that you can usually achieve more by collaboration than by competition. Thus we appreciate a sort of long-term integrated reward, and most of us value that above immediate self-gratification. We use punishment and reward to bring into line those who don't.
Religious fundamentalism commits a false positive error: it accepts a hypothesis that turns out to be false. Moral relativism commits a false negative error: it rejects all hypotheses, including the correct one.
The follow-up question would be: is it possible to determine whether something is true or false? How?
(English:) Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; And because some men err in reasoning, and fall into Paralogisms, even on the simplest matters of Geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for Demonstrations; And finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams. But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be something; And as I observed that this truth, I think, therefore I am,[e] was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the Sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search.[h]https://en.wikipedia.org/wiki/Cogito,_ergo_sum
This proposition became a fundamental element of Western philosophy, as it purported to form a secure foundation for knowledge in the face of radical doubt. While other knowledge could be a figment of imagination, deception, or mistake, Descartes asserted that the very act of doubting one's own existence served—at minimum—as proof of the reality of one's own mind; there must be a thinking entity—in this case the self—for there to be a thought.
While we thus reject all of which we can entertain the smallest doubt, and even imagine that it is false, we easily indeed suppose that there is neither God, nor sky, nor bodies, and that we ourselves even have neither hands nor feet, nor, finally, a body; but we cannot in the same way suppose that we are not while we doubt of the truth of these things; for there is a repugnance in conceiving that what thinks does not exist at the very time when it thinks. Accordingly, the knowledge,[m] I think, therefore I am,[e] is the first and most certain that occurs to one who philosophizes orderly.
That we cannot doubt of our existence while we doubt, and that this is the first knowledge we acquire when we philosophize in order.
The Search for Truth
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#The_Search_for_Truth
Descartes, in a lesser-known posthumously published work dated as written ca. 1647[13] and titled La Recherche de la Vérité par La Lumiere Naturale (The Search for Truth by Natural Light),[14] wrote:
(Latin:) … Sentio, oportere, ut quid dubitatio, quid cogitatio, quid exsistentia sit antè sciamus, quàm de veritate hujus ratiocinii : dubito, ergo sum, vel, quod idem est, cogito, ergo sum[e] : plane simus persuasi.
(English:) … [I feel that] it is necessary to know what doubt is, and what thought is, [what existence is], before we can be fully persuaded of this reasoning — I doubt, therefore I am — or what is the same — I think, therefore I am.[p]
existence /ɪɡˈzɪst(ə)ns/ (noun): the fact or state of living or having objective reality.
think /θɪŋk/ (verb): 1. have a particular belief or idea.
Here is my summary of Descartes' idea. To search for the truth, we need the ability to doubt. To doubt something, we must have an internal model meant to represent objective reality, and we must realize that the two do not always agree. To think about objective reality, the thinker must have an internal model meant to represent it. And to possess an internal model representing objective reality, the thinker must exist in objective reality.
doubt (verb; 3rd person present: doubts; past tense: doubted; past participle: doubted; gerund or present participle: doubting)
1. feel uncertain about. "I doubt my ability to do the job"
- question the truth or fact of (something). "who can doubt the value and necessity of these services?" Synonyms: think something unlikely, have (one's) doubts about, question, query, be dubious, lack conviction, have reservations about
- disbelieve or lack faith in (someone). "I have no reason to doubt him" Synonyms: disbelieve, distrust, mistrust, suspect, lack confidence in, have doubts about, be suspicious of, have suspicions about, have misgivings about, feel uneasy about, feel apprehensive about, call into question, query, question, challenge, dispute, have reservations about
- feel uncertain, especially about one's religious beliefs. Synonyms: be undecided, have doubts, be irresolute, be hesitant
The existence of a thinker is subject to natural selection.
Finally we get to the last question: how. There are some basic strategies to preserve information, which I borrow from the IT business (a minimal sketch follows the list below):
Choosing robust media.
Creating multilayer protection.
Creating backups.
Create diversity to avoid common mode failures.
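Here is the minimal sketch referred to above. The file paths and the choice of SHA-256 checksums are illustrative assumptions, not a recommendation of any particular backup tool; it simply maps each strategy in the list to one line of behaviour.

Code (Python):

# Minimal sketch of the preservation strategies listed above:
# diverse, robust media are modelled as multiple independent destination directories,
# backups as the copies themselves, and multilayer protection as a checksum per copy.
# Paths and the use of SHA-256 are illustrative assumptions.
import hashlib
import shutil
from pathlib import Path

def preserve(source: Path, destinations: list) -> dict:
    """Copy `source` to several destinations and verify each copy against a checksum."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    report = {}
    for dest_dir in destinations:
        dest_dir = Path(dest_dir)
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy_path = dest_dir / source.name
        shutil.copy2(source, copy_path)                                   # backup copy
        copy_ok = hashlib.sha256(copy_path.read_bytes()).hexdigest() == digest
        report[str(copy_path)] = copy_ok                                  # integrity layer
    return report

# Usage sketch: destinations on physically different media (separate disk, NAS, cloud mount)
# reduce common-mode failures.
# report = preserve(Path("important.db"), ["/mnt/disk_a/backup", "/mnt/nas/backup"])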
Since the existence of the thinker is the only thing that can't be doubted, it must be defended at all costs.
Cogito ergo sum is just one of an infinite number of possible axioms. It's not a strong foundation.
Cogito ergo sum is just one of an infinite number of possible axioms. It's not a strong foundation.
Descartes demonstrated, by reductio ad absurdum, that if a thinker rejects its own existence, it leads to a contradiction.
At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one's belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived.
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#Interpretation
But I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I, too, do not exist? No. If I convinced myself of something [or thought anything at all], then I certainly existed. But there is a deceiver of supreme power and cunning who deliberately and constantly deceives me. In that case, I, too, undoubtedly exist, if he deceives me; and let him deceive me as much as he can, he will never bring it about that I am nothing, so long as I think that I am something. So, after considering everything very thoroughly, I must finally conclude that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT VII 25; CSM II 16–17[v])
There are three important notes to keep in mind here. First, he claims only the certainty of his own existence from the first-person point of view — he has not proved the existence of other minds at this point. This is something that has to be thought through by each of us for ourselves, as we follow the course of the meditations. Second, he does not say that his existence is necessary; he says that if he thinks, then necessarily he exists (see the instantiation principle). Third, this proposition "I am, I exist" is held true not based on a deduction (as mentioned above) or on empirical induction but on the clarity and self-evidence of the proposition. Descartes does not use this first certainty, the cogito, as a foundation upon which to build further knowledge; rather, it is the firm ground upon which he can stand as he works to discover further truths.[35] As he puts it:
Archimedes used to demand just one firm and immovable point in order to shift the entire earth; so I too can hope for great things if I manage to find just one thing, however slight, that is certain and unshakable. (AT VII 24; CSM II 16)
Best to avoid philosophy and stick to science. Scientific knowledge is the residue of disprovable hypotheses that have not been disproved. That's all there is. "Common" knowledge is the bunch of hypotheses, rules of thumb and tabulated data that we have found adequate for everyday use.
Why so? Scientific experiments can be costly, while available resources are finite. We must prioritize which ones to do first. That's where philosophy comes into play.
None of which has anything to do with morality. We obviously can't act in contradiction to the laws of physics, but morality is about how we should act within those constraints.
Do unto others as you would have them do unto you. Simples!Does this rule applicable universally, regardless of personality, gender, race, ideology, nationality, species?
If another does unto me as I would not like, an eye for an eye is just retribution.
The moral imperative is universal as long as you accept the "eye for an eye" part. Ideology is philosophy and therefore is at best irrelevant and at worst poisonous. Species has some limitation as all animals have to eat things that were formerly alive, but AFAIK all "normal" humans prefer a clean kill, except for oysters.I can see that you use a very narrow definition of morality, thus many problems most people regard as moral issues are not covered.
The trolley problem isn't a moral issue. It's one of statistics.
Why so? Scientific experiments can be costly, while available resources are finite. We must prioritize which ones to be done first. That's where philosophy comes into play.
As for the cost of scientific experiments, I think it was Harold Wilson who said "if you think education is expensive, try ignorance". Most scientific investigation derives from product failure, so the budget is set according to how many lives it might save to know what went wrong.
What portion of the US annual budget is dedicated to scientific experiments? Why can't it be 100%?
"Blue sky" research has its own justification. Ronald Reagan asked, at the Lawrence Livermore laboratory, how their work contributed to the defence of the nation. The response was "It is what makes the nation worth defending." Some curiosity-driven medical research is justified on a risk/benefit ratio: if it does little harm but might lead to a big reward in areas we haven't considered, let's investigate. Other non-failure research falls into the category of public art: we fly to the moon or launch orbital telescopes principally out of public interest.
I can see that you use a very narrow definition of morality, thus many problems most people regard as moral issues are not covered.
The trolley problem.I can see that you use a very narrow definition of morality, thus many problems most people regard as moral issues are not covered.
Can you provide an example?
Because, as Lincoln pointed out, a country consists of a defensible border, and the irreducible function of government is to raise enough taxes to pay the army that defends it. The secondary functions like enforcing rights and prosecuting wrongs take up a fair bit of the budget, and it is generally preferable to hand out welfare payments rather than have the unemployed steal food. Then there's the cost of the greater glorification of the Fuhrer: whilst the Queen travels in a Range Rover or whatever aircraft the military has available (literally - if the Royal Flight is on operations, they charter Jim Smith's Air Taxi or join a BA scheduled flight), El Presidente Trump is so unpopular that he needs a motorcade of 20 armoured Lincolns and umpteen motorbikes to go shopping. Next come the banks: crooks who are too big to fail, so must get their bonuses when there is nobody left to cheat. Whatever is left can be spent on science, arts, or general bribery and chicanery.
The resources are divided in such a way as to best preserve the existence of the conscious system, according to the knowledge/understanding of the current system. If someday they are convinced that there is a better way to spend their resources to achieve their ultimate goal, due to improved knowledge or a change in their environment, they will change the budgetary structure/composition.
The trolley problem.What's the moral question? You can do something or nothing. Doing something will result in one death, doing nothing will result in five deaths. One is less than five. Failing to act can be considered negligent or even complicit.
spend their resources to achieve their ultimate goalThe ultimate goal of a politician is to be re-elected. This is achieved by judicious spending of other people's money, spouting meaningless slogans, and licking the arse of whoever can bring you the most votes.
The survey results show that slight modifications to the original trolley problem made many people switch their decisions. It means that people in the survey have different priorities or knowledge about the problem. For moral relativists, it would make no difference which decision you take, even if your decision is made solely by coin toss. But for the rest of us, there should be some basic principles to judge whether an action is considered moral or not.
The trolley problem.
What's the moral question? You can do something or nothing. Doing something will result in one death, doing nothing will result in five deaths. One is less than five. Failing to act can be considered negligent or even complicit.
Such decisions have to be made from time to time. A classic was the sacrifice of the Calais garrison to delay the German advance towards Dunkirk in 1940. Fortunately the Allies were commanded by soldiers, who are paid to find solutions, not philosophers, who are paid to invent problems.
Here is an example where eye for an eye doesn't work as moral guidance.
The moral imperative is universal as long as you accept the "eye for an eye" part. Ideology is philosophy and therefore is at best irrelevant and at worst poisonous. Species has some limitation as all animals have to eat things that were formerly alive, but AFAIK all "normal" humans prefer a clean kill, except for oysters.
I can see that you use a very narrow definition of morality, thus many problems most people regard as moral issues are not covered.
The trolley problem isn't a moral issue. It's one of statistics.
The golden rule has limitations when dealing with asymmetrical relationships, such as parents to kids, humans to animals, or the normal to the disabled.
Eye for an eye is even narrower, since it only deals with negative behavior: it only speaks about what shouldn't be done, while saying nothing about what should be done.
It can only happen in a democratic society. Moreover, what would they do if they got reelected? Can they just rest in peace? If not, then it can't be their actual ultimate/terminal goal.
spend their resources to achieve their ultimate goal
The ultimate goal of a politician is to be re-elected. This is achieved by judicious spending of other people's money, spouting meaningless slogans, and licking the arse of whoever can bring you the most votes.
Astute demagogues (Hitler, Thatcher, Blair, Trump) have no interest in promoting cooperative behaviour. Defending the electorate from "the enemy within" (Jews, coalminers...), or inventing a new external enemy (Argentinians, Iraqis, Mexicans...) can be a vote winner. The trick, of course, is to choose an enemy you can defeat.
Here is an example where eye for an eye doesn't work as moral guidance.
An old man rapes his own little kid many times over a period of ten years.
Let the punishment fit the crime. There has never been a problem recruiting a public hangman.
Here is another one.
A man borrows some money and uses it for gambling. He dies before paying off the debt.
A man kills his neighbor's dog for being noisy.
Wrong, of course. He should have spent a fortune getting a court order to have the dog destroyed by a professional. How else can lawyers make a living?
It can only happen in a democratic society. Moreover, what would they do if they got reelected? Can they just rest in peace? If not, then it can't be their actual ultimate/terminal goal.
"All political careers end in failure" (Churchill). Or death (Calverd). It's a bit like skiing - you proceed to ever more difficult and dangerous runs until you break something. But what a ride!
Deception to gain political power only works if the constituents are gullible enough to believe it.
Never underestimate the gullibility of the electorate. "Make America Great" my arse. WTF does that actually mean? Destroy the social fabric, support mass murder, pardon criminals, and put ignorant prejudiced scum on the Supreme Court bench. It's a vote winner!
They can systematically dumb down their people, but that would bring unwanted consequences in the long term.
In Thatcher's case, dementia. In Blair's case, loadsamoney. The Nazi high command enjoyed feasts and adulation up to the point where the Red Army were literally breaking the door down. Cologne, Dresden, Hamburg...just show up and say something defiant over the smouldering ruins, and das volk will cheer as always.
So no drug dealer or pimp would vote Republican. Why not? Surely these are the very people who favour private enterprise and low taxes? Or are they hoping for state-funded addiction and prostitution in the Land of the Free?
No, either party prosecutes such crimes. The Democrats for some reason gain the votes from the Mexican immigrants. Controlled immigration can be like gerrymandering.
Human Nature lays out these tantalizing possibilities alongside some even more far-out applications, like Crispr-ing pigs to grow human organs. Then viewers spend time with Steven Hsu, the chief scientific officer at Genomic Prediction, a company that generates genetic scorecards for prospective parents’ IVF embryos. Hsu believes that using Crispr to create children free of disease will one day be routine, and that parents who leave their genetic recombination up to chance will be the ones deemed unethical by societies of the future.
https://www.wired.com/story/crisprs-origin-story-comes-to-life-in-a-new-documentary/
those on the left are usually corrupted by money, those on the right by sex.
If impeached presidents are useful as an indicator, then the US would be a different story.
How can this rule help solve moral problems such as the trolley problem?
99.9% of the morality of the trolley problem is resolved by:
The concept of IQ has been around for more than a century.
EQ or "Emotional Quotient" has been around for much less time, but it relates to an emotional connection with people, rather than an intellectual connection.
99.9% of the morality of the trolley problem is resolved by:
I don't know how you came up with that number. It seems like you only considered the original version of the trolley problem, where it happens accidentally. But there are variations where it is deliberately set up by villains, as in superhero movies.
- An annual thorough inspection of the brakes, lights, windscreen wipers, etc...
- A several-times daily check of the engine, brakes etc when each new driver starts his/her shift.
- Keeping to the speed limit appropriate for the conditions
- Reporting any brake problems as soon as they occur, rather than waiting until there are 6 people tied to the tracks.
The US is weird in many ways, beginning with pinning the wrong colors on their political parties and ending up with electing a drooling idiot as president, despite his coming second in the popular vote. Impeachment for sexual shenanigans is quite absurd: any modern French or British politician would say "so what?" as long as there was no compromise of national security.
those on the left are usually corrupted by money, those on the right by sex.
If impeached presidents are useful as an indicator, then the US would be a different story.
EQ or "Emotional Quotient" has been around for much less time, but it relates to an emotional connection with people, rather than an intellectual connection.
Whether a leader can be emotionally connected to millions of people is an open question...
See: https://en.wikipedia.org/wiki/Emotional_intelligence
The Oxford Dictionary definition of emotion is "A strong feeling deriving from one's circumstances, mood, or relationships with others."[22] Emotions are responses to significant internal and external events.[23]
https://en.wikipedia.org/wiki/Emotion#Definitions
Emotions can be occurrences (e.g., panic) or dispositions (e.g., hostility), and short-lived (e.g., anger) or long-lived (e.g., grief).[24] Psychotherapist Michael C. Graham describes all emotions as existing on a continuum of intensity.[25] Thus fear might range from mild concern to terror or shame might range from simple embarrassment to toxic shame.[26] Emotions have been described as consisting of a coordinated set of responses, which may include verbal, physiological, behavioral, and neural mechanisms.[27]
Emotions have been categorized, with some relationships existing between emotions and some direct opposites existing. Graham differentiates emotions as functional or dysfunctional and argues all functional emotions have benefits.[28]
In some uses of the word, emotions are intense feelings that are directed at someone or something.[29] On the other hand, emotion can be used to refer to states that are mild (as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line of research looks at the meaning of the word emotion in everyday language and finds that this usage is rather different from that in academic discourse.[30]
In practical terms, Joseph LeDoux has defined emotions as the result of a cognitive and conscious process which occurs in response to a body system response to a trigger.[31]
While the concept of intelligence is meant to represent problem-solving capability, the concept of consciousness includes the ability to determine which problems to solve first.
It is generally assumed that, given the same amount of information/knowledge, people with higher intelligence are more likely to solve problems, and to solve them more quickly, than those with lower intelligence. So some knowledge and wisdom are excluded from the measurement of intelligence. We can get a high score in an IQ test without knowing about Maxwell's equations or the history of the USA. Our physical prowess doesn't seem to matter either.
A disability is any condition that makes it more difficult for a person to do certain activities or interact with the world around them. These conditions, or impairments, may be cognitive, developmental, intellectual, mental, physical, sensory, or a combination of multiple factors. Impairments causing disability may be present from birth or occur during a person's lifetime.
https://en.wikipedia.org/wiki/Disability
While the concept of intelligence is meant to represent problem-solving capability, the concept of consciousness includes the ability to determine which problems to solve first.
Brain-implanted rats and human addicts will solve the problem of getting the next fix rather than getting the next meal. This behaviour may seem illogical to you, but if you use it to determine consciousness, you are applying your arbitrary values to another entity in a different environment, so it's subjective. Think about a parent who knowingly sacrifices himself to save a child: same outcome (self destruction) for the same stimulus (feeling good).
The difference is the outcome in the long run. The sacrifice of parents is compensated by the survival of children, who inherit most of the parents' characteristics, probably some improvements, and the accumulated knowledge of the society. Without adequate compensation, self-destruction is always bad behavior.
The compensation may be relief of chronic pain, emotional suffering, or obvious looming disaster. Or simply to give a lifetime's accumulated wealth to one's children instead of wasting it on terminal "care". I fully intend to take my own life rather than suffer pain and indignity.
With adequate knowledge, we should be able to kill pain without unintended side effects.
Who authorised any of the aforementioned old perverts to judge "adequate"?
The conscious agents who are still alive in the future, just like we judge actions of people from previous generations.
Several years after Saul’s victory against the Philistines at Michmash Pass, Samuel instructs Saul to make war on the Amalekites and to "utterly destroy" them,[14] in fulfilment of a mandate set out in Deuteronomy 25:19:
When the Lord your God has given you rest from all your enemies on every hand, in the land that the Lord your God is giving you as an inheritance to possess, you shall blot out the remembrance of Amalek from under heaven; do not forget.
Having forewarned the Kenites who were living among the Amalekites to leave, Saul goes to war and defeats the Amalekites. Saul kills all the men, women, children and poor quality livestock, but leaves alive the king and best livestock. When Samuel learns that Saul has not obeyed his instructions in full, he informs Saul that God has rejected him as king due to his disobedience. As Samuel turns to go, Saul seizes hold of his garments and tears off a piece; Samuel prophesies that the kingdom will likewise be torn from Saul. Samuel then kills the Amalekite king himself. Samuel and Saul each return home and never meet again after these events (1 Samuel 15:33-35).
Now there's a problem! Some laws are made by politicians or perverts for their own aggrandisement, some for the sake of social cohesion, and some as an emergency provision. The case you quote suggests personal aggrandisement: the war was over and the prophecy was to "blot out the remembrance", i.e. to re-educate, not eradicate, the population.
Here is the more complete quote.
Not much evidence of an acceptable moral standard in the statute books, nor the bible, I fear.
Several years after Saul’s victory against the Philistines at Michmash Pass, Samuel instructs Saul to make war on the Amalekites and to "utterly destroy" them,[14] in fulfilment of a mandate set out in Deuteronomy 25:19:
I don't know how you translate that into re-education. Let's scrutinize this.
When the Lord your God has given you rest from all your enemies on every hand, in the land that the Lord your God is giving you as an inheritance to possess, you shall blot out the remembrance of Amalek from under heaven; do not forget.
Saul kills all the men, women, children and poor quality livestock, but leaves alive the king and best livestock. When Samuel learns that Saul has not obeyed his instructions in full
So, the instructions are to kill all the men (including the king), women, children and livestock (either poor or best quality). Saul did kill all the men (except the king), women, children and livestock (except the best quality), thus Saul has not obeyed his instructions in full.
you shall blot out the remembrance of Amalek from under heaven;
I don't know how you translate that into re-education.
The trolley problem demonstrates just how dire the coronavirus pandemic is becoming — with a touch of surrealist humor, of course.
https://mashable.com/article/trolley-problem-coronavirus-meme/
So. I was reading the London Review of Books the other day and came across this passage by the philosopher Kieran Setiya:
Some of the most striking discoveries of experimental philosophers concern the extent of our own personal inconsistencies . . . how we respond to the trolley problem is affected by the details of the version we are presented with. It also depends on what we have been doing just before being presented with the case. After five minutes of watching Saturday Night Live, Americans are three times more likely to agree with the Tibetan monks that it is permissible to push someone in front of a speeding train carriage in order to save five. . . .
I’m not up on this literature, but I was suspicious. Watching a TV show for 5 minutes can change your view so strongly?? I was reminded of the claim from a few years ago, that subliminal smiley faces had huge effects on attitudes toward immigration—it turns out the data showed no such thing. And I was bothered, because it seemed that a possibly false fact was being used as part of a larger argument about philosophy. The concept of “experimental philosophy”—that’s interesting, but only if the experiments make sense.
And, just to be clear, I agree that there’s nothing special about an SNL video or for that matter about a video at all. My concern about the replication studies is more of a selection issue: if a new study doesn’t replicate the original claim, then a defender can say it’s not a real replication. I guess we could call that “the no true replication fallacy”! Kinda like those notorious examples where people claimed that a failed replication didn’t count because it was done in a different country, or the stimulus was done for a different length of time, or the outdoor temperature was different.
The trolley problem and its variations, used as tools to find moral principles, have their benefits as well as limitations. At least they give some sense of practicality by placing us in possible real-world situations which require us to make moral decisions, instead of just imagining abstractions to weigh which moral principles should be prioritized over others. But they also introduce uncertainty about the cause-and-effect relationships of the available actions in some people's minds. Some people try to find a third option to break the dilemma.
The real question is, what did they find and how do these findings relate to the larger claim?
And the answer is, it’s complicated.
First, the two new studies only look at the footbridge scenario (where the decision is whether to push the fat man), not the flip-the-switch-on-the-trolley scenario, which is not so productive to study because most people are already willing to flip the switch. So the new studies do not allow comparison of the two scenarios. (Strohminger et al. used 12 high-conflict moral dilemmas; see here)
Second, the two new studies looked at interactions rather than main effects.
"The Trolley Problem"—as the above situation and its related variations are called—is a mainstay of introductory ethics courses, where it is often used to demonstrate the differences between utilitarian and Kantian moral reasoning. Utilitarianism (also called consequentialism) judges the moral correctness of an action based solely on its outcome. A utilitarian should switch the tracks. Just do the math: One dead is better than five, in terms of outcomes. Kantian, or rule-based, ethics relies on a set of moral principles that must be followed in all situations, regardless of outcome. A Kantian might not be able to justify switching the track if, say, their moral principles hold actively killing someone to be worse than being a bystander to death.
The rise of autonomous vehicles has given the thought experiment a renewed urgency. If a self-driving car has to choose between crashing into two different people—or two different groups of people—how should it decide which to kill, and which to spare? What value system are we coding into our machines?
These questions about autonomous vehicles have, for years, been haunting journalists and academics. Last month, the Massachusetts Institute of Technology released the results of its "Moral Machine," an online survey of two million people across 200 countries, demonstrating their preferences for, well, who they'd prefer a self-driving car to kill. Should a car try to hit jaywalkers, rather than people following the rules for crossing? Senior citizens rather than younger people? People in better social standing than those less well-regarded?
One concern I have is with regard to how the moral machine project has been publicized is that, for ethicists, looking at what other cultures think about different ethical questions is interesting, but [that work] is not ethics. It might cause people to think that all that ethics is is just about surveying different groups and seeing what their values are, and then those values are the right ones. I'm concerned about moral relativism, which is already very troubling with our world, and this may be playing with that. In ethics, there's a right and there's a wrong, and this might confuse people about what ethics is. We don't call people up and then survey them.
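As a toy illustration of the contrast between outcome-based and rule-based reasoning described in the quoted passage above (the casualty numbers and the single rule are invented, and neither function is a serious model of utilitarianism or Kantian ethics):

Code (Python):

# Toy contrast between the two styles of moral reasoning described above.
# Numbers and the single "rule" are illustrative only.

def utilitarian_choice(options: dict) -> str:
    """Pick the option with the fewest expected deaths, regardless of how they occur."""
    return min(options, key=lambda name: options[name]["deaths"])

def rule_based_choice(options: dict) -> str:
    """Reject options that require actively killing; among the rest, pick the fewest deaths."""
    permitted = {name: o for name, o in options.items() if not o["requires_active_killing"]}
    pool = permitted or options   # if every option violates the rule, fall back to all of them
    return min(pool, key=lambda name: pool[name]["deaths"])

footbridge = {
    "push the man": {"deaths": 1, "requires_active_killing": True},
    "do nothing":   {"deaths": 5, "requires_active_killing": False},
}
print(utilitarian_choice(footbridge))  # push the man
print(rule_based_choice(footbridge))   # do nothing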
The video is titled "The Self - A Thought Experiment". Professor Patrick Stokes of Deakin University gives a thought experiment from Thomas Nagel. This comes from a talk given at the Ethics Centre, from an episode of the podcast The Philosopher's Zone.
"The Trolley Problem"—as the above situation and its related variations are called—is a mainstay of introductory ethics courses, where it is often used to demonstrate the differences between utilitarian and Kantian moral reasoning. Utilitarianism (also called consequentialism) judges the moral correctness of an action based solely on its outcome. A utilitarian should switch the tracks. Just do the math: One dead is better than five, in terms of outcomes. Kantian, or rule-based, ethics relies on a set of moral principles that must be followed in all situations, regardless of outcome. A Kantian might not be able to justify switching the track if, say, their moral principles hold actively killing someone to be worse than being a bystander to death.I wonder what a Kantian would think if the 6 people on the track are equally valuable to him, e.g. all of them are his own twin kids. Will he let 5 of them die for his principle?
Typical philosopher's problem. Based on a dangerously faulty premise! The list of everything in the universe must include the list itself, but the existence of the list is itself a fact that must now be added to the list, so we must add the fact that we have added a fact to the list.....
But a philosopher would set that aside, allowing an infinitely expanding list (on the basis that cogito ergo sum applies also to lists). Now look yourself up in the list. You are doing something that isn't already on the list, so we have to add that to the description of you, ad infinitum... The problem becomes one of mathematics: you can't define "you" on the basis of that particular model. It's an inherently crap model because it imposes divergency on any proposed solution.
In many situations we don't need infinite precision. We can often make good decisions with finite information.
Precision isn't the problem. It's the more fundamental issue of the properties of a set which is a member of itself - maths, not philosophy or morals!
I conclude that their purpose is to preserve the existence of consciousness in objective reality.
I bet you can't define any of those words!
I bet you can't define any of those words!
We can look up each word in a dictionary to find its definition. Some words may have different meanings according to context, and the meanings of words may change over time, following the evolution of languages.
Without delving too deeply into the definition of morality or ethics, I think we can usefully approach the subject through "universal". The test is whether any person considered normal by his peers, would make the same choice or judgement as any other in a case requiring subjective evaluation.
their purpose is to preserve the existence of consciousness in objective reality.
I can't think of better words to represent what you mean because I have no idea what you mean!
for moral rules... I conclude that their purpose is to preserve the existence of consciousness in objective reality.
Many species have been observed to have rules of moral behavior that work for them.
How about moral rules being the lubricant of society?
Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy and avoiding suicidal behaviors.
Mass murder in Jonestown
https://en.wikipedia.org/wiki/Jim_Jones#Mass_murder_in_Jonestown
Houses in Jonestown, Guyana, the year after the mass murder-suicide, 1979
Later that same day, 909 inhabitants of Jonestown,[94] 304 of them children, died of apparent cyanide poisoning, mostly in and around the settlement's main pavilion.[95] This resulted in the greatest single loss of American civilian life (murder + suicide, though not on American soil) in a deliberate act until the September 11 attacks.[96] The FBI later recovered a 45-minute audio recording of the suicide in progress.[97]
On that tape, Jones tells Temple members that the Soviet Union, with whom the Temple had been negotiating a potential exodus for months, would not take them after the airstrip murders. The reason given by Jones to commit suicide was consistent with his previously stated conspiracy theories of intelligence organizations allegedly conspiring against the Temple, that men would "parachute in here on us," "shoot some of our innocent babies" and "they'll torture our children, they'll torture some of our people here, they'll torture our seniors." Jones's prior statements that hostile forces would convert captured children to fascism would lead many members who held strong opposing views to fascism to view the suicide as valid. [98]
With that reasoning, Jones and several members argued that the group should commit "revolutionary suicide" by drinking cyanide-laced grape-flavored Flavor Aid. Later-released Temple films show Jones opening a storage container full of Kool-Aid in large quantities. However, empty packets of grape Flavor Aid found on the scene show that this is what was used to mix the solution, along with a sedative. One member, Christine Miller, dissents toward the beginning of the tape.[98]
When members apparently cried, Jones counseled, "Stop these hysterics. This is not the way for people who are socialists or communists to die. No way for us to die. We must die with some dignity." Jones can be heard saying, "Don't be afraid to die," that death is "just stepping over into another plane" and that it's "a friend." At the end of the tape, Jones concludes: "We didn't commit suicide; we committed an act of revolutionary suicide protesting the conditions of an inhumane world."[98]
According to escaping Temple members, children were given the drink first by their own parents; families were told to lie down together.[99] Mass suicide had been previously discussed in simulated events called "White Nights" on a regular basis.[83][100] During at least one such prior White Night, members drank liquid that Jones falsely told them was poison.[83][100]
I wasn't talking about a set which is a member of itself. I posted the video to show that the more information we have, the more objective we can become.
In many situations we don't need infinite precision. We can often make good decisions with finite information.
Precision isn't the problem. It's the more fundamental issue of the properties of a set which is a member of itself - maths, not philosophy or morals!
The cogito ergo sum provides subjective certainty as a starting point. To get to objective certainty, we need to collect and assemble more information and knowledge to build an accurate and precise model of objective reality. To overcome subjectivity, our model of objective reality doesn't necessarily need to contain complete information about itself. It only needs to contain a representation of itself in the model. A Windows desktop is a commonly seen example.
The progress in building better AI, and toward AGI, will eventually get closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.
Quote: The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we're moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We're also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals. https://pathmind.com/wiki/neural-network
Quote: In some circles, neural networks are thought of as "brute force" AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can't make assumptions about functional dependencies between output and input.
That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
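To make the gradient-descent idea in that quote concrete, here is a minimal sketch in Python. It fits a single weight to a handful of made-up data points; the data, learning rate and iteration count are invented for illustration and are not taken from the pathmind article.

```python
# Minimal gradient-descent sketch: fit y = w * x to toy data by
# repeatedly stepping the weight w against the error gradient,
# instead of exhaustively trying every possible weight value.
# The data, learning rate and iteration count are illustrative only.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]   # roughly y = 2x

w = 0.0                      # start from a "blank slate"
learning_rate = 0.01

for step in range(200):
    # Mean squared error: L(w) = mean((w*x - y)^2)
    # Its gradient with respect to w is: mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad          # move against the gradient

print(round(w, 3))   # converges near 2.0 without searching all weights
```

Each update only needs the local gradient, which is why the number of updates grows far more slowly than the size of the weight space the quote refers to.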
We can start with a narrow and simple definition of consciousness which is widely accepted, such as in a clinical context. Immediately we will realize that it is too narrow to be useful for determining moral rules. We clearly need to extend it, as I've shown in previous posts here: https://www.thenakedscientists.com/forum/index.php?topic=75380.msg591376#msg591376
Quote from: hamdani yusuf: for moral rules... I conclude that their purpose is to preserve the existence of consciousness in objective reality.
Many species have been observed to have rules of moral behavior that work for them.
But we can't easily define consciousness in humans, let alone define what it means for other species (even familiar ones like the domesticated dog).
- Of course, the anthropocentric chauvinists default to "consciousness is unique to humans..."
Given that superintelligence will one day be technologically feasible, will people choose to develop it? This
question can pretty confidently be answered in the affirmative. Associated with every step along the road to
superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next
generation of hardware and software, and it will continue doing so as long as there is a competitive pressure
and profits to be made. People want better computers and smarter software, and they want the benefits these
machines can help produce. Better medical drugs; relief for humans from the need to perform boring or
dangerous jobs; entertainment—there is no end to the list of consumer-benefits. There is also a strong military
motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where
technophobics could plausibly argue "hither but not further."
—NICK BOSTROM, “HOW LONG BEFORE SUPERINTELLIGENCE?” 1997
It is hard to think of any problem that a superintelligence could not either solve or at least help us solve.
Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a
superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a
superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through
the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also
create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could
assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful
gameplaying, relating to each other, experiencing, personal growth, and to living closer to our ideals.
—NICK BOSTROM, “ETHICAL ISSUES IN ADVANCED ARTIFICIAL INTELLIGENCE," 2003
Will robots inherit the earth? Yes, but they will be our children.
—MARVIN MINSKY, 1995
Lawrence Kohlberg's stages of moral development constitute an adaptation of a psychological theory originally conceived by the Swiss psychologist Jean Piaget. Kohlberg began work on this topic while a psychology graduate student at the University of Chicago in 1958 and expanded upon the theory throughout his life.[1][2][3] https://en.wikipedia.org/wiki/Lawrence_Kohlberg%27s_stages_of_moral_development
The theory holds that moral reasoning, a necessary (but not sufficient) condition for ethical behavior,[4] has six developmental stages, each more adequate at responding to moral dilemmas than its predecessor.[5] Kohlberg followed the development of moral judgment far beyond the ages studied earlier by Piaget, who also claimed that logic and morality develop through constructive stages.[6][5] Expanding on Piaget's work, Kohlberg determined that the process of moral development was principally concerned with justice and that it continued throughout the individual's life, a notion that led to dialogue on the philosophical implications of such research.[7][8][2]
The six stages of moral development occur in phases of pre-conventional, conventional and post-conventional morality. For his studies, Kohlberg relied on stories such as the Heinz dilemma and was interested in how individuals would justify their actions if placed in similar moral dilemmas. He analyzed the form of moral reasoning displayed, rather than its conclusion and classified it into one of six stages.[2][9][10][11]
Kohlberg's six stages can be more generally grouped into three levels of two stages each: pre-conventional, conventional and post-conventional.[9][10][11] Following Piaget's constructivist requirements for a stage model, as described in his theory of cognitive development, it is extremely rare to regress in stages—to lose the use of higher stage abilities.[16][17] Stages cannot be skipped; each provides a new and necessary perspective, more comprehensive and differentiated than its predecessors but integrated with them.[16][17]
Kohlberg's Model of Moral Development
Level 1 (Pre-Conventional)
1. Obedience and punishment orientation
(How can I avoid punishment?)
2. Self-interest orientation
(What's in it for me?)
(Paying for a benefit)
Level 2 (Conventional)
3. Interpersonal accord and conformity
(Social norms)
(The good boy/girl attitude)
4. Authority and social-order maintaining orientation
(Law and order morality)
Level 3 (Post-Conventional)
5. Social contract orientation
6. Universal ethical principles
(Principled conscience)
The understanding gained in each stage is retained in later stages, but may be regarded by those in later stages as simplistic, lacking in sufficient attention to detail.
A woman was on her deathbed. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I'm going to make money from it.” So Heinz got desperate and broke into the man's laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?
From a theoretical point of view, it is not important what the participant thinks that Heinz should do. Kohlberg's theory holds that the justification the participant offers is what is significant, the form of their response. Below are some of many examples of possible arguments that belong to the six stages: [Image: Kohlberg Model of Moral Development - https://upload.wikimedia.org/wikipedia/commons/thumb/4/4a/Kohlberg_Model_of_Moral_Development.svg/800px-Kohlberg_Model_of_Moral_Development.svg.png]
Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy and avoiding suicidal behavior. As long as I don't burden others, I can see no wrong in getting drunk, overeating or killing myself by these or other means. Thus no first-order moral implications: the key is whether or not I burden others by my actions, which would indeed break the protective film of lubricant. In a civilised society these actions are not illegal, though they may exclude you from some aspects of a social contract through "contributory negligence".
I can't think of better words to represent what you mean because I have no idea what you mean! I recommend you read Ray Kurzweil's book The Singularity Is Near. You'll get a clear picture of what I mean there. What amazed me is that the book was already written in 2004, which shows me how insightful the author is.
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings which are not suicidal, and get some benefit from our death. Moral rules are not limited to being the lubricant of society. They also cover individual affairs, such as keeping oneself sober and healthy and avoiding suicidal behavior. As long as I don't burden others, I can see no wrong in getting drunk, overeating or killing myself by these or other means. Thus no first-order moral implications: the key is whether or not I burden others by my actions, which would indeed break the protective film of lubricant. In a civilised society these actions are not illegal, though they may exclude you from some aspects of a social contract through "contributory negligence".
You might compare Jonestown with Masada, where 1000 defenders committed suicide after a 2 year siege rather than be enslaved by the Romans. In the Jonestown case it was pretty clear that the defenders had committed crimes against others so the moral implications are clear, even if their personal judgement was suspended in favour of the ravings of a priest. At Masada the defenders had committed no wrong but made a strategic decision based on the known proclivities of the Romans who had been occupying the country for a couple of hundred years.
There have already been studies similar to using the level of consciousness to determine morality, such as Lawrence Kohlberg's stages of moral development. We can find a pattern there where the more developed moral stages show more inclusiveness and longer-term goals. That is unsurprising, since they require more thinking capability.
Johan (brother of Heinz) has a painful terminal illness with no hope of recovery. He has spent all his money on failed treatment and is now living on the street. Can you tell me the reason?
Wilhelm (their cousin) is stinking rich with no debts, and has four adult children with big student loans to repay, and the same genetic condition as Johan.
According to your ethics, W should top himself ASAP but J must stay in the gutter (and avoid being hit by a bus) until the Good Lord calls him to rest.
I disagree.
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings which are not suicidal, and get some benefit from our death.
Nobody else will benefit from J's death, but W's kids will inherit his fortune. The existence of any human being has its own costs and benefits to society. The loss of a life means more resources become available for others, but it also means the loss of that person's contributions. In principle, we can calculate the balance and find out which option brings more benefit for achieving the universal goal.
https://en.wikipedia.org/wiki/Heinz_dilemma
It's unfortunate that Kohlberg's theory doesn't help us in making a hard moral decision. It doesn't say what conditions would make one option better than its alternative.
The Heinz dilemma is a frequently used example in many ethics and morality classes. One well-known version of the dilemma, used in Lawrence Kohlberg's stages of moral development, is stated as follows[1]:
Quote: A woman was on her deathbed. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000, which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: "No, I discovered the drug and I'm going to make money from it." So Heinz got desperate and broke into the man's laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?
Quote: From a theoretical point of view, it is not important what the participant thinks that Heinz should do. Kohlberg's theory holds that the justification the participant offers is what is significant, the form of their response. Below are some of many examples of possible arguments that belong to the six stages: [Image attachment: https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=75380.0;attach=30644]
There is no universal goal in the case of suicide. The goal is to end or avert personal suffering by the most certain and final means. I think you've misunderstood my statement. Here are the more complete sentences from my post that you've cut.
Indeed the practical problem with decriminalising assisted suicide is to ensure that nobody is coerced towards death for the benefit of others. So here's a good moral problem: how do you distinguish between a truly voluntary Will (that includes the costs and reasonable profit of whoever assists - I've always wanted to own a comfortable suicide hostel) and excessive pressure from potential beneficiaries?
In my scenario J had nothing, contributed nothing, and simply lived off scraps in dustbins, so would not be permitted to kill himself by your code of ethics, whereas W's death would profit several people and could therefore be permitted or even encouraged by society. That's all wrong, surely?
Like any other rules, moral rules are also made to serve some purpose. For example, game rules are set to make the game more interesting for most people, so the game will keep being played. That's why we get things like handball and offside rules in football, or castling and en passant in chess. The consciousness in my post refers to the existence of known/verified conscious beings in the universe, not a particular subjective conscious agent. Hence, if the trend of technological advancement can be relied upon, my assertion would be:
Likewise for moral rules. I conclude that their purpose is to preserve the existence of consciousness in objective reality. Due to incomplete information and limited resources to perform actions, we need to deal with probability theory. Something is morally good if it can be demonstrated to increase the probability of preserving consciousness, and bad if it can be demonstrated to decrease the probability of preserving consciousness. Without adequate support, we can't decide whether something is morally good or bad (see the sketch below).
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings which are not suicidal, and get some benefit from our death.
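A minimal sketch, in Python, of how that probabilistic wording could be applied in practice: two options are compared by their estimated effect on the probability that consciousness is preserved, and the comparison stays undecided when the difference is within the margin of support we require, matching the "without adequate support" caveat above. The numbers and the margin are entirely invented for illustration.

```python
# Sketch of the probabilistic framing above: compare two options by their
# estimated effect on the probability that consciousness is preserved.
# All numbers here are invented for illustration.

def judge(p_if_a, p_if_b, margin):
    """Pick the option with the higher estimated probability of preserving
    consciousness, or stay undecided if the difference is smaller than the
    margin of support we require."""
    difference = p_if_a - p_if_b
    if abs(difference) <= margin:
        return "undecided: not enough support to call either option good or bad"
    return "option A is morally better" if difference > 0 else "option B is morally better"

# Made-up estimates: P(consciousness preserved | we choose this option)
print(judge(0.72, 0.64, margin=0.05))   # -> option A is morally better
print(judge(0.51, 0.49, margin=0.05))   # -> undecided
```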
My concern was in relation to this: Whatever J consumed to stay alive would become available for someone else. There would be less waste to the environment. IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings which are not suicidal, and get some benefit from our death.
Nobody apart from J will benefit from his suicide, so you say that is wrong, but W's children might encourage W to commit suicide for their benefit, which you say is right.
I beg to differ - and so does the law!
J was living out of waste bins. His suicide will only benefit the population of urban foxes. In your case, someone else gets some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2. More space. Less disease vector. Less sh1t and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be a better option, especially when he himself doesn't want to live anymore.
I think your moral code says that no matter how wretched, awful, unremittingly painful and pointless one's existence, suicide is only permitted if it benefits someone else. In my book, that is a disgusting attitude. Whose life is it?
Anyway, let's run with it. The kamikaze pilot has sworn to die for the greater glory of the Emperor. He has several choices, including defecting to the enemy, deliberately missing his target and crashing into the sea, killing a thousand enemy sailors, or even turning back to his base and wiping out the rest of the squadron. What would you do, and what would be the greater moral good? You may tackle the simpler problem of the suicide bomber if you wish. Given the knowledge of what would happen in the future, the option is obvious. He should defect to the enemy, giving them the information he has to help end the war as quickly as possible.
In your case, someone else gets some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2. More space. Less disease vector. Less sh1t and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be a better option, especially when he himself doesn't want to live anymore. But that would be the case for any suicide. So it's a universally good thing to do. I think we agree.
Given the knowledge of what would happen in the future, the option is obvious. He should defect to the enemy, giving them the information he has to help end the war as quickly as possible.
IMO, death is a technical problem, which should be solved technically. In your case, someone else gets some benefit from J's death, although it may not feel significant. There would be more O2 and less CO2. More space. Less disease vector. Less sh1t and urine. If J's existence can't compensate for the burden he brings to others, then letting him go would be a better option, especially when he himself doesn't want to live anymore. But that would be the case for any suicide. So it's a universally good thing to do. I think we agree.
The anthropic principle is a philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it. It should be obvious that suicidal behavior is self-defeating.
IMO, suicidal behavior can only be acceptable if we know that there are other conscious beings which are not suicidal, and get some benefit from our death. Consider an extreme situation that I posted here.
To get the most universal moral rule, we can test candidate rules against various situations and see which rules hold up across all of them (see the sketch after this example). In many ordinary situations, most common moral rules would pass. Fundamental rules must still hold in some extreme cases, such as trolley problems and the Heinz dilemma. If an exception can be justified when dealing with those extreme cases, that particular rule is not universally applicable.
Here is the most extreme case I can think of. A gamma-ray burst suddenly strikes Earth, killing every known conscious being except you, who are currently in a spaceship headed toward Mars.
You are the last conscious being in the universe. Your most fundamental moral duty is to survive. You'll need to improve yourself to be better at survival. You'll need to improve your knowledge and make better tools to help you survive. You may need to modify yourself, either genetically or by merging with robotics. You may need to create backups/clones to eliminate a single point of failure. You may spread to different places and introduce diversity into the system to prevent common-mode failure.
Once you have backups, your own survival is no longer the highest priority. That enables altruism: it's OK to sacrifice yourself if doing so improves the chance that your duplicates will continue to survive.
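The rule-testing procedure mentioned before this scenario can be sketched as a small harness in Python: each candidate rule is a function that either gives a verdict in a scenario or has to admit an exception, and a rule counts as universally applicable only if it survives every scenario, extreme cases included. The rules, scenarios and verdicts below are placeholders invented purely to illustrate the procedure, not to settle any of the dilemmas discussed in this thread.

```python
# Sketch of the rule-testing procedure: a candidate rule is "universally
# applicable" only if it gives a verdict (rather than needing an exception)
# in every test scenario, ordinary and extreme alike.
# The rules and scenarios below are invented placeholders.

scenarios = ["ordinary promise-keeping", "trolley problem", "Heinz dilemma"]

def rule_never_lie(scenario):
    # Placeholder: pretend this rule needs an exception in extreme cases.
    return scenario == "ordinary promise-keeping"

def rule_preserve_consciousness(scenario):
    # Placeholder: pretend this rule gives a verdict in every scenario.
    return True

candidate_rules = {
    "never lie": rule_never_lie,
    "preserve consciousness": rule_preserve_consciousness,
}

for name, rule in candidate_rules.items():
    survives_all = all(rule(s) for s in scenarios)
    status = "universally applicable" if survives_all else "not universal"
    print(f"{name}: {status}")
```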
My answer above didn't rely on presumptions. It was based on known facts about what would happen until long after the war ended. Given the knowledge of what would happen in the future, the option is obvious. He should defect to the enemy, giving them the information he has to help end the war as quickly as possible.
The essence of effective command is that the cannon fodder know nothing of value to the enemy. That way, prisoners become a burden rather than an asset.
Wars end when one side has won. Your solution presumes at least that the moral right is owned by the target.
It should be obvious that suicidal behavior is self-defeating. Unless your objective in life is to kill others (like a bee, a kamikaze or a suicide bomber) or to avoid an unpleasant future, in which case it can be 100% effective.
My answer above didn't rely on presumptions. It was based on known facts about what would happen until long after the war ended. You suggested that defection would be the morally correct decision as it would shorten the war.
IMO, death is a technical problem, which should be solved technically. No, it's the non-technical solution to the problem of overcrowding, mass starvation, and loss of capacity for independent survival.
You are the last conscious being in the universe. Your most fundamental moral duty is to survive. Duty to whom?
The bees don't go extinct because they only commit suicide to protect their duplicates. It should be obvious that suicidal behavior is self-defeating. Unless your objective in life is to kill others (like a bee, a kamikaze or a suicide bomber) or to avoid an unpleasant future, in which case it can be 100% effective.
It depends on which side you are on. If your side's ultimate goal isn't compatible with universal moral values, you had better leave as soon as possible. My answer above didn't rely on presumptions. It was based on known facts about what would happen until long after the war ended. You suggested that defection would be the morally correct decision as it would shorten the war.
The Calais garrison was ordered to fight to the last man to protect the retreat to Dunkirk. Obvious suicide. They could have surrendered or even defected to clearly superior forces, allowing the Nazis to reach Dunkirk, wipe out the Allied armies, and thus shorten WWII by about 3 years. In what way would that have been morally correct?
No, it's the non-technical solution to the problem of overcrowding, mass starvation, and loss of capacity for independent survival. All of those are technical problems which could be solved technically. They are due to a lack of good planning, which means available resources can't be distributed properly to achieve the desired result effectively and efficiently.
To prevent an unpleasant future, we can collectively build a system which can represent objective reality accurately and precisely. We already have a system that represents reality. It's called reality. And we don't seem very good at dealing with it.
The fact that you're still alive to write this post is evidence that you don't really think that suicide is a universally good moral action. So the fact that I'm not completely penniless is evidence that I don't think it is morally good to donate to charity, eh? Come on, mate, you can do better than that! Moral does not mean compulsory.
What would have happened if Hirohito hadn't surrendered? My father, with around 400,000 others, would have invaded Japan, and after several years of war and millions more deaths one side would have imposed martial law on the other.
The bees don't go extinct because they only commit suicide to protect their duplicates. Not a good example. Bees don't expect to commit suicide; their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive. When they (rarely) sting thick-skinned mammals, e.g. humans, the sting gets lodged in the skin and, if torn out, will kill the bee.
So it means that the bee's death after stinging an enemy is an unintended consequence rather than a desired result. A better result for them is when they can repel the enemy without killing themselves. The bees don't go extinct because they only commit suicide to protect their duplicates. Not a good example. Bees don't expect to commit suicide; their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive. When they (rarely) sting thick-skinned mammals, e.g. humans, the sting gets lodged in the skin and, if torn out, will kill the bee.
Sometimes you will see the bee lodged in your skin; if you allow the bee to spin round, or help it by holding it by the wings, it is possible for the sting to come out and the bee to survive.
Duty to whom? To future conscious beings who will bring the singularity into reality.
So it means that the bee's death after stinging an enemy is an unintended consequence rather than a desired result. A better result for them is when they can repel the enemy without killing themselves. Much better. When the hive is attacked by a predator such as a wasp, the guard bees will often use a technique called balling. A large number of them will surround the wasp, forming a ball with the wasp at the centre; they will then use their standard heat-generating technique of dislocating their wings and vibrating the wing muscles to generate heat - much like we do when shivering. The temperature at the centre of the ball is enough to kill the wasp.
We already have a system that represents reality. It's called reality. And we don't seem very good at dealing with it. What I mean is something we can use to predict the future and simulate what the consequences would be if we took some actions, so we can choose the options which would eventually bring us the desired results. For example, we have already sequenced the complete DNA of the coronavirus, but testing a vaccine still takes a long time. The system could speed up the trial-and-error process so we can get the result much faster.
Research is happening at breakneck speed. About 80 groups around the world are researching vaccines and some are now entering clinical trials. https://www.bbc.com/news/health-51665497
The first human trial for a vaccine was announced last month by scientists in Seattle. Unusually, they are skipping any animal research to test its safety or effectiveness
In Oxford, the first human trial in Europe has started with more than 800 recruits - half will receive the Covid-19 vaccine and the rest a control vaccine which protects against meningitis but not coronavirus
Pharmaceutical giants Sanofi and GSK have teamed up to develop a vaccine
Australian scientists have begun injecting ferrets with two potential vaccines. It is the first comprehensive pre-clinical trial involving animals, and the researchers hope to test humans by the end of April
However, no-one knows how effective any of these vaccines will be.
When will we have a coronavirus vaccine?
A vaccine would normally take years, if not decades, to develop. Researchers hope to achieve the same amount of work in only a few months.
Most experts think a vaccine is likely to become available by mid-2021, about 12-18 months after the new virus, known officially as Sars-CoV-2, first emerged.
That would be a huge scientific feat and there are no guarantees it will work.
Four coronaviruses already circulate in human beings. They cause common cold symptoms and we don't have vaccines for any of them.
So the fact that I'm not completely penniless is evidence that I don't think it is morally good to donate to charity, eh? It is evidence that you think there is something more important than donating to charity.
their sting has evolved to kill other insects, and they can sting them repeatedly without dying when protecting the hive. Off topic, but this is something that has always bothered me.
Desired by whom? If you don't class genocide or rape as a moral action, you have led yourself into a circular argument: a moral action must be desired by a moral person, that is, a person whose actions are moral... Desired by the conscious beings evaluating those actions, based on moral standards that they believe to be true. If those standards turn out to be in conflict with the universal moral standard, then they must have made one or more false assumptions.
The Nazis had a huge parliamentary majority. "Death to the infidel" is believed by millions, some of whom consider rape to be their prerogative. "Stone the Catholics" is a moral imperative for many Protestants. I've stated it in the opening of this thread.
You can't claim that any of these offensive groups are in conflict with the Universal Moral Standard until you have defined the UMS, so we are still in a circular argument!
I consider this topic a spinoff of my previous subject. So, I define a universal moral standard as a moral standard which can help to achieve the universal ultimate goal, which I discuss in a separate thread.
https://www.thenakedscientists.com/forum/index.php?topic=71347.0
It is split up because morality itself is quite complex and can generate a discussion too long to be covered there.
Still circular! You have now defined a moral rule as one that is not immoral! Read again carefully. I just showed that something intended to be moral can become immoral when it's based on false assumptions.
Samuel Johnson's definition of a net as "a reticulated assemblage of holes separated by string" was absurd but at least it was linear.
Human sacrifice to appease gods, the caste system, kamikaze... None of these assumptions has been falsified. The sun still rises over Essex even though virgin sacrifices are no longer possible, but that may be because the gods were sufficiently appeased by the few that our ancestors were able to find. The caste system persists, despite being outlawed. Kamikaze did exactly what it was intended to do - sink American ships with a kill ratio of hundreds to one, which is why it is still practised by idiots.
I think we've already proven that volcanic eruptions, earthquakes, storms, droughts, famines and eclipses are caused by natural phenomena rather than gods' wrath. Human sacrifice to appease gods, the caste system, kamikaze... None of these assumptions has been falsified. The sun still rises over Essex even though virgin sacrifices are no longer possible, but that may be because the gods were sufficiently appeased by the few that our ancestors were able to find. The caste system persists, despite being outlawed. Kamikaze did exactly what it was intended to do - sink American ships with a kill ratio of hundreds to one, which is why it is still practised by idiots.