You define it there as a spectrum (and I agree with that), but above you make it a binary thing where some critical threshold needs to be crossed. Where is that threshold? Just above a virus? No? Just humans? If so, how then is your definition not the usual one I mentioned?
What are moral rules except rules that help the survival rate of the group that defines the morals? That's not universal, that's the morals of the group. Cells follow the morals of the body and not anything larger than that. I'm not trying to be contradictory, just trying to illustrate the lack of difference between a human and anything else, and the complete lack of a code that comes from anywhere other than the group with which you relate. Yes, I'm a relativist, in far more ways than just moral relativism.
How do you judge if an action is morally right or wrong?
Is there something more important than your own life that you are willing to sacrifice for it?
Have you tried to expand the group that defines the moral rules?
Can you find a moral rule that's applicable to all human beings?
I have proposed expanding the group to all conscious beings.
Historically, the highest level of consciousness among beings has been increasing with time.
Who knows what humans will evolve into in the distant future.
By being a relativist, do you think that the perpetrators of 9/11 were moral in their own right because they followed the moral rules of their group?
What about human sacrifice by the Aztecs? The Holocaust by the Nazis? Slavery by the Confederacy? Human cannibalism by some cultures?
Why the word 'being'? What distinguishes a being from a non-being? Sure, it seems pretty straightforward with the sample of one that we have (it's a being if you're related to it), but that falls apart once we discover a new thing on some planet and have to decide if it's a being or not.
I've been taught them by parents, community, employer, etc.
What if the ebola virus were as sentient as us? What would the moral code for such a species be like? Would it be wrong for them to infect and kill a creature? Only if it's a human? I read a book that included a sentient virus, and also an R-strategist intelligence and more. Much of the storytelling concerned the conflicts in the morals each group found obvious.
So the subject doesn't know if what it's doing is right or wrong. Does this epistemological distinction matter? If some action is wrong, then doing that action is wrong, period, regardless of whether the thing doing it knows it's wrong or not.

What does wrong mean, anyway? Suppose I do something wrong, but don't know it. What does it mean that I've done a wrong thing? Sure, if there is some kind of consequence to be laid on me due to the action, then there's a distinction. I take the wrong turn in the maze and don't get the cheese. That makes turning left immoral, but only if there's a cheese one way and not the other? Just trying to get a bit of clarity on 'right/wrong/ought-to'.
I am not very familiar with the teachings of all these cultures, but one culture oppressing some other culture has been in the moral teachings of most groups I can think of, especially the religious ones. My mother witnessed the Holocaust and currently votes for it happening again. It only looks ugly in hindsight, and only if you lose. Notice everyone vilifies Hitler, but Lenin and Stalin get honored tombs, despite killing far more Jews and other undesirables. Translation: It is immoral to lose.
You can use other words such as 'things' if you'd like to.
How do you resolve when some of their teachings are contradictory to each other?
Actions with bad consequences are wrong. Actions known to have bad consequences, but done anyway, are immoral.
If someday it can be demonstrated that some viruses can reach that level of complexity, then so be it.
But if they show the tendency to destroy other conscious agents, especially those with a higher level of consciousness, they must be fought back.
Actions with bad consequences are wrong.
By concluding that morals are not universal. For one, a higher goal takes priority over a lower one when they indicate contradictory choices to be made. Even simple devices work that way.
In the case above, the high priority goal makes one choose an action that violates the lower priority goal, hence an action that is bad (for a greater good). Your statement above asserts that such actions are immoral. For instance, I injure a child (bad consequence) as a surgeon to prevent that child from dying of appendicitis. Your statement at face value says this is an immoral action. Better to do nothing and let the child die (worse consequence, but not due to explicit action on your part) leaving you morally intact, except doing nothing is also a choice. Maybe get a different surgeon to do the immoral thing of saving this kid's life.
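To make the priority point above concrete, here is a minimal sketch of my own (in Python; the goal names and priority numbers are invented for illustration, not taken from anyone's post): when two goals prescribe contradictory actions, a simple arbiter just acts on the highest-priority goal, which is how even simple devices resolve such conflicts.

```python
# Hypothetical sketch: resolving contradictory goals by priority.
# Goal names and priority values are invented for illustration only.

goals = [
    {"name": "save the child's life", "priority": 2, "action": "operate"},
    {"name": "do not injure the child", "priority": 1, "action": "do nothing"},
]

def choose_action(goals):
    """When goals conflict, act on the goal with the highest priority."""
    return max(goals, key=lambda g: g["priority"])["action"]

print(choose_action(goals))  # -> "operate"
```

On this scheme the surgery is chosen because the higher-priority goal wins, even though it violates the lower-priority one.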
How do you determine which priority is the higher one?
Performing surgery on the child is morally better than letting them die.
We'll be able to correct defects by gene editing in the future, so there's no need for any approach like eugenics to improve the species.
As for a universal moral code, I've already provided it several times in this thread without anyone appearing to notice. Morality is mathematics applied to harm management and it's all about calculating the harm:benefit balance.
It only applies to sentient things, but it applies to all of them, fleas and intelligent aliens all included.
It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there.
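Here is a minimal sketch of how I read that reduction (my own illustration, in Python; the participant names and harm/benefit numbers are invented, and the common scale is assumed): treat every sentient participant's outcomes as if one individual lived all those lives in turn, then score an action by its total benefit minus total harm.

```python
# Hypothetical sketch of the "reduce many participants to one" idea.
# Participants and numbers are made up for illustration only.

def net_score(action_outcomes):
    """Sum benefit minus harm across all participants, as if a single
    individual experienced every one of those outcomes in turn."""
    return sum(benefit - harm for harm, benefit in action_outcomes.values())

# outcomes[participant] = (harm, benefit) on some assumed common scale
do_surgery = {"child": (3, 10), "surgeon": (1, 0)}
do_nothing = {"child": (10, 0), "surgeon": (0, 1)}

for name, outcomes in [("do_surgery", do_surgery), ("do_nothing", do_nothing)]:
    print(name, net_score(outcomes))
# Whichever action has the higher net score is the one this scheme calls right.
```

Of course, the sketch only pushes the question back one step: someone still has to supply the harm and benefit numbers, which is exactly where the subjectivity objection below comes in.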
Is there a way to compute harm without being relative to a peer group? Humans seem to be causing a lot more harm than benefit, with an estimated genocide of 80% of the species on the planet in the Holocene extinction event. Any harm to a species like that would probably be viewed as a total benefit by all these other species.
A rock is not considered an agent of choice. A tree might be, but it gets difficult to justify it. How about a self-driving car? It meets the definition of slave. Does a true slave carry any moral responsibility? I almost say no.
Does the species need to consider the harm done to the environment/other species, or only harm done to its own kind? What if it has no concept of 'species' or 'kind', or possibly not even 'individual' or 'agent'?
Quote: "It's easy to understand the harm:benefit balance calculations for a single-participant system, and a multi-participant system can be reduced to a single-participant system just by considering all the sentient participants in it to be the same individual living all those lives in turn. The entirety of morality is right there."

I haven't read the entire thread. What has the response to this been? It's a good attempt. It's just that harm seems subjective. What's good for X is not necessarily good for Y, so its measure seems context-dependent.
Yes, by definition, actions with bad consequences are wrong. How in any way is this relevant to the discussion? If a consequence is deemed bad only by some group, then it is wrong only relative to that group. If it is bad period, then it's universal, but you've made no argument for that case with the statement here. I'm trying to get the discussion on track.