That's why laws and moral rules were created. They are supposed to tip the balance of cost and benefit so as to incentivize and encourage behaviors that benefit society as a whole in the long run, and to discourage those that don't. They should make even selfish actions bring benefits to society.
The Sagan standard is the adage that “extraordinary claims require extraordinary evidence” (a concept abbreviated as ECREE). This signifies that the more unlikely a certain claim is, given existing evidence on the subject, the greater the standard of proof that is expected of it. https://effectiviology.com/sagan-standard-extraordinary-claims-require-extraordinary-evidence/
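The Sagan standard maps naturally onto Bayesian updating: the lower the prior probability of a claim, the stronger the evidence needed to make the posterior substantial. A minimal sketch (the priors and likelihood values below are illustrative assumptions, not taken from the linked article):

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' theorem: P(claim | evidence)."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# An ordinary claim (prior 0.5) with modest evidence (9:1 likelihood ratio)
print(posterior(0.5, 0.9, 0.1))       # 0.9 -- modest evidence suffices
# An extraordinary claim (prior 0.001) with the same modest evidence
print(posterior(0.001, 0.9, 0.1))     # ~0.009 -- still very unlikely
# The same extraordinary claim with far stronger evidence (9000:1 ratio)
print(posterior(0.001, 0.9, 0.0001))  # ~0.90 -- now credible
```

The same evidence that settles an ordinary claim barely moves an extraordinary one; that asymmetry is the whole content of the adage.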
Tesla has devised a formula for calculating the insurance premium for Tesla owners. “A default Safety Score of 90 is used to calculate the premium for your first two months,” says Tesla in its quote. This means that irrespective of your Safety Score above 90, Tesla will use the default score of 90 for the first two months. After two months, the premium rates will vary based on the Safety Score. If the score is 90, the premium will be $89.16/mo. If an owner maintains a score of 100, the premium will drop to $53.90/mo.
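Tesla has not published the exact formula, but the two data points in the quote (score 90 → $89.16/mo, score 100 → $53.90/mo) are enough for a rough sketch. The linear interpolation between those points is purely an assumption for illustration, not Tesla's actual method:

```python
def estimated_premium(safety_score, months_insured):
    """Rough estimate of a monthly Tesla insurance premium in dollars.

    Hypothetically assumes a linear drop between the two quoted points:
    score 90 -> $89.16/mo, score 100 -> $53.90/mo.
    """
    if months_insured <= 2:
        safety_score = 90  # default score used for the first two months
    score = max(90, min(100, safety_score))
    # Linear interpolation between the two quoted premiums
    return round(89.16 + (score - 90) * (53.90 - 89.16) / 10, 2)

print(estimated_premium(100, 1))  # 89.16 -- first two months use the default
print(estimated_premium(100, 6))  # 53.9
print(estimated_premium(95, 6))   # 71.53 -- interpolated (hypothetical)
```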
Morals are basically logic combined with goals. Consensus on morals is harder to achieve because the terminal goals are often kept obscure. The cause-and-effect relationships among different parameters are not perfectly known, and may involve uncertainty, chaos, and black swan events.
Moral relativism or ethical relativism (often reformulated as relativist ethics or relativist morality) is a term used to describe several philosophical positions concerned with the differences in moral judgments across different peoples and their own particular cultures. An advocate of such ideas is often labeled simply as a relativist for short. In detail, descriptive moral relativism holds only that people do, in fact, disagree fundamentally about what is moral, with no judgment being expressed on the desirability of this. Meta-ethical moral relativism holds that in such disagreements, nobody is objectively right or wrong.[1] Normative moral relativism holds that because nobody is right or wrong, everyone ought to tolerate the behavior of others even when considerably large disagreements about the morality of particular things exist.[2]

Moral relativism is generally posed as a direct antithesis to "moral idealism" (also known as "ethical idealism" and "principled idealism"). Through an idealistic framework, examples being that of Kantianism and other doctrines advocated during the Enlightenment era, certain behavior seen as contrary to higher ideals often gets labeled as not only morally wrong but fundamentally irrational. However, like many fuzzy concepts, the distinction between idealist and relativist viewpoints is frequently vague.[citation needed]

Moral relativism has been debated for thousands of years across a variety of contexts during the history of civilization. Arguments of particular notability have been made in areas such as ancient Greece and historical India while discussions have continued to the present day. Besides the material created by philosophers, the concept has additionally attracted attention in diverse fields including art, religion, and science.

https://en.m.wikipedia.org/wiki/Moral_relativism
I think you are looking at Tesla's insurance scheme through the wrong end of the telescope. The theoretical object of an insurance premium is to collect enough money from good drivers to pay out for the damage done by the bad ones. You then adjust the premiums to encourage good drivers to contribute, and screw the bad ones to pay more. There's nothing selfish about careful driving. Quite the opposite, in fact. So the reward is for being selfless and considerate.
Selfish drivers cause accidents and get penalised by their insurers and the courts.
What makes you think that your side of the telescope is the right one? Imagine a society whose moral rules penalize careful driving severely. What would a selfish driver do?
There's nothing selfish about careful driving.
Previously I've described the universal moral standard based on the universal terminal goal as the morality with the fewest requirements/necessary assumptions. The minimum requirements are implied by the definitions of each word in the universal terminal goal itself.

This can be seen as the basic foundation for other moralities. In other words, other moralities can be aligned with the universal moral standard by adding some conditionals or assumptions that correctly represent objective reality. If those requirements are not met, then they deviate from the universal moral standard, hence are universally immoral.
The story of Santa Claus can make even selfish kids behave well.
Quote from: hamdani yusuf on 03/11/2021 07:22:39
The story of Santa Claus can make even selfish kids behave well.
But only for selfish motives. When did the promise of gifts inspire a selfish scumbag to think of others?
Descartes told us that the only thing a conscious agent can be sure of is its own existence; any other information can be misleading. The ultimate justification that a conscious entity can have for a piece of information is its necessity for enabling the existence of the conscious entity. For example, we accept the existence of the coronavirus because this information is necessary to create effective treatments against the negative effects brought by the virus and keep us alive.
Quote from: hamdani yusuf on 10/10/2021 07:31:52
Part of the magic of reinforcement learning relies on regularly rewarding the agents for actions that lead to a better outcome. That model works great in dense reward environments like games, in which almost every action corresponds to a specific feedback, but what happens if that feedback is not available? In reinforcement learning this is known as a sparse rewards environment and, unfortunately, it's a representation of most real-world scenarios. A couple of years ago, researchers from Google published a new paper proposing a technique for achieving generalization with reinforcement learning that operates in sparse reward environments.

I bring this here from my other thread because it can help us understand the fundamental requirements for sustainable moral standards.

Survival of consciousness is the universal ultimate reward. But its success or failure may not be obvious for billions of years in a world where consciousness can naturally emerge. Natural consciousness came up with survival of species and individual survival as meta rewards or instrumental goals. The results can be found over shorter periods, e.g. millions of years or decades, respectively.

Pain avoidance and pleasure from eating food have made good meta rewards for individual survival, while sexual desire and instinctive care for the young have made good meta rewards for survival of the species.
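The contrast between a sparse terminal reward and denser meta rewards can be shown in a toy simulation. Everything here (the one-dimensional random walk, the reward values) is an illustrative assumption of mine, not taken from the Google paper:

```python
import random

def run_episode(shaped, steps=50, goal=10):
    """Random walk on a line: agent starts at 0, terminal reward at `goal`.

    Sparse setting: reward is given only on reaching the goal, so most
    episodes return exactly 0 and the learner gets no signal to follow.
    Shaped setting: a small "meta reward" for each step that moves closer
    to the goal, standing in for instrumental goals such as pain
    avoidance or pleasure from food.
    """
    pos, total = 0, 0.0
    for _ in range(steps):
        old = pos
        pos += random.choice([-1, 1])
        if shaped:
            total += 0.1 if abs(goal - pos) < abs(goal - old) else -0.1
        if pos == goal:
            total += 10.0  # the sparse terminal reward
            break
    return total

random.seed(0)
sparse = [run_episode(shaped=False) for _ in range(1000)]
shaped = [run_episode(shaped=True) for _ in range(1000)]
# Most sparse episodes return exactly 0 -- no feedback to learn from.
print(sum(1 for r in sparse if r == 0) / len(sparse))
# Far fewer shaped episodes end with zero feedback.
print(sum(1 for r in shaped if r == 0) / len(shaped))
```

The shaped agent receives feedback on nearly every episode, which is exactly the role the post assigns to instrumental goals: they turn a reward that might take "billions of years" to observe into signals available on much shorter timescales.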
Part of the magic of reinforcement learning relies on regularly rewarding the agents for actions that lead to a better outcome. That model works great in dense reward environments like games, in which almost every action corresponds to a specific feedback, but what happens if that feedback is not available? In reinforcement learning this is known as a sparse rewards environment and, unfortunately, it's a representation of most real-world scenarios. A couple of years ago, researchers from Google published a new paper proposing a technique for achieving generalization with reinforcement learning that operates in sparse reward environments.
I heard that people are getting sick of thoughts and prayers.
When a conscious entity has an adequate level of consciousness, it will start to realize that there are other conscious entities besides itself. Some of them are similar to it in many aspects, while others can be very different.

It will then realize that other conscious entities stop existing. It will not be able to realize when it has stopped existing itself, but it can reason that it too may stop existing in the future. It can conclude that one of the best strategies to preserve consciousness is to create backups or duplicates of itself.
In other words, kin morality for a conscious entity works based on the following assumptions:
1. The conscious entity embracing it is mortal.
2. The other conscious entities with traits similar to its own are the best candidates to extend consciousness into the future after its own death.
When a conscious entity has adequate level of consciousness, it will start to realize that there are other conscious entities besides itself.
https://www.youtube.com/watch?v=oDvzbBRiNlA
Why do things exist? Setting the stage for evolution.
This video kicks off the evolution series by going broad and thinking about why things - including non-living things - exist at all. The first in a series on evolution.