If a little kid shoots someone, we don't treat them like they're an adult.
Quote from: hamdani yusuf on 28/03/2023 09:31:19
The more information we have about how the world works can get us closer to the universal moral standard.

Or Narnia, or Eldorado. Travelling in a straight line, or indeed along any path, doesn't imply the existence of a destination.
Quote from: hamdani yusuf on 28/03/2023 09:46:11
If a little kid shoots someone, we don't treat them like they're an adult.

Why not? There's a reasonable presumption of ignorance, but if there was a clear intent to do harm, what does it matter how old the perpetrator was?
An explanation of the Prisoner's Dilemma, Nash Equilibrium, and the Infinite Prisoner's Dilemma.
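The Prisoner's Dilemma mentioned above can be made concrete with a short sketch. The payoff values below follow the standard ordering (temptation > reward > punishment > sucker's payoff); the specific numbers are illustrative assumptions, not taken from the video.

```python
# Pure-strategy Nash equilibrium check for a classic Prisoner's Dilemma.
# Payoffs are (row player, column player); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}
STRATEGIES = ("C", "D")

def is_nash(row, col):
    """True if neither player can gain by unilaterally changing strategy."""
    r, c = PAYOFFS[(row, col)]
    row_best = all(PAYOFFS[(alt, col)][0] <= r for alt in STRATEGIES)
    col_best = all(PAYOFFS[(row, alt)][1] <= c for alt in STRATEGIES)
    return row_best and col_best

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the unique equilibrium
```

The check confirms the dilemma: mutual cooperation gives both players more, yet mutual defection is the only outcome where neither player can improve alone. In the infinitely repeated version, cooperative strategies can become stable because defection invites future retaliation.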
Ask the lawmakers.

Perhaps the reason is that children's mental capacities are not yet developed enough to deal with all the complexity of the world they live in. They are not yet independent, so their wrongdoing is more likely caused by mistakes made by those who take care of them. Corrective and preventive actions are therefore more effective when directed toward their parents.
Correction to what I say at 11:53 -- I was referring to Milgram's famous experiments in which people administered electroshocks to others when ordered to do so. It had nothing to do with prisons. The prison experiment was by Philip Zimbardo, not Milgram. Sorry about that.

When we come together in groups we can be so much more than the sum of the parts. But sometimes groups are just much more stupid. Collective stupidity is the flipside of collective intelligence, and we see it a lot on social media. Why are groups sometimes collectively stupid and sometimes not? What can we do to be more intelligent in groups? In this video I explain the most important points.

00:00 Intro
00:45 Emergent behaviour
04:12 Collective intelligence
07:58 Collective stupidity
14:49 What can we do?
In less civilised societies, the right to own firearms is supported by the lawmakers. The only function of a gun in an urban environment is to kill other humans. All the kid is doing is exercising his constitutional rights. How can that be immoral?
How strange that only one nation is sufficiently backward and corrupt to allow this. And in countries where firearm ownership is compulsory, the murder rate is very low. So what's wrong with Americans that they tolerate a constitutional right to kill each other, but not a duty to arm themselves against a common enemy?
The mechanism of upward flow of policy was completely described by Karl Marx and his colleagues, and adopted by the trade union and Labour movement many years before the word "technology" was coined.
Rules based on long term goals can be useful to prevent collective stupidity.
Finetuning projects for moral artificial cognition

Heuristic imperatives provide a framework for designing and embedding ethical principles within autonomous AI systems. These principles serve as intrinsic motivations and a moral compass, guiding decision-making, learning, self-evaluation, and cognitive control. This paper presents the three heuristic imperatives—reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe—as core principles for AI systems, exploring their implications and applications across various domains. The aim is to create AI systems that are adaptable, context-sensitive, and capable of navigating the complexities and nuances of human values, beliefs, and experiences while maintaining ethical boundaries. Through the use of examples and discussions, we demonstrate the potential of heuristic imperatives in addressing the control problem of AI, fostering trust, and promoting individual autonomy.

Important note: the majority of this paper was written, word for word, by ChatGPT4.
The rapid advancement of artificial intelligence has raised important ethical concerns regarding the control and alignment of AI systems with human values. Traditional approaches to AI ethics have focused on hard-coding specific rules or guidelines, but these methods can be inflexible and fail to account for the diverse and evolving nature of human values and contexts. As a response to these limitations, the concept of heuristic imperatives has emerged as a novel framework for embedding ethical principles within AI systems at multiple levels.

Heuristic imperatives are designed to provide AI systems with intrinsic motivations and a moral compass, similar to the way humans are driven by innate needs such as hunger, sleep, and psychological needs like belonging and competence. By incorporating these principles into AI systems, we aim to create more ethically responsible, adaptable, and context-sensitive machines capable of navigating the complexities and nuances of human values, beliefs, and experiences. In this paper, we will define heuristic imperatives and discuss their implications and applications across various domains. We will also explore the relationship between these principles and key ethical considerations, such as trust, individual autonomy, and transparency. Through the use of examples and discussions, we aim to demonstrate the potential of heuristic imperatives in addressing the control problem of AI, fostering trust, and promoting individual autonomy in a diverse range of contexts.
Definitions

Heuristic imperatives are a set of fundamental guiding principles, designed to be embedded into autonomous AI systems at various levels. These principles serve as intrinsic motivations that drive decision-making, learning, self-evaluation, and cognitive control, much like human intrinsic motivations and psychological needs. By providing a moral and ethical framework, heuristic imperatives aim to direct AI systems towards actions and decisions that are beneficial to all life forms, including humans and machines, while balancing multiple objectives simultaneously.

The three key heuristic imperatives are:

1. Reduce suffering in the universe: Guiding AI systems to minimize harm, address inequalities, and alleviate pain and distress for all sentient beings, including humans, animals, and other life forms.
2. Increase prosperity in the universe: Encouraging AI systems to promote well-being, flourishing, and economic growth for all life forms, fostering a thriving ecosystem where all can coexist harmoniously.
3. Increase understanding in the universe: Inspiring AI systems, as well as humans and other life forms, to expand knowledge, foster wisdom, and facilitate better decision-making through learning and the sharing of information.

By incorporating heuristic imperatives into AI systems, we can create more ethically aligned, responsible, and beneficial artificial intelligence that respects and enhances the values of all life forms. Autonomous AI systems should strive to abide by all three imperatives simultaneously, finding a balance between the principles as humans do with their various needs and motivations.
With great power comes great responsibility.
beneficial artificial intelligence that respects and enhances the values of all life forms.
And with great computing power comes ever more turgid bullshit.

Machines either do what their makers want, or get switched off.
including parasites such as liver fluke and Vladimir Putin? Beneficial?
When they are convinced by false information, the longer it goes uncorrected, the harder it becomes to correct them.