Whatever happened in the past becomes memory for present and future conscious beings. Whatever we are doing now is becoming the events of the past. If our actions have no effect whatsoever on future conscious beings, they are meaningless. That could happen if we go extinct and the conscious beings that exist in the future emerge and evolve independently of our lineage. Whatever the future conscious beings might be, they are extremely unlikely to appear suddenly out of nowhere in a single step. It is much more probable that they will emerge as products of an evolutionary process, shaped by natural selection over many generations. The process will then be continued by artificial selection. The variation in their characteristics will shift from being supplied mainly by random mutation to more directed, intentional changes.
Directed, intentional change means that before implementation, a change would first be simulated in a virtual environment. That environment could be someone's brain, any of many types of computers, or some experimental setup. Only changes which are expected to bring the intended consequences with minimal unwanted side effects will then be implemented; otherwise they will be discarded.
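The simulate-then-implement filter described above can be sketched in code. This is a minimal illustration, not anyone's actual method: the `simulate` function here is a hypothetical stand-in for a virtual environment, and the benefit and side-effect thresholds are invented parameters.

```python
import random

def simulate(change, trials=1000):
    """Crude stand-in for a virtual environment: estimate a proposed
    change's expected benefit and side-effect rate by repeated sampling.
    (Hypothetical model: each change carries a true benefit and a true
    side-effect probability that simulation estimates with noise.)"""
    benefit = sum(random.gauss(change["benefit"], 0.1)
                  for _ in range(trials)) / trials
    side_effects = sum(random.random() < change["side_effect_p"]
                       for _ in range(trials)) / trials
    return benefit, side_effects

def screen(changes, min_benefit=0.5, max_side_effects=0.05):
    """Implement only changes whose simulated outcome clears both
    thresholds; discard the rest, as the text describes."""
    accepted = []
    for change in changes:
        benefit, side_effects = simulate(change)
        if benefit >= min_benefit and side_effects <= max_side_effects:
            accepted.append(change)
    return accepted
```

For example, a change simulated to have high benefit and no side effects passes the screen, while one with low benefit and frequent side effects is discarded before implementation.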
We can't change the conditions of the past, so we shouldn't waste our time and other resources trying to do so. Present conditions will become the past in a moment. Hence we should direct our efforts and allocate resources toward improving our conditions in the future. The conscious beings that exist in the future could include the continuation of our egos, our direct descendants, or something else that we create. They are basically modified duplicates of ourselves, better suited to future conditions. So if our actions now don't align with the goal of improving the well-being of future conscious beings, those actions will be considered wasteful, and hence must be hindered.
Morality talks about the good and bad things from the perspective of conscious beings.
Quote from: hamdani yusuf on 21/04/2021 06:06:30
"Morality talks about the good and bad things from the perspective of conscious beings"

No, just from the perspective of Homo sapiens. Even if you could define "conscious", we have very little idea of what any other species thinks.
About which you know nothing - even the concept is meaningless.
The universe contains a lot of stuff that is not known for being "aware", which seems to be the essence of consciousness. Ergo I cannot ascribe a meaning to universal consciousness.
The ability to feel empathy requires a conscious agent to model a situation from the perspective of other agents with characteristics similar to its own. For some complex organisms it is an innate ability; most simple organisms don't have it. Most organisms also lack the ability to model their surroundings from the perspective of other conscious entities significantly different from themselves. Doing so takes computational resources, which might cost too much for survival in the wild, so it's understandable that the ability doesn't develop naturally.
The goal of moral rules with a predetermined reward and punishment system is to tip the balance of the reward function in conscious agents' algorithms, so that they still contribute positively to the system as a whole even when they act selfishly.
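In reinforcement-learning terms, "tipping the balance of the reward function" resembles reward shaping: a rule system adds penalties and bonuses to an agent's private payoff so that the selfishly optimal choice is also the socially beneficial one. The sketch below is purely illustrative; the action names and the penalty/bonus rates are hypothetical.

```python
def shaped_reward(private_gain, harm_to_others,
                  penalty_rate=2.0, bonus_rate=1.0):
    """A selfish agent maximizes only private_gain. Moral rules add a
    penalty proportional to harm done to others and a bonus for benefit
    conferred (negative harm), shifting the selfish optimum toward
    actions that help the system as a whole. Rates are hypothetical."""
    penalty = penalty_rate * max(harm_to_others, 0.0)
    bonus = bonus_rate * max(-harm_to_others, 0.0)
    return private_gain - penalty + bonus

# A selfish agent choosing among candidate actions by shaped reward:
actions = {
    "exploit":   {"gain": 5.0, "harm": 3.0},   # profitable but harmful
    "cooperate": {"gain": 3.0, "harm": -2.0},  # less gain, helps others
}
best = max(actions, key=lambda a: shaped_reward(actions[a]["gain"],
                                                actions[a]["harm"]))
```

Without shaping, "exploit" wins (5.0 > 3.0); with the penalty and bonus applied, "cooperate" becomes the selfishly optimal action, which is exactly the balance-tipping the paragraph describes.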
That doesn't sound very "moral". You seem to be suggesting this:
1. Reward people, if they do what "the system" wants.
2. Punish people, if they don't do what "the system" wants.
This leads to the obvious question : who, or what, is "the system"?
To which there seem to be only two answers, according to your theory:
1. The system is ruled by Darwinian natural selection, which is ruthless and has no morals.
2. The system is ruled by a human dictator, who is ruthless and has no morals.

Neither of these answers seems very nice. So perhaps your theory is wrong?
Perhaps you should stop asking meaningless questions.
Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity's future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

Topics discussed include:
- Max and Yuval's views and intuitions about consciousness
- How they ground and think about morality
- Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
- The function of myths and stories in human society
- How emerging science, technology, and global paradigms challenge the foundations of many of our stories
- Technological risks of the 21st century

You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/o...

Timestamps:
0:00 Intro
3:14 Grounding morality and the need for a science of consciousness
11:45 The effective altruism community and its main cause areas
13:05 Global health
14:44 Animal suffering and factory farming
17:38 Existential risk and the ethics of the long-term future
23:07 Nuclear war as a neglected global risk
24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence
28:37 On creating new stories for the challenges of the 21st century
32:33 The risks of big data and AI enabled human hacking and monitoring
47:40 What does it mean to be human and what should we want to want?
52:29 On positive global visions for the future
59:29 Goodbyes and appreciations
01:00:20 Outro and supporting the Future of Life Institute Podcast
Feelings and emotions also provide a feedback mechanism for neural networks. They become internal leading indicators, so the system knows whether it is heading in the right direction. Simple organisms with no internal feedback mechanism must rely on external feedback to evaluate their actions: surviving an event is their positive feedback, while death is their negative feedback. It's extremely hard to learn when your own death is your only negative feedback. In artificial neural networks, learning is done by adjusting the weights of neural connections through backpropagation. No learning is possible when the whole network is destroyed.
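The learning-by-weight-adjustment idea can be shown with a minimal sketch: a single linear neuron trained by gradient descent on a squared-error loss. The prediction error plays the role of the internal feedback signal described above; the data and learning rate are invented for illustration.

```python
def train(samples, lr=0.1, epochs=100):
    """Fit y = w*x + b by stochastic gradient descent. Each prediction
    error is a feedback signal used to nudge the weights; a destroyed
    network would have no weights left to adjust, so no learning."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = (w * x + b) - target   # internal feedback signal
            w -= lr * err * x            # backpropagate error to weight
            b -= lr * err                # ...and to the bias
    return w, b

# Learn y = 2x + 1 from a few consistent examples (illustrative data)
w, b = train([(0, 1), (1, 3), (2, 5)])
```

After training, `w` and `b` converge close to 2 and 1: the network improves incrementally from many small error signals, rather than from a single catastrophic one.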
In some societies, the majority of people get their feelings hurt more by cartoon drawings than by mass killings.
Quote from: hamdani yusuf on 27/04/2021 22:44:04
"In some societies, majority of people get their feelings hurt more by cartoon drawings rather than mass killings."

I doubt it. It is clear that various old perverts teach their flock to hate anyone who says that a drawing represents the Prophet, but that clearly isn't a genuine personal insult because nobody knows what he looked like anyway (assuming that he actually practised what he is said to have preached).