Why would you reach for a definition like that one for a thread on morals? My front yard motion detector is more conscious than I am by that definition since it never sleeps or lowers its awareness. Does it thus carry more moral responsibility than I do?
Quote from: hamdani yusuf on 02/10/2020 09:03:29
The role of moral rules with reward and punishment is then to modify the internal/subjective preferences of conscious systems so that they align with the goals of the larger systems they are part of (e.g. their family, tribe, company, nation). Primitive forms of this manipulation are done by inflicting pain and pleasure, which can be felt directly. The next forms are done by causing fear and giving hope, which only work for conscious systems capable of understanding cause and effect, so that they can predict/anticipate a future condition when given some information about the present.

Moral rules can be considered a subset, or a special case, of a reward function used to modify a conscious agent's response to various stimuli/inputs. Rewards and punishments are indirect ways of executing the backpropagation process in neural network training, which adjusts the weight of each neural connection. They are only needed when there is no known practical method to modify the behaviour of a conscious agent directly, such as rewiring brain circuitry. Some drugs may have limited use with temporary effect, but there could be unknown long-term side effects. The same goes for surgery on organs to modify hormone activation; these work indirectly. A direct brain connection may offer some help, but it needs extreme caution against unwanted consequences if the users are not aware of the universal terminal goal.

Traditional reward and punishment rely on the fact that most conscious agents in existence are products of biological natural selection and possess a desire to preserve their lives. Pain and pleasure signalling are methods to achieve that, as are fear and hope.

Most currently existing intelligent machines are not designed to treat their own existence as one of the highest priorities in their job. They are considered expendable. That's why we don't apply reward and punishment to modify their misaligned behaviours.
Direct readjustment of their memory, or of the weights of their artificial neural connections, is much more effective and efficient.
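The contrast drawn above — reward/punishment as an indirect, backpropagation-like nudge versus direct readjustment of weights — can be illustrated with a minimal sketch. This is a hypothetical toy (a single linear "neuron", made-up function names), not anyone's actual training code:

```python
# Toy contrast between two ways of correcting an agent's behaviour:
# (1) indirect: nudge a weight via an error signal, as gradient descent /
#     backpropagation does (the reward-and-punishment analogue);
# (2) direct: overwrite the weight so the behaviour is correct at once.

def indirect_update(weight, x, target, lr=0.1):
    """One gradient step on a single linear 'neuron' y = weight * x.
    The error signal plays the role of punishment: large when behaviour
    is far from the target, shrinking as behaviour improves."""
    y = weight * x
    error = y - target
    return weight - lr * error * x  # gradient of 0.5 * error**2 w.r.t. weight

def direct_update(weight, x, target):
    """Direct readjustment: solve for the weight that produces the
    desired behaviour immediately, no repeated feedback needed."""
    return target / x

w = 0.0
for _ in range(50):
    w = indirect_update(w, x=2.0, target=6.0)  # slowly converges toward 3.0

w_direct = direct_update(0.0, x=2.0, target=6.0)  # exactly 3.0 in one step
```

The indirect route needs many rounds of feedback to converge; the direct route gets there in one step — which is the post's point about why we simply rewrite machine weights rather than "punish" machines.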
Would you consider a brain in a vat to be a conscious being?
The dog's behavior is not entirely surprising either, especially if you have some future version of Neuralink implanted in its head, or you are a veterinarian.

Here is the dictionary definition of intelligence:
Quote
the ability to acquire and apply knowledge and skills.
Usually it represents problem-solving or information-processing capability, but it takes into account neither the ability to manipulate the environment nor self-awareness. AlphaGo is considered intelligent since it can solve the problem of playing go better than the human champion. AlphaZero is even more intelligent, since it can beat AlphaGo 100:0, even though neither has the ability to move a single go piece.

On the other hand, consciousness takes more factors into account. For example, if you got paralyzed so you can't move your arms and legs, you are considered less conscious than in your normal state, even though you can still think clearly.
the ability to acquire and apply knowledge and skills.
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] Tesler's Theorem quips that "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]
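The textbook definition quoted above — a device that perceives its environment and takes actions that maximize its chance of achieving its goals — can be sketched in a few lines. All names here are hypothetical, and the thermostat "environment" is a made-up toy:

```python
# Minimal sketch of the "intelligent agent" definition: given a percept,
# pick the action with the highest utility for the agent's goal.

def agent_step(percept, actions, utility):
    """Choose the action that maximizes utility for the current percept."""
    return max(actions, key=lambda a: utility(percept, a))

# Toy environment: a thermostat whose goal is to keep temperature near 20.
def thermostat_utility(temperature, action):
    effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs((temperature + effect) - 20.0)  # closer to 20 is better

print(agent_step(18.5, ["heat", "cool", "idle"], thermostat_utility))  # "heat"
print(agent_step(22.0, ["heat", "cool", "idle"], thermostat_utility))  # "cool"
```

Even this trivial loop fits the definition — which is part of why the definition alone says little about consciousness, the thread's actual topic.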
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[70]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[71] Some AI systems, such as nearest-neighbor, reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[72] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[73]
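The nearest-neighbor case mentioned above — a system with no explicit goal, whose "goal" is implicit in its training data — is easy to show concretely. A minimal 1-nearest-neighbour sketch (hypothetical data and names, for illustration only):

```python
# 1-nearest-neighbour classification: no utility function, no reward signal.
# The only "goal" is whatever is implicit in the labelled training examples.

def nearest_neighbor(train, query):
    """train: list of ((x, y), label) pairs.
    Classify query by the label of its closest training example."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda ex: dist2(ex[0], query))
    return label

# Toy training set: two labelled points in a 2-D feature space.
examples = [((0.0, 0.0), "cold"), ((10.0, 10.0), "hot")]

print(nearest_neighbor(examples, (1.0, 2.0)))  # "cold"
print(nearest_neighbor(examples, (9.0, 8.0)))  # "hot"
```

As the excerpt notes, such a system can still be benchmarked by framing its narrow classification task as its "goal" — here, matching the labels in `examples`.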
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet.

A view taken by some people trying to promulgate the AI effect is: as soon as AI successfully solves a problem, the problem is no longer a part of AI.

Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[4]

When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence.[5] Fred Reed writes: "A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[6]

Douglas Hofstadter expresses the AI effect concisely by quoting Larry Tesler's Theorem: "AI is whatever hasn't been done yet."[7]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as a human-assisted Turing machine.[8]

AI applications become mainstream

Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says.
"These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[9]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[10]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[11]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[12]
Saving a place for humanity at the top of the chain of being

Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[14] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought of as uniquely human is discovered in animals (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.[citation needed]

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[15]
An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence.[1]
Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, sometimes be partially or wholly compensated for by later growth.[citation needed]
especially when it's better than expected.
It's possible to have people with a high IQ who are ignorant of basic theories in math, physics, chemistry, or biology. You may find someone with a high IQ who became a member of ISIS or another cult.
But are they more or less conscious, or morally driven, than others?
Every man of faith knows with absolute certainty that the universe was created in 7 days by a man with a beard, and looked exactly as it does now. Scientists have no idea how it happened or what makes it work. Religion clearly has the most accurate and comprehensive model, even though it is unintelligent and useless.
You missed the point. The statement that the universe was created exactly as it is now is the most accurate and comprehensive model of the universe because it accounts for absolutely every detail of what we observe. But it is also crap.
How can a statement which is accurate and comprehensive be crap?
You must have heard the story of the balloonist who descended through fog and had no idea of his location. He asked a passer-by, "Where am I?"
"You are in a balloon, six feet above the ground."
"So you are an accountant?"
"How did you know?"
"I asked a simple question and the answer you gave me was absolutely correct and no bloody use."
Philosophy is often thought to be an “ivory tower” pursuit, unconcerned with the practical affairs of everyday life. Philosophers who want to promote the relevance of their field invariably point to one branch of philosophy that seems to have obvious implications for our action in the world: ethics, the study of right and wrong.But we do not see the masses beating down the doors of university philosophy departments seeking practical advice about important life decisions. Students typically take ethics classes to fulfill a requirement, not to answer burning questions. Few if any books about ethics by philosophers make the best-seller lists. Why have today’s academic ethicists failed so miserably to sell the merits of their research?
Until ethicists can agree about how to support ethical principles for navigating an ordinary life, it’s unlikely that they can answer questions about extraordinary emergency cases.
Recognizing that a life of conflict with others is not inevitable severely undercuts the assumption that the only viable ethical code is one that calls us to sacrifice our own interests for the sake of the alleged interests of others. As Ayn Rand argued, it is the popularity of the altruistic theory of morality (the theory which equates the subject of morality with choices about sacrifice) that we should hold responsible for the widespread view that morality has no relevance to everyday life:

Altruism declares that any action taken for the benefit of others is good, and any action taken for one’s own benefit is evil. Thus the beneficiary of an action is the only criterion of moral value . . . .

Observe what this beneficiary-criterion of morality does to a man’s life. The first thing he learns is that morality is his enemy; he has nothing to gain from it, he can only lose; self-inflicted loss, self-inflicted pain and the gray, debilitating pall of an incomprehensible duty is all that he can expect. . . . Apart from such times as he manages to perform some act of self-sacrifice, he possesses no moral significance: morality takes no cognizance of him and has nothing to say to him for guidance in the crucial issues of his life; it is only his own personal, private, “selfish” life and, as such, it is regarded either as evil or, at best, amoral.

The idea that ethics is a code of values one needs to guide one’s life as a whole informs Rand’s own view of moral virtue, which she develops at length in her essay “The Objectivist Ethics.” She was also not the first to see it this way. The whole of ancient Greek ethics, from Socrates through Aristotle to the Stoics, had a similar outlook, even as these figures differed in important ways about what a morally virtuous life actually consists in.

If today’s ethicists want to offer real guidance for living, they should revisit their assumption that ethics is only about resolving conflicts and that they are its referees.
Life is not a zero-sum game and ethics should not be about solving made-up puzzles that are part of such a game.
Altruism declares that any action taken for the benefit of others is good, and any action taken for one’s own benefit is evil. Thus the beneficiary of an action is the only criterion of moral value . . . .
Quote from: hamdani yusuf on 23/11/2020 11:14:45
Altruism declares that any action taken for the benefit of others is good, and any action taken for one’s own benefit is evil. Thus the beneficiary of an action is the only criterion of moral value . . . .

Wrong from the start. Altruism is an abstract noun that cannot "declare" anything. We describe good work with no apparent return to the doer as altruistic. It is not in the gift of philosophers to misuse the English language - or any other.
Let's make your substitution: "An altruist is a person who declares that any action taken for the benefit of others is good, and any action taken for one’s own benefit is evil."

Have you ever heard a sane individual say that? Laying down your life for another may be the ultimate act of altruism, but not eating when you are hungry, or not avoiding a charging elephant (because that would not benefit anyone else), is the inaction of an idiot. Everyday altruism includes turning up at first aid or water lifesaving classes. And what is the first lesson they teach you? Don't endanger yourself. One hand for the ship, one for yourself. Make sure your car is between the approaching traffic and the casualty. Switch off at the mains. Wear a mask. And on and on.....
Why have today’s academic ethicists failed so miserably to sell the merits of their research?