You write about a seminal moment in your childhood when your older brother suffered a serious brain injury. Can you describe what happened?

I was 4 and he was 6. My parents were yachting and I was down at the water's edge, but he, with some friends, clambered onto the roof of the clubhouse. Then he tripped and fell three stories onto the pavement below and fractured his skull. He lost consciousness on impact and sustained an intracerebral hemorrhage. We were living in a small village, so he had to be flown to a hospital in Cape Town, and he was very lucky to survive the accident. What was so disturbing and really difficult to comprehend for me was the fact that he looked the same, but was utterly changed. He lost his developmental milestones. For example, he became incontinent, and his personality was very changed. He was much more emotional, irascible and difficult, but he was also changed intellectually.

You say this had a profound impact on you.

It did. We underestimate little children. You start thinking, How can it be that the brain is this thing in his head that's been damaged, and now he looks the same but isn't the same? Where is he? How can this person, my brother, be an organ? I quickly extrapolated that to my own case and thought, "Hmmm, am I my brain, and how can that be? If my brain were to be damaged, would I be a different person? Where would the original version of me go?" And it was a tragedy for my parents. They felt terribly guilty.
The major point of contention is whether consciousness can be reduced to the laws of physics or biology. The philosopher David Chalmers has speculated that consciousness is a fundamental property of nature that's not reducible to any laws of nature.

I accept that, except for the word "fundamental." I argue that consciousness is a property of nature, but it's not a fundamental property. It's quite easy to argue that there was a big bang very long ago and, long after that, there was an emergence of life. If Chalmers' view is that consciousness is a fundamental property of the universe, it must have preceded even the emergence of life. I know there are people who believe that. But as a scientist, when you look at the weight of the evidence, it's just so much less plausible that there was already some sort of elementary form of consciousness even at the moment of the Big Bang. That's basically the same as the idea of God. It's not really grappling with the problem.
Where are those feelings rooted in the brain?

Feeling arises in a very ancient part of the brain, in the upper brainstem, in structures we share with all vertebrates. This part of the brain is over 500 million years old. The very telling fact is that damage to those structures—tiny lesions as small as the size of a match head in parts of the reticular activating system—obliterates all consciousness. That fact alone demonstrates that more complex cognitive consciousness is dependent upon the basic affective form of consciousness that's generated in the upper brainstem.

So we place too much emphasis on the cortex, which we celebrate because it's what makes humans smart.

Exactly. Our evolutionary pride and joy is the huge cortical expanse that only mammals have, and we humans have even more of it. That was the biggest mistake we've made in the history of the neuroscience of consciousness. The evidence for the cortex being the seat of consciousness is really weak. If you de-corticate a neonatal mammal—say, a rat or a mouse—it doesn't lose consciousness. Not only does it wake up in the morning and go to sleep at night, it runs and hangs from bars, swims, eats, copulates, plays, and raises its pups to maturity. All of this emotional behavior remains without any cortex.

And the same applies to human beings. Children born with no cortex, a condition called hydranencephaly—not to be confused with hydrocephaly—are exactly the same as what I've just described in these experimental animals. They wake up in the morning, go to sleep at night, smile when they're happy and fuss when they're frustrated. Of course, you can't speak to them, because they've got no cortex. They can't tell you that they're conscious, but they show consciousness and feeling in just the same way as our pets do.
You say we really have two brains—the brainstem and the cortex.

Yes, but the cortex is incapable of generating consciousness by itself. The cortex borrows, as it were, its consciousness from the brainstem. Moreover, consciousness is not intrinsic to what the cortex does. The cortex can perform high-level, uniquely human cognitive operations, such as reading with comprehension, without consciousness being necessary at all. So why does it ever become conscious? The answer is that we have to feel our way into cognition, because this is where the values come from. Is this going well or badly? All choices and decisions have to be grounded in a value system where one thing is better than another.
The only point of learning from past events is to better predict future events. That’s the whole point of memory. It’s not just a library where we file away everything that’s happened to us. And the reason why we need to keep a record of what’s happened in the past is so that we can use it as a basis for predicting the future. And yes, the hippocampus is every bit as much for imagining the future as remembering the past. You might say it’s remembering the future.
Nanobots that patrol our bodies, killer immune cells hunting and destroying cancer cells, biological scissors that cut out defective genes: these are just some of the technologies that Cambridge University researchers are developing and which are set to revolutionise medicine in the future. To tie in with the recent launch of the Cambridge Academy of Therapeutic Sciences, researchers discuss some of the most exciting developments in medical research and set out their vision for the next 50 years.
As technology continues to rapidly advance, the future of AI looks promising, but it doesn’t come without risks. How we choose to govern artificial intelligence could play an integral role in protecting the human race. Experts believe the real risk of AI is its usage to threaten the legitimacy of political, financial, and social institutions. In the wrong hands, AI could be used to leverage one’s position and gain unchecked access to information, wealth, and power. Determining how to effectively create and apply regulations for AI governance is paramount to ensuring that the technology is appropriately leveraged to benefit society.
Philosophers have been making the controversial claim that free will is an illusion for hundreds of years, but is there proof? Are their conclusions well founded?

The idea that humans might not have complete autonomy over their lives raises the question of how much control we actually have. If free will is an illusion, and our control is actually limited, then things like criminal law and social status may be drawn into question. To advance our collective understanding of free will, Dr. Uri Maoz is leading a collaborative research project that's bringing together neuroscientists and philosophers from around the world. Here's his take on the age-old debate.
Rocket Science Explained by Elon Musk, who founded Space Exploration Technologies, or SpaceX. His goal was to make rockets for space travel more affordable, with the ultimate goal of creating a colony on Mars. "What Elon did was very different. He didn't just throw some play money in. He put in his heart, his soul and his mind."

Chapters:
0:00 Intro
0:15 Orbital dynamics in rocket science
4:25 Rocket stage separation
4:43 Why rocket stages need to land in the ocean on a drone ship
6:43 Rocket control in vacuum (nitrogen jets)
7:30 Rocket control in air (grid fins)
9:01 Why reusability of rockets is important
Visionary biochemist Jennifer Doudna shared the Nobel Prize last year for the gene-editing technology known as CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), which has the potential to cure diseases caused by genetic mutations. Correspondent David Pogue talks with Doudna about the promises and perils of CRISPR; and with Walter Isaacson, author of the new book "The Code Breaker," about why the biotech revolution will dwarf the digital revolution in importance.
https://twitter.com/svpino/status/1372286114669551616?s=19

"Tools and frameworks continue to evolve. What makes you money today will be automated in some form. You must keep moving."
To be fair, for the sake of the argument, and for brainstorming, it is logically possible to set the preservation of smaller systems as the terminal goal, such as specific organs, tissues, cells, or genes. Hence, someone's death doesn't necessarily mean their failure, nor the end of their terminal goal, as long as the body parts whose preservation is set as the terminal goal still exist.
It would be hard for him to convince other people to commit to coordinated action, even when they share his belief that there is no such thing as a terminal goal. Their commitment to common goals is limited to their shared temporary desires, which makes them less effective than coordinated actions carried out with all-out commitment.
The Orthogonality Thesis states that an artificial intelligence can have any combination of intelligence level and goal; that is, its utility function and its general intelligence can vary independently of each other. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal. The thesis was originally defined by Nick Bostrom in the paper "The Superintelligent Will" (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence as instrumental rationality.
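The independence claim can be made concrete with a toy sketch. In the hypothetical planner below, an agent's "intelligence" is modelled as its search depth and its goal as an arbitrary utility function over states; the two are separate parameters that can be combined freely. All names here (`plan`, `maximize`, `minimize`, `moves`) are illustrative inventions, not drawn from any real AI system.

```python
# Toy illustration of the orthogonality thesis: planning competence
# ("intelligence", here just search depth) and the goal (a utility
# function over states) are independent parameters of the agent.

def plan(state, depth, utility, actions):
    """Depth-limited exhaustive search: return the action sequence whose
    final state scores highest under `utility`, plus that score."""
    if depth == 0:
        return [], utility(state)
    best_seq, best_val = [], float("-inf")
    for act in actions:
        seq, val = plan(act(state), depth - 1, utility, actions)
        if val > best_val:
            best_seq, best_val = [act] + seq, val
    return best_seq, best_val

# Two opposite goals over the same state space (an integer position).
maximize = lambda s: s      # "push the number up"
minimize = lambda s: -s     # "push the number down"
moves = [lambda s: s + 1, lambda s: s - 1]

# Same search depth ("intelligence"), opposite goals -> opposite behaviour,
# and each agent is equally effective at its own goal.
_, up_score = plan(0, 3, maximize, moves)
_, down_score = plan(0, 3, minimize, moves)
print(up_score, down_score)  # prints: 3 3
```

Both agents search equally hard but pursue opposite ends, and increasing `depth` makes either agent better at whatever utility it happens to hold, which is exactly the independence the thesis asserts.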
Defense of the thesis

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs. Stuart Armstrong writes that one reason many researchers assume superintelligences will converge to the same goals may be that most humans have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI will acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as AIXI and Gödel machines, the thesis is known to be true. Furthermore, if the thesis were false, then Oracle AIs would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.

Pathological Cases

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. These goals inherently limit the degree of intelligence of the AI.
From this imperfect analogy, one can start to see some holes in the orthogonality thesis, or OT (it's a mouthful). Just as the rules of a sport like basketball determine how an athlete trains to achieve peak performance, the moral framework (or lack thereof) to which an AGI (artificial general intelligence) applies its intelligence will determine how that AGI develops its abilities to best meet its goals. However, as Peter Voss mentions, there is a "large range of common (sub-) goals required by AGI." These sub-goals mirror aspects of athleticism useful across many sports' rule frameworks — vertical jump, for example, is pivotal in all three of Wilt's ventures. From this analogy it seems that intelligence does not fit the orthogonality thesis in two ways:

1. An AGI will develop intelligence according to the long-term rules (whether explicit or implicit) of its morality. Humans will initially create the intelligence, but at a tipping point the AGI will take control of its own development.
2. An AGI will develop certain capabilities independent of those morality rules. Developing these capabilities and not others precludes certain combinations of an AGI's intelligence and goals.

To see how, let's take a look at AGI value alignment. Athletic abilities like Wilt's changed the rules of the game, and developments within AGI will likely necessitate changes to AI value alignment. As Peter Voss mentions, "orthogonality is undermined by the fact that AGIs will inherently help to narrow down worthwhile goals." To ultimately improve the survivability of the game of basketball, the league commissioner changed rules based upon certain individuals' athleticism. As AI grows more intelligent, we — as programmers of AI, or morality commissioners — will need to improve the value alignment of that AI.