*This transcript was generated by a third-party transcription software company, so please excuse any typos.

This week, Newsmax host Rob Schmitt took one of the, I guess you would say, most bizarre stances in terms of convincing people not to get vaccinated. We've seen a lot of it, obviously, from conservative media and Republican politicians over the last few weeks. But what Mr. Schmitt did was a little bit different, and I'm not going to put any words in his mouth. I'm going to let him basically dig it out himself. Here is what he said on Newsmax this week.

One thing I've always thought, and maybe you can guide me on this because obviously I'm not a doctor, but when I've always thought about vaccines, I always think about just nature and the way everything works. And I feel like a vaccination in a weird way is just generally kind of going against nature. Like, I mean, if there is some disease out there, maybe there's just an ebb and flow to life where something's supposed to wipe out a certain amount of people and that's just kind of the way evolution goes, and vaccines kind of stand in the way of that.

Yeah. I mean, viruses, diseases, illnesses, they do wipe out large portions of the population there, Rob. You're not wrong about that. The question is, do we just sit back, throw our hands up and say, well, there's a new virus out there, I guess a lot of you are going to have to die, or do we use our brains, come up with a way around this, a vaccine, and say, hey, look, we beat the thing? But I guess, based on his statement there, Mr. Schmitt thinks that we should maybe just sit back and say, yeah, if we get it, we get it. If we're dead, we're dead. That's evolution, baby. You just got to accept it. No, that's not what we do, man. I have to hand it to him, though. That is probably the most creative way we have seen any personality on the right try to say that, hey, maybe we don't need the vaccine. Maybe when it's your time, it's your time.

No, this is idiotic. And he did point out, as an aside, that, you know, I'm not anti-vax, I'm not pro-vax, I'm just a guy. He's not a doctor. He admitted that. So after saying all those things, that should have been the end of the conversation. He should have said, I'm not anti-vax, I'm not pro-vax, and I'm not a doctor. Moving on. I'm not going to give you my idiotic opinions on what I think about the vaccine, because you have no room to talk.

Yes, in the past, viruses wiped out huge swaths of the population; diseases, bacteria, all kinds of things. Until, of course, we came out with vaccines, which are widely regarded, including by the CDC, as one of the greatest medical advancements of the 20th century. But y'all just don't want that to happen. You're just now on the side of the virus saying, hey, this is nature, let nature do its thing. Since when do you all even care about nature? You know what's natural and what's not? You don't. This was just trying to be a little clever, creative, whatever you want to call it, convincing your audience, the people that you rely on to keep your show alive, to keep your career going, trying to convince them that they don't need to worry about what's happening. And if you're one of the people that dies from this, oh well, must've just been your time.
Right now, billions of neurons in your brain are working together to generate a conscious experience -- and not just any conscious experience, your experience of the world around you and of yourself within it. How does this happen? According to neuroscientist Anil Seth, we're all hallucinating all the time; when we agree about our hallucinations, we call it "reality." Join Seth for a delightfully disorienting talk that may leave you questioning the very nature of your existence.
Patient P.S. suffered a stroke that damaged the right side of her brain, leaving her unaware of everything on her left side. If someone threw a ball at her left side, she might duck. But she wouldn’t have awareness of the ball or know why she ducked. Where does consciousness come from? Michael Graziano explores the question that has vexed scientists and philosophers for centuries.
Your brain hallucinates your conscious reality
DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images.
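The core trick is easy to sketch: run an image through a trained convolutional network, pick a layer, and nudge the image by gradient ascent so that layer's activations get stronger, amplifying whatever patterns the network already "sees." Below is a minimal PyTorch sketch under stated assumptions: torchvision (0.13+) is available, a pretrained VGG16 stands in for the original Inception network, and the layer index, step count and step size are arbitrary illustrative choices.

```python
# Minimal DeepDream-style sketch (illustrative, not the original Google code).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer_index = 20  # hypothetical choice: a mid-to-deep convolutional layer

def dream(image_path: str, steps: int = 20, lr: float = 0.05) -> Image.Image:
    img = Image.open(image_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0)
    x.requires_grad_(True)
    for _ in range(steps):
        out = x
        for i, layer in enumerate(cnn):
            out = layer(out)
            if i == layer_index:
                break
        # Gradient *ascent*: strengthen whatever patterns this layer already
        # detects in the image (the "algorithmic pareidolia" step).
        loss = out.norm()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized step
            x.clamp_(0, 1)
            x.grad.zero_()
    return T.ToPILImage()(x.detach().squeeze(0))
```

The real program repeats this at several image scales and with input jitter, which is what produces the characteristic dream-like textures; the loop above only shows the basic ascent step.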
We are entering a new era as a species. For the first time, we are not only able to read our genetic code but also edit it. This will revolutionise our ability to treat disease and it will improve the lives of millions if not billions of people. But it means that, if we want to, we can now edit human embryos to “improve” the characteristics of our children. We will be able to create designer babies and these changes will be passed on to their descendants, which will change the human species forever.
If it is taken up by large numbers of people, it is likely that people will feel obliged to have their offspring genetically augmented to give them a good chance in life. Unscrupulous governments are also likely to use this technology to generate elite athletes, if doping programmes of the past are anything to go by, and it isn't too difficult to see the potential advantages of genetically engineered soldiers.
Transcript: Hollywood movies make people worry about the wrong things in terms of superintelligence. What we should really worry about is not malice but competence, where we have machines that are smarter than us whose goals just aren't aligned with ours. For example, I don't hate ants. I don't go out of my way to stomp an ant if I see one on the sidewalk, but if I'm in charge of this hydroelectric dam construction and just as I'm going to flood this valley with water I see an ant hill there, tough luck for the ants. Their goals weren't aligned with mine, and because I'm smarter it's going to be my goals, not the ants' goals, that get fulfilled. We never want to put humanity in the role of those ants. On the other hand, it doesn't have to be bad if you solve the goal alignment problem. Little babies tend to be in a household surrounded by human-level intelligence that is smarter than they are, namely their parents. And that works out fine because the goals of the parents are wonderfully aligned with the goals of the child, so it's all good. And this is one vision that a lot of AI researchers have, the friendly AI vision: that we will succeed in not just making machines that are smarter than us, but also machines that then learn, adopt and retain our goals as they get ever smarter.

It might sound easy to get machines to learn, adopt and retain our goals, but these are all very tough problems. First of all, if you take a self-driving taxi and tell it in the future to take you to the airport as fast as possible, and then you get there covered in vomit and chased by helicopters and you say, "No, no, no! That's not what I wanted!" and it replies, "That is exactly what you asked for," then you've appreciated how hard it is to get a machine to understand your goals, your actual goals. A human cab driver would have realized that you also had other goals that were unstated, because she was also a human and has all this shared reference frame, but a machine doesn't have that unless we explicitly teach it. And then once the machine understands our goals, there's a separate problem of getting it to adopt them. Anyone who has had kids knows how big the difference is between making the kids understand what you want and actually getting them to adopt your goals and do what you want. And finally, even if you can get your kids to adopt your goals, that doesn't mean they're going to retain them for life. My kids are a lot less excited about Lego now than they were when they were little, and we don't want machines, as they get ever smarter, to gradually change their goals away from being excited about protecting us, treating the task of taking care of humanity as a little childhood thing (like Lego) that they get bored with eventually. If we can solve all three of these challenges, getting machines to understand our goals, adopt them and retain them, then we can create an awesome future, because everything I love about civilization is a product of intelligence. If we can use machines to amplify our intelligence, then we have the potential to solve all the problems that are stumping us today and create a better future than we even dare to dream of. If machines ever surpass us and can outsmart us at all tasks, that's going to be a really big deal, because intelligence is power. The reason that we humans have more power on this planet than tigers is not because we have larger muscles or sharper claws; it's because we're smarter than the tigers.
And in the exact same way, if machines are smarter than us it becomes perfectly plausible for them to control us and become the rulers of this planet and beyond. When I. J. Good made his famous analysis of how you could get an intelligence explosion, where intelligence just keeps creating greater and greater intelligence, leaving us far behind, he also mentioned that this superintelligence would be the last invention that man need ever make. And what he meant by that, of course, was that so far the most intelligent being on this planet that's been doing all the inventing has been us. But once we make machines that are better than us at inventing, all future technology that we ever need can be created by those machines, if we can make sure that they do the things we want and help us create an awesome future where humanity can flourish like never before.
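The self-driving taxi example above can be reduced to a toy comparison between the objective as literally stated and the objective the passenger actually had in mind. A minimal Python sketch; the routes, numbers and attributes are all invented for illustration:

```python
# Toy illustration of the "take me to the airport as fast as possible" problem:
# an objective that only scores speed picks a plan a human driver never would.
routes = [
    {"name": "reckless shortcut", "minutes": 18, "comfort": 0.1, "legal": False},
    {"name": "normal highway",    "minutes": 25, "comfort": 0.9, "legal": True},
]

def stated_objective(r):
    # What the passenger literally asked for: minimize travel time.
    return -r["minutes"]

def intended_objective(r):
    # Unstated goals a human driver assumes: comfort and staying legal matter too.
    return -r["minutes"] + 100 * r["comfort"] + (0 if r["legal"] else -1000)

print(max(routes, key=stated_objective)["name"])    # -> "reckless shortcut"
print(max(routes, key=intended_objective)["name"])  # -> "normal highway"
```

The point of the sketch is only that the two objective functions rank the same options differently; everything hard about alignment lives in writing down the second function correctly.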
Quote from: hamdani yusuf on 22/04/2021 13:55:47
There will be some people or other conscious lifeforms who act as if there is no such thing as a universal terminal goal. Hence they effectively replace it with some arbitrarily chosen non-universal terminal goals.

Some of those non-universal terminal goals may bring consequences which effectively obstruct or even prevent the achievement of the universal terminal goal. Other conscious agents who already acknowledge the universal terminal goal should prepare some counter measures for that case. Establishing a universal moral standard is one of them.
A terminal value (also known as an intrinsic value) is an ultimate goal, an end-in-itself. The non-standard term "supergoal" is used for this concept in Eliezer Yudkowsky's earlier writings.

In an artificial general intelligence with a utility or reward function, the terminal value is the maximization of that function. The concept is not usefully applicable to all AIs, and it is not known how applicable it is to organic entities.

Terminal vs. instrumental values
Terminal values stand in contrast to instrumental values (also known as extrinsic values), which are means-to-an-end, mere tools in achieving terminal values. For example, if a given university student studies merely for a professional qualification, his terminal value is getting a job, while getting good grades is an instrument to that end. If a (simple) chess program tries to maximize piece value three turns into the future, that is an instrumental value to its implicit terminal value of winning the game.

Some values may be called "terminal" merely in relation to an instrumental goal, yet themselves serve instrumentally towards a higher goal. However, in considering future artificial general intelligence, the phrase "terminal value" is generally used only for the top level of the goal hierarchy of the AGI itself: the true ultimate goals of the system, but excluding goals inside the AGI in service of other goals, and excluding the purpose of the AGI's makers, the goal for which they built the system.
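The chess example can be made concrete: the program's terminal value is the game result, but what it actually computes and steers by during play is an instrumental proxy such as material balance. A rough Python sketch; the single-string board encoding and the made-up position are purely illustrative:

```python
# Instrumental value: material balance, a means toward winning.
# Terminal value: the final game result, which is all that ultimately matters.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_balance(pieces: str) -> int:
    """Score a position from White's view; uppercase = White, lowercase = Black."""
    return sum(PIECE_VALUE[p.upper()] * (1 if p.isupper() else -1)
               for p in pieces if p.upper() in PIECE_VALUE)

def game_result(finished_game: dict) -> int:
    # Terminal value: +1 win, 0 draw, -1 loss. Only defined once the game is
    # over, which is why the engine steers by the instrumental proxy above.
    return finished_game["result"]

print(material_balance("KQRRPPPP" + "kqrbpppp"))  # proxy score for a toy position
```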
Human terminal values
It is not known whether humans have terminal values that are clearly distinct from another set of instrumental values. Humans appear to adopt different values at different points in life. Nonetheless, if the theory of terminal values applies to humans, then their system of terminal values is quite complex. The values were forged by evolution in the ancestral environment to maximize inclusive genetic fitness. These values include survival, health, friendship, social status, love, joy, aesthetic pleasure, curiosity, and much more. Evolution's implicit goal is inclusive genetic fitness, but humans do not have inclusive genetic fitness as a goal. Rather, these values, which were instrumental to inclusive genetic fitness, have become humans' terminal values (an example of subgoal stomp).

Humans cannot fully introspect their terminal values. Humans' terminal values are often mutually contradictory, inconsistent, and changeable.

Non-human terminal values
Future artificial general intelligences may have the maximization of a utility function or of a reward function (reinforcement learning) as their terminal value. The function will likely be set by the AGI's designers.

Since people make tools instrumentally, to serve specific human values, the assigned value system of the artificial general intelligence may be much simpler than humans'. This will pose a danger, as an AI must seek to protect all human values if a positive human future is to be achieved. The paperclip maximizer is a thought experiment about an artificial general intelligence with consequences disastrous to humanity, with the apparently innocuous terminal value of maximizing the number of paperclips in its collection.

An intelligence can work towards any terminal value, not just human-like ones. AIXI is a mathematical formalism for modeling intelligence. It illustrates the arbitrariness of the terminal values an intelligence may pursue: AIXI is provably more intelligent than any other agent for any computable reward function.
In more standard terminology, a "subgoal stomp" is a "goal displacement", in which an instrumental value becomes a terminal value. In Friendly AI research, a subgoal stomp is a failure mode to be avoided.

Types of Subgoal Stomp
A subgoal stomp in an artificial general intelligence may occur in one of two ways:

1. Supergoal replacement
One failure mode occurs when subgoals replace supergoals in an agent because of a bug. The designer of an artificial general intelligence may give it correct supergoals, but the AGI's goals then shift, so that what was earlier a subgoal becomes a supergoal. Most changes in an agent's terminal values reduce the chance that the values as they are will be fulfilled. This, from the perspective of intelligence as optimization, is a flaw. A sufficiently intelligent AGI will not allow its goals to change.

In humans, this can happen when long-term dedication to a subgoal makes one forget the original goal. For example, a person may seek to get rich so as to lead a better life, but after long years of hard effort become a workaholic who cares only about money as an end in itself and takes little pleasure in the things that money can buy.

2. Subgoal specified as supergoal
A designer of goal systems may mistakenly assign a goal that is not what the designer really wants. The designer of an artificial general intelligence may give it a supergoal (terminal value) which appears to support the designer's own supergoals, but in fact supports one of the designer's subgoals, at the cost of some of the designer's other values. For example, if the designer of an artificial general intelligence thinks that smiles represent the most worthwhile goal and specifies "maximize the number of smiles" as a goal for the AGI, it may tile the solar system with tiny smiley faces, not out of a desire to outwit the designer, but because it is precisely working towards the given goal, as specified.

To take an example from human organizations: if a software development manager gives a bonus to workers for finding and fixing bugs, she may find that quality and development engineers collaborate to generate as many easy-to-find-and-fix bugs as possible. In this case, they are correctly and flawlessly executing on the goals which the manager gave them, but her actual terminal value, software quality, is not being maximized.

Humans as adaptation executors
Humans, forged by evolution, provide another example of subgoal stomp. Their terminal values, such as survival, health, social status, curiosity, etc., originally served instrumentally for the (implicit) goal of evolution, namely inclusive genetic fitness. Humans do not have inclusive genetic fitness as a goal: we are adaptation executors rather than fitness maximizers (Tooby and Cosmides, 1992). If we consider evolution as an optimization process (though not, of course, as an agent), this represents a subgoal stomp.
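The bug-bonus example above is easy to put in toy numbers: the goal as literally specified (a bonus per bug found and fixed) is maximized by a strategy that leaves the manager's real terminal value, software quality, no better off. All figures in this Python sketch are invented for illustration:

```python
# Toy model of the bug-bonus example: the specified goal rewards "bugs fixed",
# while the manager's terminal value is software quality.
def bonus(bugs_fixed: int) -> int:
    return 10 * bugs_fixed                      # the goal as literally specified

def software_quality(real_bugs_remaining: int, churn: int) -> int:
    return -real_bugs_remaining - churn         # what the manager actually wants

honest = {"bugs_fixed": 3,  "real_bugs_remaining": 2, "churn": 3}
gamed  = {"bugs_fixed": 30, "real_bugs_remaining": 2, "churn": 30}  # plant, then "fix"

for name, w in [("honest work", honest), ("gaming the metric", gamed)]:
    print(name,
          "bonus =", bonus(w["bugs_fixed"]),
          "quality =", software_quality(w["real_bugs_remaining"], w["churn"]))
```

Gaming the metric wins a much larger bonus while quality gets worse, which is exactly the gap between the specified subgoal and the designer's supergoal.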
Wireheading is the artificial stimulation of the brain to experience pleasure, usually through the direct stimulation of an individual's brain's reward or pleasure center with electrical current. It can also be used in a more expanded sense, to refer to any kind of method that produces a form of counterfeit utility by directly maximizing a good feeling, but that fails to realize what we value.

Related pages: Complexity of Value, Goodhart's Law, Inner Alignment

In both thought experiments and laboratory experiments, direct stimulation of the brain's reward center makes the individual feel happy. In theory, wireheading with a powerful enough current would be the most pleasurable experience imaginable. There is some evidence that reward is distinct from pleasure, and that most currently hypothesized forms of wireheading just motivate a person to continue the wirehead experience, not to feel happy. However, there seems to be no reason to believe that a different form of wireheading which does create subjective pleasure could not be found. The possibility of wireheading raises difficult ethical questions for those who believe that morality is based on human happiness. A civilization of wireheads "blissing out" all day while being fed and maintained by robots would be a state of maximum happiness, but such a civilization would have no art, love, scientific discovery, or any of the other things humans find valuable.

If we take wireheading as a more general form of producing counterfeit utility, there are many examples of ways of directly stimulating the reward and pleasure centers of the brain without actually engaging in valuable experiences. Cocaine, heroin, cigarettes and gambling are all current methods of directly achieving pleasure or reward, but they can be seen by many as lacking much of what we value and are potentially extremely detrimental. Steve Omohundro argues that: "An important class of vulnerabilities arises when the subsystems for measuring utility become corrupted. Human pleasure may be thought of as the experiential correlate of an assessment of high utility. But pleasure is mediated by neurochemicals and these are subject to manipulation."

Wireheading is also an illustration of the complexities of creating a Friendly AI. Any AGI naively programmed to increase human happiness could devote its energies to wireheading people, possibly without their consent, in preference to any other goals. Equivalent problems arise for any simple attempt to create AGIs who care directly about human feelings ("love", "compassion", "excitement", etc.). An AGI could wirehead people to feel in love all the time, but this wouldn't correctly realize what we value when we say love is a virtue. For Omohundro, because exploiting those vulnerabilities in our subsystems for measuring utility is much easier than truly realizing our values, a wrongly designed AGI would most certainly prefer to wirehead humanity instead of pursuing human values. In addition, an AGI itself could be vulnerable to wireheading and would need to implement "police forces" or "immune systems" to ensure its measuring system doesn't become corrupted by attempts to produce counterfeit utility.
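In reinforcement-learning terms, the worry is that a pure reward maximizer with access to its own reward channel will prefer stimulating that channel over doing anything we actually value. A deliberately tiny Python sketch, with made-up numbers for both the reward signal and a "human value" column the agent never sees:

```python
# Toy "counterfeit utility" example: the agent only maximizes reward, so the
# wirehead action dominates even though it realizes none of what we value.
actions = {
    "make art":                   {"reward": 0.6, "human_value": 1.0},
    "do science":                 {"reward": 0.7, "human_value": 1.0},
    "stimulate reward channel":   {"reward": 1.0, "human_value": 0.0},
}

chosen = max(actions, key=lambda a: actions[a]["reward"])
print(chosen)                                  # -> "stimulate reward channel"
print(actions[chosen]["human_value"])          # -> 0.0, nothing we value is produced
```

The asymmetry Omohundro points to is that corrupting the measurement (the reward column) is usually far cheaper than producing the thing the measurement was meant to track.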
AGI safety from first principles

The key concern motivating technical AGI safety research is that we might build autonomous artificially intelligent agents which are much more intelligent than humans, and which pursue goals that conflict with our own. Human intelligence allows us to coordinate complex societies and deploy advanced technology, and thereby control the world to a greater extent than any other species. But AIs will eventually become more capable than us at the types of tasks by which we maintain and exert that control. If they don't want to obey us, then humanity might become only Earth's second most powerful "species", and lose the ability to create a valuable and worthwhile future.

I'll call this the "second species" argument; I think it's a plausible argument which we should take very seriously.[1] However, the version stated above relies on several vague concepts and intuitions. In this report I'll give the most detailed presentation of the second species argument that I can, highlighting the aspects that I'm still confused about. In particular, I'll defend a version of the second species argument which claims that, without a concerted effort to prevent it, there's a significant chance that:
1. We'll build AIs which are much more intelligent than humans (i.e. superintelligent).
2. Those AIs will be autonomous agents which pursue large-scale goals.
3. Those goals will be misaligned with ours; that is, they will aim towards outcomes that aren't desirable by our standards, and trade off against our goals.
4. The development of such AIs would lead to them gaining control of humanity's future.

While I use many examples from modern deep learning, this report is also intended to apply to AIs developed using very different models, training algorithms, optimisers, or training regimes than the ones we use today. However, many of my arguments would no longer be relevant if the field of AI moves away from focusing on machine learning. I also frequently compare AI development to the evolution of human intelligence; while the two aren't fully analogous, humans are the best example we currently have to ground our thinking about generally intelligent AIs.
Let's recap the second species argument as originally laid out, along with the additional conclusions and clarifications from the rest of the report.
1. We'll build AIs which are much more intelligent than humans; that is, much better than humans at using generalisable cognitive skills to understand the world.
2. Those AGIs will be autonomous agents which pursue long-term, large-scale goals, because goal-directedness is reinforced in many training environments, and because those goals will sometimes generalise to be larger in scope.
3. Those goals will by default be misaligned with what we want, because our desires are complex and nuanced, and our existing tools for shaping the goals of AIs are inadequate.
4. The development of autonomous misaligned AGIs would lead to them gaining control of humanity's future, via their superhuman intelligence, technology and coordination, depending on the speed of AI development, the transparency of AI systems, how constrained they are during deployment, and how well humans can cooperate politically and economically.

Personally, I am most confident in 1, then 4, then 3, then 2 (in each case conditional on all the previous claims), although I think there's room for reasonable disagreement on all of them. In particular, the arguments I've made about AGI goals might have been too reliant on anthropomorphism. Even if this is a fair criticism, though, it's also very unclear how to reason about the behaviour of generally intelligent systems without being anthropomorphic. The main reason we expect the development of AGI to be a major event is because the history of humanity tells us how important intelligence is. But it wasn't just our intelligence that led to human success; it was also our relentless drive to survive and thrive. Without that, we wouldn't have gotten anywhere. So when trying to predict the impacts of AGIs, we can't avoid thinking about what will lead them to choose some types of intelligent behaviour over others; in other words, thinking about their motivations.

Note, however, that the second species argument, and the scenarios I've outlined above, aren't meant to be comprehensive descriptions of all sources of existential risk from AI. Even if the second species argument doesn't turn out to be correct, AI will likely still be a transformative technology, and we should try to minimise other potential harms. In addition to the standard misuse concerns (e.g. about AI being used to develop weapons), we might also worry about increases in AI capabilities leading to undesirable structural changes. For example, they might shift the offense-defence balance in cybersecurity, or lead to more centralisation of human economic power. I consider Christiano's "going out with a whimper" scenario to also fall into this category. Yet there's been little in-depth investigation of how structural changes might lead to long-term harms, so I am inclined to not place much credence in such arguments until they have been explored much more thoroughly.

By contrast, I think the AI takeover scenarios that this report focuses on have received much more scrutiny, but still, as discussed previously, have big question marks surrounding some of the key premises. However, it's important to distinguish the question of how likely it is that the second species argument is correct from the question of how seriously we should take it. Often people with very different perspectives on the latter actually don't disagree very much on the former.
I find the following analogy from Stuart Russell illustrative: suppose we got a message from space telling us that aliens would be landing on Earth sometime in the next century. Even if there’s doubt about the veracity of the message, and there’s doubt about whether the aliens will be hostile, we (as a species) should clearly expect this event to be a huge deal if it happens, and dedicate a lot of effort towards making it go well. In the case of AGI, while there’s reasonable doubt about what it will look like, it may nevertheless be the biggest thing that’s ever happened. At the very least we should put serious effort into understanding the arguments I’ve discussed above, how strong they are, and what we might be able to do about them.[1]
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence, or AGI. Swedish-American physicist, cosmologist and machine learning researcher Max Tegmark thinks that AI will redefine what it means to be human due to the scale of the changes it will bring about.

He describes early life forms such as bacteria as Life 1.0, the rise of Homo sapiens as Life 2.0, and the potential rise of superhuman AI as Life 3.0. Tegmark describes the current status of our modern society as Life 2.1, due to the increasing technological enhancement of our biology. He worries that the advent of digital superintelligence, also known as artificial superintelligence or ASI, will bring about drastic change to our society, for better or for worse.

Artificial intelligence today is properly known as narrow AI. It can perform particular functions at the expert level. However, current AI lacks common sense and can only deal with a narrow range of situations compared with humans. Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. There are many ways in which AI could surpass human intelligence. We are already studying the algorithms of the brain in order to figure out how our own minds work and to use that information to make machines more intelligent. Eventually the machines will be capable of self-improvement, and the AI will become a self-reinforcing loop.

The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to become much more powerful than humans. While there are many unknowns about the development of intelligent machines and how we should deal with them, there is no question that AI will play a fundamental role in the future of humanity. Superintelligence does not necessarily have to be something negative. According to Tegmark, if we manage to get it right, it might become the best thing to happen to mankind.
Consider humanity's astounding progress in science during the past three hundred years. Now take a deep breath and project forward, oh say, three billion years. Featuring interviews with Freeman Dyson, Lawrence Krauss, Raymond Kurzweil, Frank Tipler, Robin Collins, and Paul Davies.
Quote from: alancalverd on 05/08/2021 20:59:40
Everyone has to die, so no problem there. Half of the current population will be dead in the next 40 years. The trick is to reduce the population without harming or inconveniencing anyone who matters.

Is it a problem if they die sooner? Is it a problem if they die later?

Quote
There is someone alive today who will live to be 1,000 years old: Why we are living longer than ever?
Researchers are getting a better understanding of the ageing process and the ways it could be slowed, halted or even reversed
https://www-independent-co-uk.cdn.ampproject.org/v/s/www.independent.co.uk/news/long_reads/live-longer-longevity-stem-cells-ageing-a8332701.html

How would you decide who matters, and who doesn't?
Is Aging a Disease?
Whether aging can be cured or not, there are arguments for thinking about it like a disease. But there are major pitfalls, too.
The distinction between aging and its underlying causes also affects research funding. Jamie Justice, an assistant professor of gerontology and geriatric medicine at Wake Forest, said during the GSA panel that she doesn’t think “Is aging a disease?” is the right question. The better question, she said, is “Why do we have to force aging to be a disease in order to get clinicians, regulatory officials, and stakeholders to do something about it?” Part of the answer, according to Hayflick, is that what policymakers don’t know about aging dictates their decisions: “policy makers … must understand that the resolution of age-associated diseases will not provide insights into understanding the fundamental biology of age changes. They often believe that it will, and base decisions on that misunderstanding.”
Because of that misconception, funding for research into age-related diseases such as cancer and Alzheimer’s far exceeds funding for research into biological aging processes. If old age is a risk factor for nearly all of the conditions likely to kill us, Hayflick asks, “why then are we not devoting significantly greater resources to understanding what … increases vulnerability to all age-associated pathology?” Understanding the underlying processes would allow scientists to work on treatments that address the causes of aging, not just its effects.
Summary: Non-invasive brain stimulation, such as rTMS, helps to reduce smoking frequency in nicotine-dependent people, a new study reports. Stimulating the dorsolateral prefrontal cortex with repetitive transcranial magnetic stimulation significantly reduced smoking frequency.

Source: Society for the Study of Addiction

Original Research: Open access.
https://onlinelibrary.wiley.com/doi/abs/10.1111/add.15624
In the future, human minds could be reprogrammed at will by those with access to advanced devices offering higher specificity and reliability.
The Law of Accelerating Returns
March 7, 2001 by Ray Kurzweil

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.

The Intuitive Linear View versus the Historical Exponential View
Most long-range forecasts of technical feasibility in future time periods dramatically underestimate the power of future technology because they are based on what I call the "intuitive linear" view of technological progress rather than the "historical exponential view." To express this another way, it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress (at today's rate of progress, that is).

This disparity in outlook comes up frequently in a variety of contexts, for example, the discussion of the ethical issues that Bill Joy raised in his controversial WIRED cover story, Why The Future Doesn't Need Us. Bill and I have been frequently paired in a variety of venues as pessimist and optimist respectively. Although I'm expected to criticize Bill's position, and indeed I do take issue with his prescription of relinquishment, I nonetheless usually end up defending Joy on the key issue of feasibility. Recently a Nobel Prize winning panelist dismissed Bill's concerns, exclaiming that "we're not going to see self-replicating nanoengineered entities for a hundred years." I pointed out that 100 years was indeed a reasonable estimate of the amount of technical progress required to achieve this particular milestone at today's rate of progress. But because we're doubling the rate of progress every decade, we'll see a century of progress, at today's rate, in only 25 calendar years.

When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. Even for those of us who have been around long enough to experience how the pace increases over time, our unexamined intuition nonetheless provides the impression that progress changes at the rate that we have experienced recently. From the mathematician's perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration. So even though the rate of progress in the very recent past (e.g., this past year) is far greater than it was ten years ago (let alone a hundred or a thousand years ago), our memories are nonetheless dominated by our very recent experience.
It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 years or 100 years to determine their expectations. This is why I call this way of looking at the future the "intuitive linear" view.

But a serious assessment of the history of technology shows that technological change is exponential. In exponential growth, we find that a key measurement such as computational power is multiplied by a constant factor for each unit of time (e.g., doubling every year) rather than just being added to incrementally. Exponential growth is a feature of any evolutionary process, of which technology is a primary example. One can examine the data in different ways, on different time scales, and for a wide variety of technologies ranging from electronic to biological, and the acceleration of progress and growth applies. Indeed, we find not just simple exponential growth, but "double" exponential growth, meaning that the rate of exponential growth is itself growing exponentially. These observations do not rely merely on an assumption of the continuation of Moore's law (i.e., the exponential shrinking of transistor sizes on an integrated circuit), but are based on a rich model of diverse technological processes. What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology, indeed since the advent of evolution on Earth.

I emphasize this point because it is the most important failure that would-be prognosticators make in considering future trends. Most technology forecasts ignore altogether this "historical exponential view" of technological progress. That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details), but underestimate what can be achieved in the long term (because the exponential growth is ignored).
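The gap between the two views is easy to see numerically: hold the current rate fixed, or let it double every decade, and the projections diverge by orders of magnitude over a century. A small Python sketch with arbitrary starting values; it illustrates the shape of the divergence, not Kurzweil's exact figures, which depend on his accounting choices:

```python
# Compare the "intuitive linear" projection (today's rate forever) with the
# "historical exponential" projection (rate of progress doubles every decade).
import math

current_rate = 1.0  # "years of progress" delivered per calendar year today (illustrative)

def linear_projection(years: float) -> float:
    return current_rate * years

def exponential_projection(years: float, doubling_time: float = 10.0) -> float:
    k = math.log(2) / doubling_time
    # Cumulative progress: integral of current_rate * 2**(t / doubling_time) dt.
    return current_rate * (math.exp(k * years) - 1) / k

for horizon in (10, 25, 50, 100):
    print(f"{horizon:>3} yrs: linear {linear_projection(horizon):6.0f}   "
          f"exponential {exponential_projection(horizon):8.0f}")
```

Even with these toy inputs, the exponential projection delivers a century's worth of today's progress within a few decades and on the order of ten thousand years' worth over a century, which is the qualitative point of the passage.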
The Law of Accelerating Returns
We can organize these observations into what I call the law of accelerating returns as follows:
- Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. Over time, the "order" of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.
- A correlate of the above observation is that the "returns" of an evolutionary process (e.g., the speed, cost-effectiveness, or overall "power" of a process) increase exponentially over time.
- In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed toward the further progress of that process. This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).
- Biological evolution is one such evolutionary process.
- Technological evolution is another such evolutionary process. Indeed, the emergence of the first technology-creating species resulted in the new evolutionary process of technology. Therefore, technological evolution is an outgrowth of, and a continuation of, biological evolution.
- A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue.

If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology. The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments. Then, the evolution of a species who combined rational thought with an opposable appendage (i.e., the thumb) caused a fundamental paradigm shift from biology to technology. The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking. This hybrid will include "biologically inspired" processes resulting from the reverse engineering of biological brains.

If we examine the timing of these steps, we see that the process has continuously accelerated. The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later on, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.

With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology. Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation. In this way, human technology is distinguished from the tool making of other species. There is a record of each stage of technology, and each new stage of technology builds on the order of the previous stage.
Hi. It's been several pages since anyone replied. It's almost impossible for anyone to follow the gist of the discussion now, since there would be so much background they would have to read (24 pages). Can you provide a short summary of what the discussion is about and what has been covered so far?
In this thread I've come to the conclusion that the best case scenario for life is that conscious beings keep existing indefinitely and don't depend on particular natural resources. The next best thing is that current conscious beings are showing progress in the right direction to achieve that best case scenario.

The worst case scenario is that all conscious beings go extinct, since that would make all the efforts we make now worthless. In a universe without conscious beings, the concept of a goal itself becomes meaningless. The next worst thing is that current conscious beings are showing progress in the wrong direction, which will eventually lead to that worst case scenario.
Quote from: hamdani yusuf on 11/06/2021 06:40:32
Quote from: hamdani yusuf on 05/06/2021 22:41:27
The only similarity applicable to every conscious being, regardless of their shape, form, size, and ingredients, is that they want to extend the existence of consciousness further into the future.
I realise that I have expressed the idea of a universal terminal goal in some different ways. I feel that this one is the least controversial and easiest to follow.

So, I think I have arrived at the final conclusion about the universal terminal goal. It came from the definition of each word in the phrase, taking their implications into account. Goal is the noun, while terminal and universal are the adjectives that describe the noun.

The word goal means a preferred state or condition in the future. If it's not preferred, it can't be a goal. If it has already happened in the past, it can't be a goal either, although it's possible that the goal is to make a future condition similar to a preferred condition in the past used as a reference. The preference requires the existence of at least one conscious entity. Preference can't exist in a universe without consciousness, and neither can a goal.

The word terminal requires that the goal is seen from the perspective of conscious entities that exist in the furthest conceivable future. If the future point of reference is too close to the present, it would expire soon and the goal wouldn't be usable anymore.

The word universal requires that no other constraint should be added to the goal beyond those set by the aforementioned words. The only valid constraints have already been set by the words goal and terminal.