Life is about transpiration, respiration, combustion, synthesis, whatever.
There must be a defining chemical process.

If the organism is distinct from its environment, which we can assume to be passive and lifeless for the sake of simplicity, then the organism achieves homeostasis or function by extracting energy and material from its environment. So the environment must in the first instance be friendly and conducive to life, and the organism cannot therefore be independent of it.

All living organisms expel waste from their chemical processes, and the waste, by definition, is not friendly and conducive to life. So an organism in a finite environment will eventually exhaust the resources it needs to live, and fill the environment with toxins.

You can get somewhere towards Utopia in a closed biosphere. Not sure if they are still available for sale, but essentially they consisted of a globe containing water, an aquatic plant, air, and a shrimp. As long as the sun shines and the globe can lose heat to the environment (including radiating heat into space), the shrimp and the seaweed can in principle live forever. But they are still dependent on getting the right amount of sunshine and not overheating, so not actually independent of the environment.

Evolution is about adaptation to an environmental niche. On a geological or astronomical timescale, there are no stable niches, so no single Utopia.
In recent years, reinforcement learning has yielded impressive performance in complex game environments ranging from Atari, Go, and chess to Dota 2 and StarCraft II, with artificial agents rapidly surpassing the human level of play in increasingly complex domains. Games are an ideal platform for developing and testing machine learning algorithms. They present challenging tasks that require a range of cognitive abilities to accomplish, mirroring skills needed to solve problems in the real world. Machine learning researchers can run thousands of simulated experiments on the cloud in parallel, generating as much training data as needed for the system to learn.

Crucially, games often have a clear objective, and a score that approximates progress towards that objective. This score provides a useful reward signal for reinforcement learning agents, and allows us to get quick feedback on which algorithmic and architectural choices work best.

The agent alignment problem

Ultimately, the goal of AI progress is to benefit humans by enabling us to address increasingly complex challenges in the real world. But the real world does not come with built-in reward functions. This presents some challenges because performance on these tasks is not easily defined. We need a good way to provide feedback and enable artificial agents to reliably understand what we want, in order to help us achieve it. In other words, we want to train AI systems with human feedback in such a way that the system's behavior aligns with our intentions. For our purposes, we define the agent alignment problem as follows:

How can we create agents that behave in accordance with the user's intentions?

The alignment problem can be framed in the reinforcement learning framework, except that instead of receiving a numeric reward signal, the agent can interact with the user via an interaction protocol that allows the user to communicate their intention to the agent.
This protocol can take many forms: the user can provide demonstrations, preferences, optimal actions, or communicate a reward function, for example. A solution to the agent alignment problem is a policy that behaves in accordance with the user's intentions.

There are several challenges that will need to be addressed in order to scale reward modeling to such complex problems. Five of these challenges are listed below and described in more depth in the paper, along with approaches for addressing them.
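To make the preference-based variant of this protocol concrete, here is a minimal sketch (not from the paper) of learning a reward model from pairwise comparisons. It assumes a hypothetical linear reward over trajectory features and a Bradley-Terry-style preference model; a real system would use a neural network and human, rather than simulated, comparisons.

```python
import numpy as np

# Sketch: the user prefers trajectory A over trajectory B, and we fit
# a linear reward r(s) = w . s by logistic (Bradley-Terry) likelihood.
# "true_w" stands in for the user's hidden intention.

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

def features(n):
    # Hypothetical trajectory feature vectors.
    return rng.normal(size=(n, 3))

A, B = features(500), features(500)
# Simulated user: prefers whichever trajectory has higher true reward.
prefs = (A @ true_w > B @ true_w).astype(float)

w = np.zeros(3)
for _ in range(500):  # gradient ascent on the preference log-likelihood
    logits = (A - B) @ w
    p = 1.0 / (1.0 + np.exp(-logits))
    w += 0.1 * (A - B).T @ (prefs - p) / len(prefs)

# The learned reward should rank unseen trajectories like the true one.
test = features(200)
agree = np.mean((test @ w > 0) == (test @ true_w > 0))
print(f"ranking agreement: {agree:.2f}")
```

The point of the sketch is the loop structure: preferences never give the agent a numeric reward directly; they constrain a learned reward function, which can then supply the reward signal for ordinary reinforcement learning.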
Quote from: hamdani yusuf on 15/06/2020 07:28:36

To demonstrate that consciousness is a continuous parameter, we can use a thought experiment. Take a human subject whom we can all agree is a conscious being. Destroy one neuron out of the billions that exist in the brain, and then ask if he/she is still conscious. Repeat the experiment until we all agree that he/she is not conscious. The experiment will most likely give different results for different researchers, depending on their assumed threshold of consciousness level. It may also depend on the order of the neuron destruction.

We can find a similar situation in determining adulthood. At which point in your life do you change from a kid into an adult? Humans grow from a zygote into an embryo, fetus, baby, toddler, kid, adult, elderly. At which point does it turn from a non-conscious thing into a conscious being?

This realization brings us to the next question: what factors can contribute to the increase and decrease of consciousness? We can revisit the thought experiment and imagine the following situations:
- At some point, destroying one neuron doesn't cause any measurable effect.
- At some point, destroying one neuron makes the human subject lose some memory.
- At some other point, he/she may lose some ability for numerical processing, verbal processing, or spatial processing.
- Other abilities that may be lost at some point of the experiment are sensing (visual, audio, touch, taste, balance), motor functions (such as moving a finger, arm, or leg, blinking, breathing, heartbeat), and acquired skills (swimming, bicycling, driving, juggling, singing, dancing, writing, coding, playing chess).
- At some point the human subject may stop thinking, and eventually die at the end of the experiment.

I think we can safely argue that losing some of those abilities reduces the consciousness of the human subject.
On the other hand, restoring those abilities also restores consciousness, even if the method used to restore it doesn't make the brain structure exactly the same as before the experiment. If the experiment is continued to add some new ability which did not exist in the original human subject (e.g. seeing in the infrared spectrum, performing a one-arm push-up, translating Chinese, doing advanced algebra), we can say that his/her consciousness has increased.
Yann Le Cun, John S. Denker and Sara A. Solla
AT&T Bell Laboratories, Holmdel, N. J. 07733

ABSTRACT

We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.

1 INTRODUCTION

Most successful applications of neural network learning to real-world problems have been achieved using highly structured networks of rather large size [for example (Waibel, 1989; Le Cun et al., 1990a)]. As applications become more complex, the networks will presumably become even larger and more structured. Design tools and techniques for comparing different architectures and minimizing the network size will be needed. More importantly, as the number of parameters in the systems increases, overfitting problems may arise, with devastating effects on the generalization performance. We introduce a new technique called Optimal Brain Damage (OBD) for reducing the size of a learning network by selectively deleting weights. We show that OBD can be used both as an automatic network minimization procedure and as an interactive tool to suggest better architectures.

The basic idea of OBD is that it is possible to take a perfectly reasonable network, delete half (or more) of the weights and wind up with a network that works just as well, or better. It can be applied in situations where a complicated problem must be solved, and the system must make optimal use of a limited amount of training data.
It is known from theory (Denker et al., 1987; Baum and Haussler, 1989; Solla et al., 1990) and experience (Le Cun, 1989) that, for a fixed amount of training data, networks with too many weights do not generalize well. On the other hand, networks with too few weights will not have enough power to represent the data accurately. The best generalization is obtained by trading off the training error and the network complexity.
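OBD's second-derivative idea can be illustrated outside of neural networks. The sketch below (an illustration, not the paper's experiment) uses a linear least-squares model, where the Hessian of the loss is exact and its diagonal is easy to read off, to compute the OBD saliency of each weight and delete the least salient half:

```python
import numpy as np

# Saliency of weight k, per OBD's diagonal approximation:
#   s_k = H_kk * w_k**2 / 2
# i.e. the predicted increase in error from setting w_k to zero.
# For mean-squared error with a linear model, H = X^T X / n exactly.

rng = np.random.default_rng(1)
n, d = 1000, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 of 10 weights matter
y = X @ true_w + 0.1 * rng.normal(size=n)

w = np.linalg.lstsq(X, y, rcond=None)[0]   # "trained" weights
H_diag = np.diag(X.T @ X) / n              # diagonal of the Hessian
saliency = 0.5 * H_diag * w**2

def mse(weights):
    return np.mean((X @ weights - y) ** 2)

# Delete (zero out) the half of the weights with the lowest saliency.
pruned = w.copy()
pruned[np.argsort(saliency)[: d // 2]] = 0.0
print(f"error before: {mse(w):.4f}  after pruning: {mse(pruned):.4f}")
```

Because the deleted weights are exactly those whose removal the saliency predicts to be cheap, the training error barely moves after half the weights are gone, which is the effect the paper reports on real networks.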
A little blue-and-black fish swims up to a mirror. It maneuvers its body vertically to reflect its belly, along with a brown mark that researchers have placed on its throat. The fish then pivots and dives to strike its throat against the sandy bottom of its tank with a glancing blow. Then it returns to the mirror. Depending on which scientists you ask, this moment represents either a revolution or a red herring.

Alex Jordan, an evolutionary biologist at the Max Planck Institute for Ornithology in Germany, thinks this fish — a cleaner wrasse — has just passed a classic test of self-recognition. Scientists have long thought that being able to recognize oneself in a mirror reveals some sort of self-awareness, and perhaps an awareness of others’ perspectives, too. For almost 50 years, they have been using mirrors to test animals for that capacity. After letting an animal get familiar with a mirror, they put a mark someplace on the animal’s body that it can see only in its reflection. If the animal looks in the mirror and then touches or examines the mark on its body, it passes the test.

Humans don’t usually reach this milestone until we’re toddlers. Very few other species ever pass the test; those that do are mostly or entirely big-brained mammals such as chimpanzees. And yet as reported in a study that appeared on bioRxiv.org earlier this year and that is due for imminent publication in PLOS Biology, Jordan and his co-authors observed this seemingly self-aware behavior in a tiny fish.

Jordan’s findings have consequently inspired strong feelings in the field. “There are researchers who, it seems, do not want fish to be included in this secret club,” he said. “Because then that means that the [primates] are not so special anymore.”

If a fish passes the mirror test, Jordan said, “either you have to accept that the fish is self-aware, or you have to accept that maybe this test is not testing for that.” The correct explanation may be a little of both.
Some animals’ mental skills may be more impressive than we imagined, while the mirror test may say less than we thought. Moving forward in our understanding of animal minds might mean shattering old ideas about the mirror test and designing new experiments that take into account each species’ unique perspective on the world.
“Recognition of one’s own reflection would seem to require a rather advanced form of intellect,” Gallup wrote in 1970. “These data would seem to qualify as the first experimental demonstration of a self-concept in a subhuman form.”

Either a species shows self-awareness or it doesn’t, as Gallup describes it — and most don’t. “And that’s prompted a lot of people to spend a lot of time trying to devise ways to salvage the intellectual integrity of their favorite laboratory animals,” he told me.

But Reiss and other researchers think self-awareness is more likely to exist on a continuum. In a 2005 study, the Emory University primatologist Frans de Waal and his co-authors showed that capuchin monkeys make more eye contact with a mirror than they do with a strange monkey behind Plexiglas. This could be a kind of intermediate result between self-awareness and its lack: A capuchin doesn’t seem to understand the reflection is itself, but it also doesn’t treat the reflection as a stranger.

Scientists also have mixed feelings about the phrase “self-awareness,” for which they don’t agree on a definition. Reiss thinks the mirror test shows “one aspect of self-awareness,” as opposed to the whole cognitive package a human has. The biologists Marc Bekoff of the University of Colorado, Boulder, and Paul Sherman of Cornell University have suggested a spectrum of “self-cognizance” that ranges from brainless reflexes to a humanlike understanding of the self.
Do you think of yourself as having a brain or being a brain? Can you conceive of your mind, your personality, your self, as entirely and only the product of your physical brain? The mind seems non-physical, ethereal and spiritual. The intuitive sense that mind and brain are separate entities can be hard to shake. But what we know from science is that the mind comes from the brain and nothing but the brain. The mind is what the brain does. Any theory that does not begin with this assumption would necessarily imply that practically all the rest of modern science is fundamentally incorrect.

The physical basis of consciousness is a guiding principle behind a great many practical and effective treatments for mental illnesses. Daily, I witness the subtle or dramatic effects of varying degrees of disturbance of brain functioning on the ‘mind’ or ‘personality.’ I also witness the beneficial cognitive, emotional, and behavioral effects of physically based medical treatments. There is no aspect of the mind, the personality, the ‘self,’ or the ‘will’ that is not completely susceptible to chemical influences or physical diseases that disrupt neuronal circuitry.

If you have ever had someone close to you suffer from gradually progressive dementia, serious head injury, or a variety of other forms of brain damage or serious mental disorder, then you have witnessed the disruption or a kind of ‘disassembly’ of the mind—and of the person or personality you once knew. Such a change highlights how the mind is entirely a product of the physical brain and is dependent on intact neural circuitry.
There are gradations of conscious self-awareness in humans at different levels of early development, in people with different levels of impairment of brain function, and in animals at different levels of evolutionary complexity.

We are the sum of all our complex, dynamically interconnected brain networks. We are composed of a lifetime of remembered experiences, knowledge, learned behaviors and habits. We are all of that information, physically embodied in the total network’s connections, recursively reflecting on itself in a cybernetic loop. We are organized matter. Information is physical and humans are a dynamic network of information.
Like any other system, an agent can be broken down into three main parts: input, process, and output.

Conscious agents get information from their inputs to build a simplified model of their current surrounding environment. The model is then processed by the system's core using some algorithm/function involving current inputs, memorized previous inputs, some internal/built-in parameters, as well as current and memorized previous outputs.

An efficient system must use minimal resources to achieve its target. One way to do that is data compression. The agent's environment is continuously changing, hence the data from the input parts must also change accordingly. Memorized previous inputs would then accumulate over time. Without data compression, the memory would be depleted in no time.

Another way is to discard unnecessary/insignificant data. Data that have no impact on the result must be removed and overwritten in the memory.

Yet another way to become an efficient system is resource and load sharing. A multicellular organism is basically a collection of cells that work together for common goals, which are to survive and thrive. They develop specialized tissues, which means some cells develop some functions to be more effective at doing some task while abandoning other functions to save resources and be more efficient. Not every cell has to be photosensitive, and not every cell has to develop a hard shell to provide protection.

Quote

Multicellularity allows an organism to exceed the size limits normally imposed by diffusion: single cells with increased size have a decreased surface-to-volume ratio and have difficulty absorbing sufficient nutrients and transporting them throughout the cell. Multicellular organisms thus have the competitive advantages of an increase in size without its limitations. They can have longer lifespans as they can continue living when individual cells die.
Multicellularity also permits increasing complexity by allowing differentiation of cell types within one organism.

The necessity of data compression becomes more apparent the higher the consciousness level of the agent. It even becomes inevitable for Laplace's demon: without data compression, all matter in the universe would be used up as memory modelling the universe itself in its current state, leaving nothing for the input and output parts. Without input and output, an agent cannot execute its plan.
This combination resembles an artificial neural network.
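The input-process-output loop with compressed memory described above can be sketched in a few lines. This is a toy illustration of the idea (the class and its fields are my own invention, not from the post): the agent summarizes its entire input history into a fixed-size statistic plus a short window of recent observations, so its memory stays bounded no matter how long the environment keeps changing.

```python
from collections import deque

class Agent:
    """Toy agent: compresses its input history instead of storing it."""

    def __init__(self, window=5):
        self.mean = 0.0                      # compressed long-term memory
        self.count = 0
        self.recent = deque(maxlen=window)   # only significant recent data

    def step(self, observation):
        # input -> process -> output
        self.count += 1
        # Running average: one number summarizes the whole input history.
        self.mean += (observation - self.mean) / self.count
        self.recent.append(observation)      # old entries are overwritten
        # Output: react to how the present deviates from the compressed past.
        return observation - self.mean

agent = Agent()
for x in range(100):
    agent.step(x)
print(len(agent.recent), round(agent.mean, 1))   # memory stays fixed-size
```

After 100 steps the agent holds only five recent observations and one summary statistic, rather than all 100 inputs; that is the compression/discarding trade-off the post describes.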
Quote from: alancalverd on 28/06/2020 18:35:44

Please remind me, in one paragraph, of your universal terminal goal, and whether we agreed on it!

Keeping the last conscious being in existence. Any conscious being can be considered a modified copy of it, hence there is some value in keeping their existence.
The task of distinguishing individuals can be difficult — and not just for scientists aiming to make sense of a fragmented fossil record. Researchers searching for life on other planets or moons are bound to face the same problem. Even on Earth today, it’s clear that nature has a sloppy disregard for boundaries: Viruses rely on host cells to make copies of themselves. Bacteria share and swap genes, while higher-order species hybridize. Thousands of slime mold amoebas cooperatively assemble into towers to spread their spores. Worker ants and bees can be nonreproductive members of social-colony “superorganisms.” Lichens are symbiotic composites of fungi and algae or cyanobacteria. Even humans contain at least as many bacterial cells as “self” cells, the microbes in our gut inextricably linked with our development, physiology and survival.
Krakauer and Flack, in collaboration with colleagues such as Nihat Ay of the Max Planck Institute for Mathematics in the Sciences, realized that they’d need to turn to information theory to formalize their principle of the individual “as kind of a verb.” To them, an individual was an aggregate that “preserved a measure of temporal integrity,” propagating a close-to-maximal amount of information forward in time.

Their formalism, which they published in Theory in Biosciences in March, is based on three axioms. One is that individuality can exist at any level of biological organization, from the subcellular to the social. A second is that individuality can be nested — one individual can exist inside another. The most novel (and perhaps most counterintuitive) axiom, though, is that individuality exists on a continuum, and entities can have quantifiable degrees of it.

“This isn’t some binary function that suddenly has a jump,” said Chris Kempes, a physical biologist at the Santa Fe Institute who was not involved in the work. To him as a physicist, that’s part of the appeal of the Santa Fe team’s theory. The emphasis on quantifying over categorizing is something biology could use more of, he thinks — in part because it gets around tricky definitional problems about, say, whether a virus is alive, and whether it’s an individual. “The question really is: How living is a virus?” he said. “How much individuality does a virus have?”
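One way to get intuition for "propagating information forward in time" is to estimate the mutual information between successive states of a process. The toy example below is my own illustration, not the authors' actual formalism: a "sticky" binary process that tends to preserve its state carries much of its past into its future, while pure noise carries none.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (bits) between two binary sequences."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in range(2):
        for b in range(2):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(2)

# A process with "temporal integrity": keeps its state 95% of the time.
sticky = [0]
for _ in range(9999):
    sticky.append(sticky[-1] if rng.random() < 0.95 else 1 - sticky[-1])

# A process with none: each state is independent of the last.
noise = [int(v) for v in rng.integers(0, 2, 10000)]

print(f"sticky I(S_t; S_t+1): {mutual_info(sticky[:-1], sticky[1:]):.2f} bits")
print(f"noise  I(S_t; S_t+1): {mutual_info(noise[:-1], noise[1:]):.2f} bits")
```

On this measure the sticky process scores well above zero bits while the noise scores near zero, giving a quantifiable, continuous degree of "individuality" in the loose sense of the axiom above.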
The problem of individuality is very important to clarify if we want to build an argument about morality. People often limit their scope of individuality to commonly found cases, which are biological human individuals. Some have expanded its definition to include other biological animals. But very few seem to be willing to expand it further to other systems, such as non-biological entities.

Even if we restrict individuality to only include biological entities, we still face problems, e.g.:
- people with multiple personality disorder
- conjoined twins
- double-headed animals
- half-brained persons (e.g. where the other half has been removed due to a disease)
- biological colonies https://en.wikipedia.org/wiki/Colony_(biology)#Modular_organisms https://en.wikipedia.org/wiki/Pando_(tree)
- symbionts https://en.wikipedia.org/wiki/Lichen
- parasites
- cancer cells
- organelles

How should we count the number of individuals when presented with those things? The problem arises if we treat individuality as a discrete thing. Using the concept of individuality as mentioned in my previous post can help solve this problem.

If we look back at the biological evolutionary process, multicellular organisms are products of cells letting go of some of their individuality to form a bigger system which gains some individuality. Those cells lose some basic functionalities so they can no longer survive when set free in an open environment. But they can develop special functionalities which are useful for the bigger system they are part of, such as photosensitivity, a nervous system, a circulatory system, armor for protection, food digestion, or chemical weaponry. A similar story also happened when the ancestors of mitochondria were engulfed by archaea to form eukaryotic organisms. Another similar story is the formation of ant or bee colonies.

The case of modern humans has similarities too. Many of them have very specialised skill sets which make them no longer capable of surviving in the wilderness for long durations. They depend on their society. How many people still grow/hunt their own food, build their own house, knit their own clothes, or heal their own wounds?
I don’t mean to alarm you, but the average human brain size is shrinking. And we can’t blame reality TV or Twitter.

No, this decline began tens of thousands of years ago. It’s something of a well-known secret among anthropologists: Based on measurements of skulls, the average brain volume of Homo sapiens has reportedly decreased by roughly 10 percent in the past 40,000 years. This reduction is a reversal of the trend of cranial expansion, which had been occurring in human evolution for millions of years prior.
More convincing evidence for cranial decline comes from studies that applied the same measuring technique to hundreds or even thousands of skulls from a particular region across the millennia. For instance, a 1988 Human Biology paper analyzed more than 12,000 Homo sapiens crania from Europe and North Africa. It showed cranial capacity decreased in the past 10,000 years by about 10 percent (157 mL) in males and 17 percent (261 mL) in females. A similar reduction was found among skulls from elsewhere on the planet, including sub-Saharan Africa, East Asia and Australia.
Explaining Our Cranial Decline

From every region with data, there seems to have been a roughly half-cup decrease in endocranial volume that began when the Ice Age gave way to the Holocene, the most recent geological epoch, which is characterized by a comfortable, stable climate. Since this pattern was first noticed in the late 1980s, researchers have proposed a number of possible explanations.

Some say the decrease came from a slight reduction in body size and robustness, related to the warmer conditions of the Holocene. Bigger bodies were better during the Ice Age, and then became disadvantageous as the climate warmed. But anthropologist John Hawks has countered this idea by showing that the documented brain reduction is too great to be explained by simply having slightly smaller bodies.

Other researchers point to the fact that brains are energetically costly organs. Though the modern human brain is only 2 percent of our body weight, it consumes almost one quarter of our energy input. By inventing ways to store information externally — cave art, writing, digital media — humans were able to shed some brain bulk, according to one proposal.

But perhaps the most convincing hypothesis is that Homo sapiens underwent self-domestication, a proposal that stems from our understanding of animal domestication. Sheep, dogs and other domesticated species differ from their wild ancestors by a number of physical and behavioral traits. These include tameness, reduced timidity, juvenile appearance into adulthood and smaller brains.

Research has shown these traits, collectively known as the domestication syndrome, are influenced by the same hormones and genes. Humans selectively bred animals with these desirable features, creating today’s pets and livestock.
The self-domestication hypothesis — or what anthropologist Brian Hare called “survival of the friendliest” — suggests we also did this to ourselves.

The idea is that, within Stone Age societies, cooperative, level-headed individuals were more likely to survive and reproduce than combative, aggressive ones. Those pro- or anti-social inclinations were influenced by genes regulating hormones, which also affected physical traits, including body and brain size. Over time, “survival of the friendliest” led to humans with slighter builds and brains on average. So although there was a reduction in skull size — and possibly intelligence — human cooperation grew, cultivating greater collective wisdom. A few smaller social brains can surely outwit one lonely large noggin.
There are reasons why I used those words as the title of this thread. The term universal is to emphasize that the goal is applicable universally, including for aliens and artificial life. The term utopia is to show that, in my opinion, the goal is still unachievable in the foreseeable future.

Focusing too much on internal state while neglecting external conditions can be fatal. Just look at drug addicts who hack their brain chemistry just to feel good and happy regardless of their surrounding reality.

As I discussed in another thread, I think that feelings, love, happiness, sadness, pain and pleasure are tools that help give us a better chance to survive. Only survivors can think/contemplate retrospectively.
The Great Filter, in the context of the Fermi paradox, is whatever prevents non-living matter from undergoing abiogenesis, in time, to expanding, lasting life as measured by the Kardashev scale.[1][2] The concept originates in Robin Hanson's argument that the failure to find any extraterrestrial civilizations in the observable universe implies the possibility that something is wrong with one or more of the arguments from various scientific disciplines that the appearance of advanced intelligent life is probable; this observation is conceptualized in terms of a "Great Filter" which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species with advanced civilizations actually observed (currently just one: human).[3] This probability threshold, which could lie behind us (in our past) or in front of us (in our future), might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction.[1][4] The main counter-intuitive conclusion of this observation is that the easier it was for life to evolve to our stage, the bleaker our future chances probably are.

The idea was first proposed in an online essay titled "The Great Filter - Are We Almost Past It?", written by economist Robin Hanson. The first version was written in August 1996 and the article was last updated on September 15, 1998. Since that time, Hanson's formulation has received recognition in several published sources discussing the Fermi paradox and its implications.
The most important thing is to keep the most important thing the most important thing.– From the book “Foundation design”, by Coduto, Donald P.
Compared to the chess analogy, the universal utopia can be paired as follows:
- Preventing checkmate of one's own king is like preventing currently existing conscious systems from extinction. This rule is universal for any conceivable conscious system.
- Checkmating the opponent's king is like attaining a system of maximum consciousness level. The maximum is infinite, hence the term utopia is used.
- Preserving time and energy is just like preserving available resources to achieve the goals above more efficiently, hence improving the probability of achieving those goals.
One way to look at universal utopia is by contrasting rich versus poor.

If you were independently wealthy and rich, you could buy or rent aspects of external reality to help push your utopian buttons. You could eat the finest food to stimulate your taste buds for pleasure and joy. You could travel the world to stimulate your visual senses with awe. You could hire others to simply agree with you and tell you that you are so great. You could migrate, house to house, on an annual cycle, so the climate is always the way you like it. This may work in terms of personal utopia. However, the problem is there are not enough resources for everyone to do this and make it universal. It can lead to individual utopia, but not universal utopia.

On the other hand, the poor man does not have the money to use the external world to push his utopia buttons. He cannot afford all the things needed to make this daily and perpetual. The poor man can save and get a short-term utopian buzz, here and there. Instead he needs to find ways to make the best of his limited external situation. He needs to find a place, inside himself, where he can push his own utopian buttons, so he can see and feel good using only the simple and free things of life. This approach does not need the same level of resources as externally induced utopia. It could become universal, if enough people knew how to do it. However, it is easier to use the external prosthesis approach, based on money, since culture shows us the finer things. So people work hard to achieve that end, but with most falling short of full-scale individual or universal utopia.
Bitcoin was officially born in January 2009, when a person or group going by the pseudonym Satoshi Nakamoto released the open source code for the software. Nakamoto mined the very first block of the first blockchain and left what has been variously interpreted as a statement, a clue, or a means of marking the date:

‘The Times 03/Jan/2009 Chancellor on brink of second bailout for banks.’

This is obviously a reference to a headline in The Times newspaper from that date. While it’s possible that Nakamoto just picked the first headline they saw on the nearest newspaper, and it was totally random, cryptocurrency enthusiasts tend to unanimously see it as a statement of intent. At the time, the 2008 financial crisis was still unravelling. It’s assumed that Bitcoin was, at least in part, a reaction to the widespread anger and frustration at the existing financial system.
Could artificial intelligence ever gain true consciousness? This documentary explores what might unfold if superintelligent AI acquired consciousness, how it might see itself, and what its impact might be on our world and beyond.