Black swans were assumed to be nonexistent because nobody had reported any, until they turned up. That is not the same analysis by any means. There are three aspects to my statement: (a) there is no evidence of previous visits in x years; (b) assuming that there is something out there, what is the probability of its finding us before we are extinct in the next y years? Bayesian statistics says y/x = 1/9000 or thereabouts. (c) Now multiply by z, the likelihood that a superintelligent species has the time, inclination and capability to look for and visit a less intelligent one, and you have the nonzero but very small probability P = yz/x of this particular black swan turning up.

Now consider what you can do about it. Said visitor by definition has more intelligence and more capability than us, and an unknown objective in making the visit, so we can't prevent it or predict the outcome. So there's no point in worrying about it - your black swan has become a lightning strike!
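The arithmetic behind P = yz/x can be sketched with purely illustrative numbers (x, y and z below are hypothetical placeholders chosen only so that y/x comes out near the 1/9000 figure, not claims about real values):

```python
# Back-of-envelope version of the estimate above.
# All constants are hypothetical placeholders.
x = 9_000_000  # years with no evidence of previous visits
y = 1_000      # years before we might be extinct
z = 0.01       # chance a superintelligence would bother looking for us

rate = y / x   # Bayesian-style rate estimate: 1/9000
P = z * rate   # P = yz/x, the chance of this black swan turning up

print(rate)    # ≈ 0.000111
print(P)       # ≈ 1.11e-06
```

Whatever values you plug in, the point of the post stands: P is nonzero but tiny, and z only shrinks it further.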
But technological advance has been shown to be exponential.
Quote from: hamdani yusuf on 25/11/2023 00:32:29
But technological advance has been shown to be exponential.

I think not. Moore's Law, for instance, extrapolated indefinitely implies capacity extending beyond the physical boundary of one atom or even one electron per bit, which cannot happen. Most technologies follow an S curve.
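The physical-limit point can be made concrete with a toy extrapolation (the starting feature size, halving period and atomic diameter below are rough assumptions for illustration, not measured values):

```python
# Toy extrapolation of Moore's-law-style shrinkage until a feature would be
# smaller than a silicon atom. All constants are rough assumptions.
feature_nm = 5.0    # assumed linear feature size in 2020
year = 2020
ATOM_NM = 0.2       # approximate diameter of a silicon atom
HALVING_PERIOD = 5  # assumed years for linear dimensions to halve

while feature_nm > ATOM_NM:
    feature_nm /= 2
    year += HALVING_PERIOD

print(year)  # → 2045
```

Under these assumptions the trend hits atomic scale within a few decades, which is why the curve must flatten into an S rather than stay exponential.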
Aviation is a fine example of the opposite. Mach 2 passenger travel was available in the 1970s, and the 550-seat A380 flew in 2006, but both have been abandoned in favor of 300-seat subsonic aircraft that actually meet the need to move people over long distances in comfort and at a sensible price.
Quote from: alancalverd on 01/08/2023 10:23:34
Quote from: hamdani yusuf on 01/08/2023 04:18:06
Generative A.I. Will Change Everything.
I think not. It's just another tool in the hands of humans, so still motivated by greed, superstition or altruism.

Which humans? Eventually the jobs of CEOs, investors, politicians and lawmakers will be taken over by AI. Sooner or later we need to solve the fundamental problem of goal alignment between humans and machines, which requires a common basic understanding of the universal terminal goal, a universal moral compass, and an accurate model of how the universe works. The later it gets solved, the more damage will be done, which would be a less efficient route to the future.
Quote from: hamdani yusuf on 01/08/2023 04:18:06
Generative A.I. Will Change Everything.

I think not. It's just another tool in the hands of humans, so still motivated by greed, superstition or altruism.
Generative A.I. Will Change Everything.
https://openai.com/blog/introducing-superalignment

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

While superintelligence seems far off now, we believe it could arrive this decade.

(Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system.)

Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI. But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

(Other assumptions could also break down in the future, like favorable generalization properties during deployment or our models' inability to successfully detect and undermine supervision during training.)

Our approach

Our goal is to build a roughly human-level automated alignment researcher.
We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline:

To provide a training signal on tasks that are difficult for humans to evaluate, we can leverage AI systems to assist evaluation of other AI systems (scalable oversight). In addition, we want to understand and control how our models generalize our oversight to tasks we can't supervise (generalization).

To validate the alignment of our systems, we automate search for problematic behavior (robustness) and problematic internals (automated interpretability).

Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).

We expect our research priorities will evolve substantially as we learn more about the problem and we'll likely add entirely new research areas. We are planning to share more on our roadmap in the future.
Why can't it be the other way around, where humans need to align their values to the universal values which are also shared with the most intelligent agents possible?
Quote from: hamdani yusuf on 27/11/2023 13:56:09
Why can't it be the other way around, where humans need to align their values to the universal values which are also shared with the most intelligent agents possible?

Because we are human. We build machines to serve us, not the other way around. And you have no idea what a universal value might be.

And what's so special about intelligence? On a crude measure like, say, an IQ test, Osama bin Laden would score higher than my dog. But my dog doesn't hate Jews or Americans, so I'd rather support her values and goals than his.
Intelligence is the ability to surprise.
And so far the only suggestion of a UTG is your own invention, with no test of your assertion that it is universal. In what way is it more valid than, say, Putin's or Hamas's goal?
Universal: (a - literal) found everywhere or (b - figurative) present in or adopted or practised by all humans. You have shown no proof either.
(a) therefore it isn't strictly universal unless you think every atom in the universe is conscious, or that the presence of one conscious entity (whatever that might mean) confers its goal to an infinite volume of space. But in that case the existence of any two conscious entities with different goals (e.g. predator and prey) means that there is no universal goal.
(b) common usage of the word "universal" as in "universal declaration of human rights".
How do you define human?
How much of the known universe has humans in it?
Quote from: hamdani yusuf on 29/11/2023 11:30:40
How do you define human?

A member of the species Homo sapiens - a mostly hairless bipedal mammal with (mostly) 46 chromosomes and a unique desire and ability to kill other members of the same species for no logical reason.

Quote
How much of the known universe has humans in it?

All of it, but AFAIK only in one tiny place, and not for very long.
Currently, humans are special as the group with the most capability to pursue goals, i.e. most conscious, compared to other organisms.
Quote from: hamdani yusuf on 30/11/2023 09:40:30
Currently, humans are special as the group with the most capability to pursue goals, i.e. most conscious, compared to other organisms.

How do you know that? We certainly have a lot of very trivial goals, but the average virus or bed bug is much better at achieving its one aim in life.