Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
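(Presumably the superscript gets flattened in transit. For anyone wanting to write it so it can't be mangled, here is how the exponent is marked up explicitly in LaTeX; the plain-text fallbacks are just conventions, not anything from the original post.)

```latex
% 10 to the power of 21, with the exponent marked up explicitly
$10^{21}$   % renders with a superscript 21, never as "1021"
% plain-text fallbacks where markup is unavailable: 10^21 or 1e21
```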
Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the current generation of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude. These "frontier models" have many flaws: they hallucinate scholarly citations and court cases, perpetuate biases from their training data and make simple arithmetic mistakes. Fixing every flaw (including those often exhibited by humans) would involve building an artificial superintelligence, which is a whole other project.

Nevertheless, today's frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of AI and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI, just as the 1945 ENIAC is now recognized as the first true general-purpose electronic computer.

The ENIAC could be programmed with sequential, looping and conditional instructions, giving it a general-purpose applicability that its predecessors, such as the Differential Analyzer, lacked. Today's computers far exceed ENIAC's speed, memory, reliability and ease of use, and in the same way, tomorrow's frontier AI will improve on today's. But the key property of generality? It has already been achieved.
Why do people try to build robots that look like humans? No engineer would design a machine that:
- needs half of its processing power to stand still
- burns fuel at half maximum rate even when asleep
- only has one opposable thumb on each hand
and so forth. Evolution produced this dead end. Time for some intelligent design!
We have plenty of humans to do work that is suitable for humans. The value of robots is to do stuff that humans can't, or that could be better done by something more specific to the job.
Plain-language programming, self-fuelling and self-repairing, safety-conscious, often bringing new procedures and insights to the task; and if they run out of materials, they are quite capable of phoning the shop, ordering more, and taking the van to collect it.
I don't have to show them how to use a saw, hammer or shovel and I wouldn't want them to "replicate my actions" anyway - I'm pretty crap at most trades.
Large language models like GPT-3 have revolutionized AI by achieving impressive performance on natural language tasks. However, they are fundamentally limited by their fixed context window, the maximum number of tokens they can receive as input. This severely restricts their ability to carry out tasks that require long-term reasoning or memory, such as analyzing lengthy documents or having coherent, consistent conversations spanning multiple sessions.

Researchers from UC Berkeley have developed a novel technique called MemGPT (project site is here, repo is here) that gives LLMs the ability to intelligently manage their own limited memory, drawing inspiration from operating systems. MemGPT allows LLMs to selectively page information in and out of their restricted context window, providing the illusion of a much larger capacity. This lets MemGPT tackle tasks involving essentially unbounded contexts using fixed-context LLMs.

MemGPT represents an important milestone in overcoming the limited context problem for LLMs. The key insights are:
- Hierarchical memory systems allow virtualizing essentially infinite contexts.
- OS techniques like paging and interrupts enable seamless information flow between memory tiers.
- Self-directed memory management removes the need for human involvement.

Rather than blindly scaling model size and compute, MemGPT shows we can unlock LLMs' potential within their fundamental constraints through software and system design.
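To make the OS analogy concrete, here is a minimal sketch of the paging idea, assuming none of MemGPT's actual class or function names (ContextManager, page_in and so on are invented for illustration; see the repo for the real interface):

```python
# A minimal sketch of the MemGPT paging idea, not the project's actual API.
# A fixed-size "main context" (the LLM's window) is backed by an unbounded
# archive; entries are paged out when the window fills and can be paged
# back in by an explicit retrieval call, as an OS does with RAM and disk.

from collections import deque

class ContextManager:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens   # fixed context budget
        self.main_context = deque()    # what the LLM actually sees
        self.archive = []              # unbounded external storage
        self.used = 0

    def _evict(self, needed: int):
        # Page the oldest entries out to the archive until there is room.
        while self.used + needed > self.max_tokens and self.main_context:
            tokens, text = self.main_context.popleft()
            self.archive.append(text)
            self.used -= tokens

    def insert(self, text: str):
        tokens = len(text.split())     # crude token count, good enough here
        self._evict(tokens)
        self.main_context.append((tokens, text))
        self.used += tokens

    def page_in(self, query: str):
        # A retrieval "interrupt": pull matching archived text back in.
        matches = [t for t in self.archive if query.lower() in t.lower()]
        for text in matches:
            self.insert(f"[recalled] {text}")

ctx = ContextManager(max_tokens=8)
ctx.insert("My name is Ada.")
ctx.insert("We discussed analog computing.")
ctx.insert("Back to the build plans.")   # evicts the older turns
ctx.page_in("Ada")                       # recalls the evicted fact
```

The design point is the analogy itself: the fixed window plays the role of RAM, the archive plays the role of disk, and retrieval calls act like page faults that the model can trigger on its own.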
The robot makers only need to train one robot once for each task. The training results can be duplicated to a limitless number of identical robots through over-the-air updates as necessary.
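As a rough illustration of that claim (not any real robot vendor's update mechanism; the names and structures below are made up), the whole distribution step reduces to copying one verified artifact to the fleet:

```python
# Hedged sketch of "train once, deploy everywhere": one trained policy
# file is checksummed and pushed to every robot, which verifies it
# received the identical training result before swapping it in.

import hashlib

def package_policy(weights: bytes) -> dict:
    return {"blob": weights, "sha256": hashlib.sha256(weights).hexdigest()}

def apply_update(robot_state: dict, update: dict) -> dict:
    # Verify integrity, then install; all robots end up bit-identical.
    assert hashlib.sha256(update["blob"]).hexdigest() == update["sha256"]
    robot_state["policy"] = update["blob"]
    return robot_state

update = package_policy(b"trained-carpentry-policy-v1")
fleet = [apply_update({"id": i}, update) for i in range(1000)]
```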
Quote from: hamdani yusuf on 19/10/2023 03:19:29
The robot makers only need to train one robot once for each task. The training results can be duplicated to a limitless number of identical robots through over-the-air updates as necessary.

My construction gang can work anywhere without retraining. The sites vary from existing protected buildings in a noise-sensitive city, via derelict rubble and junkyards, to virgin woodland, but they can strip the site, build a shed and install an MRI unit, from a book of drawings. They even train their own apprentices whilst doing it.
They can master the complex tasks fresh from the factory.
Quote from: hamdani yusuf on 19/10/2023 15:53:17
They can master the complex tasks fresh from the factory.

So you have a carpenter robot and I have a human contractor. I tell my guy (Marek - he's very good): "We need a new reception desk in the office in London and a roof extension to the factory outside Bristol." Next day he gives me a price and orders the materials.

You tell your robot: er......um.....
We discuss:
- Why he expects AGI around 2028
- How to align superhuman models
- What new architectures are needed for AGI
- Has Deepmind sped up capabilities or safety more?
- Why multimodality will be the next big landmark
- & much more

Timestamps
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of Deepmind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality
The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI, which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.
About the episode: This model of computing would use 1/1000th of the energy today's computers do. So why aren't we using it? What if the next big technology was actually a pretty old technology? The first computers ever built were analog, and a return to analog processing might allow us to rebuild computing entirely. Analog computing could offer the same programmability, power, and efficiency as the digital standard, at 1000x less energy than digital. But would switching from digital to analog change how we interact with our technology? Aspinity is tackling the major hurdles to optimize the future landscape of computing.
In this video, I'll explore the intriguing concept that our brains might just be the original surveillance technology. We'll take a deep dive into how our minds have evolved to observe, adapt, and anticipate, much like the way AI operates.

Meredith Whittaker's thought-provoking statement about AI being a "surveillance technology" sparked my curiosity. Is there a connection between the advanced surveillance capabilities of AI and the inherent surveillance mechanisms in our own brains? Let's find out.

We'll break down the parallels between AI functionality and the human brain. Are our brains finely tuned to navigate complex social environments and ensure our survival? Could our admiration for AI's predictive power actually be an appreciation for our own innate surveillance capabilities?

Join me as we scrutinize surveillance from its early roots in survival through environmental observation to the development of social surveillance within human societies. We'll explore how memory, predictive abilities, neuroplasticity, linguistic evolution, and more are all interconnected facets of our brains' surveillance prowess.

And don't forget, as AI advances and mirrors our surveillance instincts, it's essential to contemplate the ethical implications, safeguards, and impacts on society, governance, and personal interactions.

So, let's embark on this journey together, as we uncover the intriguing relationship between AI and the human brain in the context of surveillance. If you're curious like I am, stay tuned for a fascinating exploration of this topic.