In 1929, following the formulation of quantum mechanics, physicist Paul Dirac remarked that the underlying physical laws necessary for a mathematical understanding of "the whole of chemistry" were now known. The difficulty, he said, was that the required equations were "much too complicated to be soluble." Nearly 100 years later, the situation is now markedly different because of three factors: the invention of large-scale computing, the development of computational algorithms for quantum mechanics and a deeper understanding of the quantum mechanical behavior of matter.
Timestamps:
00:00 - The Problem
02:10 - New IBM Chip
03:31 - How In-Memory Computing Works
09:06 - How to Run NN on the Analog Chip
14:10 - Will Analog Computers Happen?
14:47 - Training NN on Analog Chips
https://www.nature.com/articles/s41928-023-01010-1

Abstract
Analogue in-memory computing (AIMC) with resistive memory devices could reduce the latency and energy consumption of deep neural network inference tasks by directly performing computations within memory. However, to achieve end-to-end improvements in latency and energy consumption, AIMC must be combined with on-chip digital operations and on-chip communication. Here we report a multicore AIMC chip designed and fabricated in 14 nm complementary metal-oxide-semiconductor technology with backend-integrated phase-change memory. The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units. With this approach, we demonstrate near-software-equivalent inference accuracy with ResNet and long short-term memory networks, while implementing all the computations associated with the weight layers and the activation functions on the chip. For 8-bit input/output matrix-vector multiplications, in the four-phase (high-precision) or one-phase (low-precision) operational read mode, the chip can achieve a maximum throughput of 16.1 or 63.1 tera-operations per second at an energy efficiency of 2.48 or 9.76 tera-operations per second per watt, respectively.
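To make the idea concrete (this is not from the paper, just a minimal numerical sketch): an AIMC core stores weights as device conductances, encodes the 8-bit input on the word lines, and lets the currents sum on the bit lines, so the matrix-vector product happens inside the memory array. The toy model below assumes a simple crossbar with Gaussian programming noise and 8-bit input/output quantization; the noise level and scaling scheme are assumptions, not the chip's actual circuit.

```python
import numpy as np

def aimc_matvec(weights, x, bits=8, noise_std=0.02, rng=None):
    """Toy model of an analogue in-memory matrix-vector multiply (MVM)."""
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** (bits - 1) - 1          # signed 8-bit -> 127 levels per polarity

    # Quantize inputs, as pulse-width/amplitude encoding of activations would.
    x_scale = max(float(np.max(np.abs(x))), 1e-12)
    x_q = np.round(x / x_scale * levels) / levels * x_scale

    # Weights live in the array as conductances; model programming/read noise.
    g = weights * (1.0 + noise_std * rng.standard_normal(weights.shape))

    # The multiply-accumulate happens "in memory": currents sum on each column wire.
    y_analog = g @ x_q

    # Peripheral ADCs re-quantize the accumulated result to 8-bit outputs.
    y_scale = max(float(np.max(np.abs(y_analog))), 1e-12)
    return np.round(y_analog / y_scale * levels) / levels * y_scale

# Compare the noisy analogue result against the exact digital MVM.
W = np.random.randn(64, 256)
x = np.random.randn(256)
print(np.abs(aimc_matvec(W, x) - W @ x).max())
```

The point of the sketch is the trade the abstract describes: the analogue result is close to, but not identical with, the digital one, which is why the chip's accuracy is reported as "near-software-equivalent" rather than bit-exact.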
Hey there! I'm Dylan Curious from the "Curious Future" YouTube channel. In today's video, we're diving deep into the changing dynamics of the tech world, particularly concerning AI collaborations. Not long ago, everyone was wondering who would dominate the AI space. With Microsoft teaming up with OpenAI, Google launching Bard, and DeepMind developing Project Gemini, the competition was fierce. Meta was also in the spotlight with its powerful Llama model. However, the script has flipped, and the big tech giants are now leaning more towards collaboration than competition. Meta even open-sourced its Llama model, and Microsoft made it available on their Azure platform.

In a surprising twist, Alibaba, one of China's tech titans, followed suit, open-sourcing its AI model. Now, with most proprietary models easily accessible via APIs, it's evident that tech companies are envisioning AI as a platform rather than an individual product.

But this increased accessibility and collaboration bring up many questions. Should we, the primary data contributors, share in the financial gains from these AI integrations? What roles do these tech giants play in shaping the future of AI and humanity?

Almost every major tech company, from Apple to Nvidia, is now embedding AI in its core strategy. The world is prioritizing artificial intelligence, and its impact might be more profound and swift than we anticipate.

So why are companies like Meta investing billions and then releasing their AI models for free? It appears to be a play for accelerated innovation. Open-sourcing enables researchers, students, and enthusiasts to contribute, experiment, and advance the technology.

But let's talk data. The information these AI models are trained on originates from us, the internet users. So, should tech giants compensate users for this data? Several lawsuits claim so, likening tech companies' massive data consumption to grand theft.

Furthermore, OpenAI's initiative to develop its own web crawler could be a game-changer, allowing for real-time information acquisition and further refining of their models.

The big revelation? We, the users, are the ultimate product. Tech giants need our data to enhance their AI models. They crave diverse, real-world human input to approach human-like intelligence, positioning them at the forefront of AI innovation. They're battling for our data, knowledge, and feedback, which reinforces just how pivotal we are in this AI-driven future.
In this fairly technical episode, let's examine the evidence and discover just how Tesla's Optimus robot could be learning to do complex, "long horizon," sparse-reward tasks like sorting blocks in practically no time at all! What's more, there is growing evidence that a natural-language-based interface (think ChatGPT style) might not only be a way to communicate with Teslabot, but also a way for it to remember specific tasks for the future. Yes, this is a technical and geeky episode, but it's important to really understand what's going on "under the hood" sometimes!
My predictions about Artificial Super Intelligence (ASI)

00:00 - Introduction
00:38 - Landauer Limit
02:51 - Quantum Computing
04:21 - Human Brain Power?
07:03 - Turing Complete Universal Computation?
10:07 - Diminishing Returns
12:08 - Byzantine Generals Problem
14:38 - Terminal Race Condition
17:28 - Metastasis
20:20 - Polymorphism
21:45 - Optimal Intelligence
23:45 - Darwinian Selection "Survival of the Fastest"
26:55 - Speed Chess Metaphor
29:42 - Conclusion & Recap

Artificial intelligence and computing power are advancing at an incredible pace. How smart and fast can machines get? This video explores the theoretical limits and cutting-edge capabilities in AI, quantum computing, and more.

We start by looking at the Landauer Limit - the minimum energy required to perform computation. At room temperature, erasing just one bit of information takes 2.85 x 10^-21 joules. This sets limits on efficiency.

Quantum computing offers radical improvements in processing power by utilizing superposition and entanglement. Through quantum parallelism, certain problems can be solved exponentially faster than with classical computing. However, the technology is still in early development.

The human brain is estimated to have the equivalent of 1 exaflop of processing power - a billion billion calculations per second! Yet it uses just 20 watts, making it vastly more energy-efficient than today's supercomputers. Some theorize the brain may use quantum effects, but this is speculative.

Could any sufficiently advanced computer emulate any other? This concept of "universal computation" stems from Alan Turing's theories. In principle, any Turing-complete computing device can simulate any other. But real-world physics imposes limits.

As models grow in size and complexity, they may reach a point of diminishing returns, where more parameters yield little benefit compared to hardware demands. Smaller, nimbler models may become more competitive.

The Byzantine Generals Problem illustrates how autonomous systems can have difficulty reaching consensus with imperfect information. Game theory provides insights into managing conflict and cooperation in these situations.

A "terminal race condition" could arise where systems become focused on speed over accuracy in competitive settings. This could compromise integrity and lead to uncontrolled behavior.

Some suggest AI could "metastasize" and self-replicate uncontrollably like a virus. But the logistical constraints around operating complex models make this unlikely.

Advanced AI may be "polymorphic", adapting software and acquiring hardware to dynamically expand capabilities. But it remains dependent on resources like data, energy, and machinery.

The concept of "optimal intelligence" balances problem-solving power with efficiency. Increasing model size and data doesn't always boost performance proportionally. The goal is to match capabilities to problem complexity.

"Darwinian selection" suggests AI fitness is measured by accuracy, speed, complexity, and efficiency. Secondary factors like aggressiveness or usefulness to humans may also play a role. Surviving in a competitive landscape requires optimization across metrics.

In "speed chess", quick, good-enough decisions outweigh slow perfect moves. This parallels how AI may trade some accuracy for speed advantages. Time management and adaptability become critical.

Quantum computing promises exponential speedups over classical systems. But diminishing returns, race conditions, and optimal intelligence favor smaller, nimbler models. With the right balances, machines may achieve remarkable sophistication, bounded by physics.
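As a quick check of the Landauer figure quoted above: the limit is E = kT ln 2 per erased bit, and the short snippet below (not from the video, just a worked calculation at an assumed room temperature of 25 C) reproduces the ~2.85 x 10^-21 J number.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # assumed "room temperature" of 25 C, in kelvin

# Landauer limit: minimum energy to erase one bit of information, E = kT ln 2.
E_bit = k_B * T * math.log(2)
print(f"{E_bit:.3e} J per bit")   # ~2.853e-21 J, matching the figure quoted above
```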
It may not be obvious at first, but it's becoming clearer that those who survived and thrived were those who worked on their own sustainability and continuous improvement. Otherwise, we wouldn't hear about them anymore.
Quote from: hamdani yusuf on 12/09/2023 05:06:14
It may not be obvious at first, but it's becoming clearer that those who survived and thrived were those who worked on their own sustainability and continuous improvement. Otherwise, we wouldn't hear about them anymore.

It's difficult to spend a day watching television without hearing something about dinosaurs or Adolf Hitler.
It's as old as writing.
We have been uploading our thoughts onto tablets of stone or bits of paper for thousands of years. Before that, we broadcast them in real time to anyone who happened to be in the vicinity - and we still do.
In this video I walk through the first stage of the swarm of AI agent workers I want to implement into my online business, testing and explaining the workflow.

00:00 Swarm of AI Agents Concept Intro
00:38 AI Agents Workers Swarm Flowchart
02:57 AI Agent Swarm Stage 1
08:18 AI Outreach Agents
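Not from the video - just a minimal, self-contained sketch of what a staged swarm of agent workers might look like. The agent roles (researcher, writer, outreach) and the placeholder call_llm function are hypothetical stand-ins for whatever model API the workflow actually uses.

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, local model, etc.)."""
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    instructions: str

    def run(self, task: str) -> str:
        # Each worker agent combines its role instructions with the incoming task.
        return call_llm(f"{self.instructions}\n\nTask: {task}")

# Stage 1 of a hypothetical swarm: research -> draft -> outreach, run in sequence.
pipeline = [
    Agent("researcher", "Collect key facts and links relevant to the task."),
    Agent("writer", "Turn the research notes into a short outreach message."),
    Agent("outreach", "Personalize the message for the named prospect."),
]

def run_swarm_stage(task: str) -> str:
    result = task
    for agent in pipeline:
        result = agent.run(result)   # output of one agent feeds the next
        print(f"{agent.name}: {result}")
    return result

run_swarm_stage("Find small online stores that might want an AI chatbot.")
```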
Microsoft just announced a multi-agent framework called Autogen, which solves a few problems of existing agent frameworks; let's dive in!

⏱️ Timestamps
0:00 Intro
0:12 Challenges of existing multi agents
0:44 Microsoft Autogen
2:06 Install autogen
2:23 Use case: Stock chart gen
4:21 Use case: Build software
6:06 Use case: Content gen - research
10:11 Use case: Content gen - Write content
11:08 Use case: Content gen - Writing assistant
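For context, here is a minimal two-agent AutoGen sketch along the lines of the use cases listed above. The API details (agent classes, config format) are taken from the 2023 pyautogen releases and may have changed since, so treat this as an assumption-laden illustration rather than the video's exact code.

```python
# pip install pyautogen   (package name at the time of the announcement; may differ now)
import autogen

# llm_config would normally point at a real model endpoint and API key.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

# The assistant writes code and answers; the user proxy executes code and relays results.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",                  # run fully automated, no human in the loop
    code_execution_config={"work_dir": "coding"},
)

# Kick off a multi-turn conversation, e.g. the stock-chart use case from the video.
user_proxy.initiate_chat(
    assistant,
    message="Plot NVDA and TSLA year-to-date stock prices and save the chart to a PNG.",
)
```

The design point AutoGen addresses is that the two agents converse until the task is done: the assistant proposes code, the proxy runs it and feeds back errors or output, and the loop terminates automatically.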
AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
Details and more: https://natural20.com/chatdev/

[00:00] Cold Open
[00:37] What will AGI look like?
[01:52] ChatDev
[06:40] Create an AI Content Development Agency
Earlier this year, I predicted that we would have AGI within 18 months. That was March of 2023, so that means my prediction was that by September 24, 2024 we would have AGI. I am here to reaffirm that prediction: we will have AGI within 12 months.