Not necessarily. It would weigh more, and take more space.
Quote from: hamdani yusuf on 11/11/2024 16:29:03
Not necessarily. It would weigh more, and take more space.

Most dogs weigh less than most humans, take up less space, and can run faster. A camera tripod is much more stable than a human hand. There is an old saying that if you grow up on a farm, you learn to do absolutely everything, but not very well. That's humans in general. Surgical robots can have more or fewer hands than a human, operating in the same space but doing a much better job.
The reason for building humanoid robots is for versatility. Working environments have been optimized for median human bodies. If they are expected to do everything that human individuals can do, then having the same physical form factor is an obvious starting point.
Many people have used the GSM-Symbolic paper as evidence that large language models (LLMs) do not reason. Despite its popularity, the paper has many flaws, which I will cover in this video. I evaluate the paper by connecting it to the wider body of evidence and relevant AI concepts. It fails to show that models don't reason and, ironically, instead provides data that support the idea that bigger models generalize better.

Sidenote: I use 'reasoning' and 'generalization' interchangeably, but the technically correct term is 'generalization'. Reasoning requires test-time compute. See the chapter 'Benign Overfitting' for the difference.

Correction: In the chapter 'Concept: Filler Tokens' I mentioned the recursive structure of autoregressive models. This is, however, not relevant, since the filler tokens add computation in a parallel manner, not in a sequential (recursive) manner. Consider that in the paper, each nonsensical clause adds "filler" tokens to the input. Every filler token can "store computation" by being processed by each layer, which adds parallel computation. Contrast this with training a model to output filler tokens as a CoT, which would add sequential computation.

Apple's paper: https://arxiv.org/abs/2410.05229
Paper similar to Apple's with opposite conclusions: https://arxiv.org/abs/2405.00332
Anthropic mechanistic interpretability paper: https://arxiv.org/abs/2301.05217

The empirical evidence on filler tokens is quite weak, so take it with a grain of salt.
The theoretical idea of adding parallel computation is still intriguing, though, and is explained in this article: https://www.lesswrong.com/posts/oSZ2x...
Paper on the lottery-ticket hypothesis (used for benign overfitting): https://arxiv.org/abs/1803.03635

Timestamps:
00:00 - Introduction
00:57 - Paper's Claims
01:41 - Dataset
02:09 - Concept: Benign Overfitting
04:43 - Related Work
04:58 - Concept: Mechanistic Interpretability
05:26 - Concept: System 1 & System 2
06:13 - Contradictory Paper
08:05 - Results: Accuracy drop
08:55 - Results: Variance
10:03 - Results: Adding complexity
12:10 - Concept: Autoregressive Models
14:03 - Results: Adding noise
15:10 - Concept: Filler Tokens
15:56 - Conclusion
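To make the parallel-vs-sequential distinction concrete, here is a toy cost model (all numbers and function names are illustrative, not from any of the papers above): in a decoder-only transformer, filler tokens in the input add more positions processed by every layer within one forward pass (more work, same sequential depth), while CoT tokens in the output add extra autoregressive passes (more work and more sequential depth).

```python
# Toy compute model for a decoder-only transformer.
# Filler tokens in the INPUT widen a single forward pass (parallel work);
# CoT tokens in the OUTPUT add one forward pass each (sequential depth).

def forward_pass_work(n_tokens: int, n_layers: int) -> int:
    """Abstract work units for one forward pass over n_tokens positions."""
    return n_tokens * n_layers

def total_compute(prompt_len: int, filler_in_prompt: int,
                  cot_out_tokens: int, n_layers: int = 12):
    """Return (total work, sequential depth) for one generation."""
    n_in = prompt_len + filler_in_prompt
    work = forward_pass_work(n_in, n_layers)      # one pass over the prompt
    for t in range(cot_out_tokens):               # one pass per output token
        work += forward_pass_work(n_in + t + 1, n_layers)
    depth = (1 + cot_out_tokens) * n_layers       # passes are sequential
    return work, depth

base   = total_compute(prompt_len=100, filler_in_prompt=0,  cot_out_tokens=0)
filler = total_compute(prompt_len=100, filler_in_prompt=50, cot_out_tokens=0)
cot    = total_compute(prompt_len=100, filler_in_prompt=0,  cot_out_tokens=50)

print("base:", base)      # filler adds work at unchanged depth;
print("filler:", filler)  # CoT adds both work and depth
print("cot:", cot)
```

The point of the correction above falls out directly: filler tokens raise total work but leave sequential depth unchanged, whereas CoT output tokens multiply the sequential depth.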
My goal here is to introduce model-based learning and show how language understanding recently merged with gameplay AI strategies, from early chess engines to modern language models. We examine key breakthroughs in game-playing AI (TD-Gammon, AlphaGo, and MuZero) and their contribution to current large language model architectures. Special focus on the convergence of Monte Carlo Tree Search (MCTS) with neural networks, and how these techniques transformed into today's chain-of-thought reasoning.

Timestamps:
00:00 intro
01:00 definition of reasoning
03:57 intuition
06:35 MCTS
07:40 AlphaGo
09:37 World Models
10:36 MuZero
12:45 Chain/Tree of Thought
14:03 RL on Reasoning
15:41 ARC AGI Test
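Since the video centers on MCTS, here is a minimal sketch of the algorithm (UCB1 selection, random rollouts) applied to a toy game of Nim; everything here is illustrative, and AlphaGo/MuZero replace the random rollout with learned value and policy networks.

```python
import math
import random

# Toy Nim: take 1-3 objects from a pile; whoever takes the last one wins.

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player   # player to move: 0 or 1
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in range(1, min(3, self.pile) + 1) if m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Exploitation (win rate) plus exploration bonus.
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(pile, player):
    # Random playout; returns the winner.
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return player
        player = 1 - player

def mcts(pile, player, iters=2000):
    root = Node(pile, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one untried child.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.pile - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout (terminal node: previous mover won).
        winner = (1 - node.player) if node.pile == 0 else rollout(node.pile, node.player)
        # 4. Backpropagation: credit wins from each parent's perspective.
        while node:
            node.visits += 1
            if node.parent and node.parent.player == winner:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10, 0))  # optimal play leaves a multiple of 4, i.e. take 2
```

The four steps (select, expand, simulate, backpropagate) are exactly the loop AlphaGo runs, except its "simulate" step is a neural network evaluation rather than a random playout.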
Summary: MIT research indicates artificial superintelligence (ASI) may be imminent. AI dramatically accelerates scientific discovery, particularly in materials science, compressing the innovation cycle and raising concerns about job displacement. The application of deep learning, especially graph neural networks (GNNs), enables rapid material design, resulting in a surge in new discoveries and patents. However, this progress highlights the need for scientists to reskill, with one researcher stating their education felt "worthless." The observed efficiency gains and potential for AI to effectively outsource tasks to scientists suggest ASI's arrival is approaching, promising unprecedented technological and societal change.
00:00 Q Star 2.0
01:54 Some Key Terms
04:32 ARC AGI
05:13 Francois Chollet
10:36 Test Time Compute
with one researcher stating their education felt "worthless."
Would you really trust a learned paper by someone who can't spell "disappointing"?
Chapters:
0:00 Elon Musk AGI Timeline
0:26 Figure x BMW
1:41 Giveaway
2:20 Flux.1 Release
4:17 1M Token Qwen
5:02 ElevenLabs Agents
5:48 Pokemon GO Data
7:01 V4 Release
8:11 Gemini Remembers
9:05 OpenAI Voice Update
9:33 Open-Source o1
10:04 Quantum Google
11:21 GPT-4o Update
Quote from: alancalverd on 22/11/2024 16:22:04
Would you really trust a learned paper by someone who can't spell "disappointing"?

Perhaps English is not his first language.
Quote from: hamdani yusuf on 23/11/2024 11:59:44
Perhaps English is not his first language.

All the more reason to use a dictionary. First imppresions count.
Last week, DeepMind's Demis Hassabis said that AI might be able to solve problems that quantum computers were supposedly necessary for. Indeed, he said that classical systems (AI run on conventional computers) can model quantum systems. It sounds like an innocent claim, but it is certain to upset a lot of quantum computing researchers. Hassabis bases his argument on the surprising success of AlphaFold.
The Matrix is a groundbreaking AI model capable of generating infinite, high-quality video worlds in real time, offering unmatched interactivity and adaptability. Developed using advanced techniques like the Video Diffusion Transformer and Swin-DPM, it enables seamless, frame-level precision for creating dynamic, responsive simulations. This innovation surpasses traditional systems, making it a game-changer for gaming, autonomous vehicle testing, and virtual environments.

🔍 Key Topics Covered:
The Matrix AI model and its ability to generate infinite, interactive video worlds
Real-time applications in gaming, autonomous simulations, and dynamic virtual environments
Revolutionary AI techniques like the Video Diffusion Transformer, Swin-DPM, and Interactive Modules

🎥 What You'll Learn:
How The Matrix AI redefines video generation with infinite-length, high-quality simulations
The transformative impact of real-time interactivity and domain generalization in AI-driven worlds
Why this breakthrough is a game-changer for industries like gaming, VR, and autonomous systems

📊 Why This Matters:
This video uncovers a groundbreaking AI innovation that merges real-time interactivity with infinite video generation, setting the stage for a future of responsive and immersive virtual environments.

DISCLAIMER: This video delves into The Matrix AI model, highlighting its advancements, capabilities, and potential to revolutionize technology and industry through innovation.
Highlights of #nvidia (#nvda stock) founder and CEO Jensen Huang speaking at AI Summit India. Highlights include how Nvidia came to dominate AI computing once #openai released #chatgpt, why Moore's Law no longer works, the generative AI breakthroughs Jensen Huang expects now that NVIDIA Blackwell is in production, and much more.

Timestamps for this Nvidia AI Summit supercut:
00:00 Moore's Law is Dead - The Generative AI Era
05:56 NVIDIA Blackwell Data Center Accelerators
10:26 NVIDIA Generative AI Scaling 4x Per Year
13:47 NVIDIA AI Agents & Omniverse for Robots
6 Ways AI Could Go Wrong

Artificial intelligence is advancing at a pace faster than anyone could have predicted. Legislators across the planet race to keep up and protect us from what they refer to as "nightmare scenarios". Here are 6 of those situations.

-- VIDEO CHAPTERS --
00:00 Intro
02:52 Predictive Policing
06:02 Elections
09:12 Social Scoring
14:57 Nuclear Weapons
18:32 Critical Sectors
24:12 Optimist's Take
25:15 Credits

Correction: at 1:19 we misspelled "Python" - oops!
The REAL Reason People Are Scared of AI
Quote from: hamdani yusuf on 28/11/2024 11:36:58
The REAL Reason People Are Scared of AI

People are scared of people. AI is just one more weapon for bad people to use against others.
Every successful thing needs to be torn down and rebuilt. In more of an intimate conversation than a lecture, Huang relates his experience as an engineer, entrepreneur, and innovator. Find out how he meets the ever-constant challenge of re-invention as co-founder and CEO of NVIDIA.
"Innovation needs a lot of experimentation, experimentation needs exploration, and explorations will result in failures. If you do not have tolerance for failures, you won't succeed." - Jen-Hsun Huang
00:02 Introduction to the Entrepreneurial Thought Leader Lecture Series at Stanford
01:49 Jensen Huang, co-founder of NVIDIA and generous donor to the school of engineering
06:35 Having a unique perspective shapes vision and opportunities
09:02 NVIDIA CEO's vision for 3D graphics technology in the gaming industry
13:23 Importance of perspective in entrepreneurial decisions
15:37 Perspective and vision matter in shaping the trajectory of a company
20:03 Ignoring customers can be necessary for business innovation
21:57 Transitioning NVIDIA to programmable 3D graphics processors
26:04 Passion is essential for building a company
28:00 Reinventing the company through programmable shaders and taking big risks
32:00 Resource allocation and economic decision-making
33:57 Culture of innovation and risk-taking at NVIDIA
38:06 Importance of risk-taking and flexibility in entrepreneurship
39:55 Utilizing the GPU for computation beyond traditional graphics applications
43:39 Equal pay and share are fair and simple
45:38 CEOs and leaders need to be comfortable with ambiguity
49:13 Incorporating NVIDIA with minimal funding
51:11 VCs invest in great people with a large market vision
54:52 Reinventing the company every 10 years is a necessary and challenging process
57:02 Survival is important: cash is always king
1:00:52 Start a company based on passion, not money
1:02:56 Unique perspective and perseverance are key to success
Jen-Hsun at 40:02 saw the world of 2024 back in 2011. This is why his company has a 2 trillion dollar valuation today. Incredible.