Nvidia's Director of AI Jim Fan introduces the concept of the Physical Turing Test and explains how simulation at scale will unlock the future of robotics. Learn about digital twins, digital cousins, and digital nomads in this groundbreaking talk from AI Ascent 2025.
Superstar... every time I watch and listen to Jim Fan, my own world model gets updated. This is great work, and I appreciate Nvidia sharing their vision and building in public.
This was such a funny video. Very well done. It made the point with great humor. Loved the hackathon room disaster, the robot dog slipping on the banana peel, the Cheerios spilling all over. "It correctly identifies the milk. I would give that an A-minus." LOL. All of this is so relatable. It brings these concepts down to a level people can relate to and understand. Physics simulation becomes relatable.
Absolutely brilliant! Every time I listen to Jim Fan, my entire perspective expands a little more. Huge respect to NVIDIA for sharing this journey so openly. To anyone reading this: may you always stay curious and never stop growing.
TIMELINE
00:00 Introductions
10:01 The State of AI
15:38 Scaling RL
In part 2 of our conversation with Wes Roth we talk about AI hype vs. the reality of AI agents, China's role in AI open source, export controls, and the internal struggles at major tech companies navigating the future of AI.

00:00 - Why AI Agents Plateau While Humans Catch Up
00:48 - Welcome and Introduction to the Episode
01:14 - Shift from AI Idealism to Tech Nationalism
02:00 - Debating Export Controls and the China AGI Threat
03:57 - Hypocrisy in U.S. Tech Policy on AI and Chips
05:03 - U.S. vs China: Research Talent and Geopolitical Tensions
06:13 - How Export Controls Led to Huawei's Rise
07:22 - Comparing AI Risk to Nuclear War: Critiques of Eric Schmidt
08:18 - Scale AI's Pivot and Motivated Reasoning in AI Policy
09:06 - Chamath, Groq, and Talking Their Book in AI
10:30 - The Alexander Wang DeepSeek Misinformation Loop
12:16 - DeepSeek's True Costs and Research Philosophy
14:19 - Why Open Source from China Threatens U.S. AI Labs
16:22 - Sam Altman on Fast Followers and Open Sourcing Tensions
17:42 - China's Strategy: Undermine U.S. AI with Open Source
18:01 - Motivations Behind China's Industrial AI Strategy
19:29 - U.S. Tech Companies and Profit Distribution Models
21:31 - Why Google and OpenAI Want to Own the Full Stack
23:12 - Firebase Studio vs Cursor and Windsurf
25:00 - Microsoft's Missed Opportunity with VS Code and GitHub
27:13 - Span of Control and Why Big Tech Struggles to Innovate
29:01 - Google's Internal AI Politics and DeepMind's Role
30:56 - How AI Ethics Delayed Google's Progress
31:59 - Google DeepMind's Rise Post-ChatGPT
33:01 - From Research Collaboration to Commercial Secrecy
33:43 - Can Google Make the Leap From Search to AI?
35:13 - Why Google Might Need a Wartime CEO
36:11 - Apple Testimony and Search Competition Signals
37:22 - Google's Cost Problem With AI-Powered Search
38:47 - Is Google's Gemini Search Better Than OpenAI's?
40:07 - Deep Research Use Cases and Future of Search
42:22 - Creative Research, Mental Health, and Google's LLM Depth
45:30 - The Power of AI Summarization with Research Citations
46:30 - Are We Close to Realistic AI-Generated Meeting Avatars?
49:42 - AI Agents vs. Long-Term Context and Human Supremacy
50:40 - Why AI Agents Haven't Yet Replaced Headcount
51:36 - Wrap-Up and Tease for Part 2
HUGE AI breakthrough: Absolute Zero Reasoner deep dive. Self-improving AI that learns with no data! #ai #aitools #ainews #llm

00:00 Absolute Zero intro
0:50 Traditional methods of training AI models
4:00 Absolute Zero algorithm
5:01 How Absolute Zero Reasoner works
7:19 Types of training tasks
9:00 How good is Absolute Zero
10:47 Tavus
12:11 Adding Absolute Zero to existing models
13:01 Interesting findings
15:43 Uh oh...
16:50 Ablation study
18:15 More interesting findings
Google AlphaEvolve explained. This AI system makes endless discoveries and breakthroughs. #ai #aitools #ainews #agi

0:00 AlphaEvolve intro
1:21 How AlphaEvolve works
3:46 Evolution and natural selection
5:36 AlphaEvolve architecture
8:30 Matrix multiplication breakthrough
10:26 Data center breakthrough
11:39 ChatLLM and DeepAgent
12:48 TPU design
14:00 FlashAttention breakthrough
14:50 Improving Gemini models
15:37 New math breakthroughs
18:46 Things to note
It works based on the Pareto principle.
Quote: "It works based on the Pareto principle."

Which means, of course, that it is wrong 20% of the time.
LTX Video 13B installation tutorial & review. Free, fast, uncensored AI video generator. #ai #aivideo #aitools

0:00 LTX-Video 13B intro
0:42 LTXV 13B specs and performance
1:47 How and where to use LTXV 13B
2:53 LTXV tests
7:03 Keyframe tests
9:52 How to install LTXV 13B
14:21 How to use LTXV 13B image to video
17:10 How to use LTXV upscaler
19:00 How to use LTXV keyframes

Newsletter: https://aisearch.substack.com/
Find AI tools & jobs: https://ai-search.io/
Support: https://ko-fi.com/aisearch
On April 17, 2025, the MIT Shaping the Future of Work Initiative and the MIT Schwarzman College of Computing welcomed Arvind Narayanan, Professor of Computer Science at Princeton University, to discuss his latest book, "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference," co-authored with Sayash Kapoor. The presentation was followed by a discussion with Daron Acemoglu, MIT Institute Professor and Co-Director of the Shaping the Future of Work Initiative, along with audience Q&A.

00:00 Opening Remarks (Asu Ozdaglar and Daron Acemoglu)
05:45 Presentation (Arvind Narayanan)
27:00 Fireside Chat (Arvind Narayanan and Daron Acemoglu)
43:45 Audience Q+A
Tencent, Alibaba, and ByteDance just launched powerful new AI tools that are shaking up the entire tech world. From real-time image generation with Hunyuan Image 2.0 to all-in-one AI video editing with VACE and ByteDance's advanced vision-language model Seed1.5-VL, these releases outperform major players like OpenAI and Google in multiple benchmarks. With innovations in AI drawing, video generation, and multi-agent research systems, China is rapidly leading the next wave of artificial intelligence breakthroughs.

🔍 What's Inside:
00:35 Tencent releases Hunyuan Image 2.0 - https://wtai.cc/item/hunyuan-image-2-0
02:36 Alibaba launches VACE - https://github.com/ali-vilab/VACE
06:00 ByteDance unveils Seed1.5-VL and DeerFlow
06:40 Seed1.5-VL - https://arxiv.org/abs/2505.07062
09:00 DeerFlow - https://deerflow.tech/

🎥 What You'll See:
How Tencent's AI draws and updates visuals instantly from text, sketch, or voice
How Alibaba's VACE edits and animates full videos from a single prompt
Why ByteDance's AI agents are now automating entire research pipelines

📊 Why It Matters:
China's top tech firms just released AI systems that outperform OpenAI, Google, and Anthropic in key areas like real-time image generation, AI video editing, and multimodal reasoning, signaling a major power shift in the global AI race.
Quote: "which means, of course, that it is wrong 20% of the time."

But the overall efficiency is increased, so it's worth a try. Entrepreneurs take chances with much lower probability than that.
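The expected-value reasoning in this exchange (right 80% of the time, wrong 20%, yet still a net win) can be sketched with a toy calculation. All the time figures below are illustrative assumptions, not numbers from any of the videos:

```python
# Back-of-the-envelope check of the "wrong 20% of the time, but still
# worth it" argument. The payoff/cost numbers are made up for illustration.

def expected_saving(p_correct, saving_if_correct, cost_if_wrong):
    """Expected net time saved per task when using an imperfect tool."""
    return p_correct * saving_if_correct - (1 - p_correct) * cost_if_wrong

# Suppose a correct answer saves 30 minutes of work, while a wrong one
# costs 20 minutes of review and rework. At 80% accuracy:
net = expected_saving(0.8, 30, 20)
print(round(net, 1))  # about 20 minutes saved per task on average
```

The point is that a 20% error rate only matters relative to the cost of catching the errors; if reviewing a wrong answer is cheap compared to doing the task from scratch, the expected value stays positive.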
Watch all the biggest announcements from Nvidia's keynote address at Computex 2025 in Taipei, Taiwan.

0:00 Intro
0:08 Nvidia Building 6G on AI
2:13 Grace Blackwell NVL72
4:52 Grace Blackwell GPU
8:00 NVLink Fusion
9:36 DGX Spark AI Computer
11:42 DGX Station Super Computer
13:00 Nvidia RTX Pro Server
14:34 Nvidia AI Robotics
16:40 Omniverse Digital Twin
19:19 Conclusion
SOURCES
Robust Agents Learn Causal World Models: https://arxiv.org/abs/2402.10877

This video has produced quite the discussion in the comments. I appreciate that, but I can't respond to all of them, so I'll respond to the main arguments here:

1) "AI agents do work." This comes down to the definition. I laid out the four-factor definition. If you think an LLM searching the internet is an agent, then sure, they work and are useful. But that's not a huge productivity boost. Summaries of multiple websites aren't the AI revolution that so many CEOs are gushing about.

2) "Humans are bad at causal inference. They are agents, and so this disproves your claim." To clarify, my argument concerns the often-advertised, high-flying definition of AI agents as those that will act autonomously, without supervision, to automate large swaths of our intellectual labor. Humans, relative to this, are actually bad agents as well. You can already get cheap, outsourced people-as-agents internationally, and yet the SWE labor market hasn't been transformed. In fact, 10-20 years ago, many in the US thought all software engineering would be outsourced internationally, since it was an order of magnitude cheaper. Why pay 500K a year for a Google engineer when you can do this? Because reliability, expertise, and experience matter, and they're worth the money. In other words, companies are willing to pay a ton for an engineer to get things right. Said differently, understanding and deciding with respect to an accurate causal model of the environment is worth the price. So when AI agents, which don't understand the causal environment well (because they mostly only model correlations), are available and cheap, they won't displace engineers. They must be much better before they see widespread adoption.
Is 2025 the year of AI agents? Will reasoning models allow agents to solve challenging open problems? From software engineering to web task automation, it has been claimed that agents will solve challenging open problems. Unfortunately, current agents suffer from many shortcomings that reduce their utility in real-world tasks: look no further than the Rabbit R1 and the Humane Pin. In this talk, we will explore how current agents fall far short of their claimed performance in the real world and understand best practices for improving agent evaluation. Learn how to avoid known pitfalls and build AI agents that actually matter.

Recorded live at the Agent Engineering Session Day from the AI Engineer Summit 2025 in New York.
(0:00) The Besties welcome Sergey Brin!
(0:40) Sergey on his return to Google, and how an OpenAI employee played a role!
(5:58) AI's true superpower and the next jump
(12:23) AI robotics: humanoids and other form factors
(17:07) Future of foundational models and open-source
(19:59) Human-computer interaction in the age of AI
(31:09) Partner shoutouts: Thanks to OKX, Circle, Polymarket, Solana, BVNK, and Google Cloud!
Demis Hassabis is the CEO of Google DeepMind. Sergey Brin is the co-founder of Google. The two leading tech executives join Alex Kantrowitz for a live interview at Google's I/O developer conference to discuss the frontiers of AI research. Tune in to hear their perspectives on whether scaling is tapped out, how reasoning techniques have performed, what AGI actually means, the potential for an intelligence explosion, and much more. It's a deep look into AI's cutting edge, featuring two executives building it.

Chapters:
00:00 Intro & Welcome
01:30 Frontier Models Headroom
03:00 Scale vs Algorithm Debate
04:30 Data-Center Demand & Chips
06:00 DeepThink Reasoning Paradigm
08:00 Defining & Timing AGI
11:00 AlphaEvolve Self-Improvement
13:30 Why Brin Came Back to Google
15:30 Project Astra & Visual Agents
18:30 Smart Glasses Lessons from Google Glass
21:30 Veo 3 & Training-Data Quality
24:00 Lightning Round (Web, AGI Date)
26:30 Are We Living in a Simulation?
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to the time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity's uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI's growing influence in financial trading.

You can follow Zvi's excellent blog here: https://thezvi.substack.com

Timestamps:
00:00:00 Preview and introduction
00:02:01 Sycophantic AIs
00:07:28 Bottlenecks for AI agents
00:21:26 Are benchmarks useful?
00:32:39 AI agent time horizons
00:44:18 Impact of automating research
00:53:00 Limits to scaling inference compute
01:02:51 Will the future go well for humanity?
01:12:22 A good plan for safe AI
01:26:03 What makes AI different?
01:31:29 AI in trading
CHAPTERS ⤵
00:00 - AI News & Research Highlights
Kick things off with the latest breakthroughs and stories in artificial intelligence.
06:52 - Google Beam: 3D AI Video Chat Is Here
Explore Google's futuristic AI-first 3D communication platform.
08:37 - AI Controversy: Darth Vader's Voice Sparks SAG-AFTRA Backlash
Why Fortnite's AI-generated Darth Vader is causing a stir in Hollywood.
10:14 - AI Traffic Tech Cuts Crashes Without Creating New Risks
A study reveals how AI can improve road safety without side effects.
11:42 - TED Talk: The Dangers of AI and How to Avoid Them
Yoshua Bengio breaks down catastrophic AI risks and a safer future.
15:20 - Fixing Broken QR Codes with Deep Learning
How AI brings unreadable QR codes back to life using super-resolution.
18:45 - Consciousness & the Brain: All Senses Lead to One
New research uncovers deep brain links to sensory integration and awareness.
21:45 - SaaS Redefined: Self-as-a-Service Explained
A fresh take on identity, data, and digital autonomy.
23:08 - Reader's Theory of Awareness
A deep dive into a compelling framework for understanding consciousness.
25:47 - Meet the Agent Orchestrator Era
Why AI is evolving from tools to fully orchestrated agents.
26:52 - Neuralink & Intelligence: What Would Algernon Say?
A thought-provoking take on brain tech, ethics, and classic sci-fi.

SOURCES ⤵ @googledeepmind @TED @fortnite