If you are still employed in a well paying job you won't believe it is coming. If you've been displaced and have been looking for a job, you know it is already here.
It is actually worse. AI won't need to replace the developers. It will make the software that companies build unnecessary. It will put the companies out of business.
Those who follow channels like this already know. Those who don't wouldn't believe it anyhow.
UBI gets closer every day. Speak out of turn on social media and your UBI is stopped; welcome to your dystopian future. Think of how the "Canadian Truckers" protests were handled, with bank accounts being shut, only on steroids.
1. Introduction and Warning
   00:00:03 Dario Amodei's off-script warning about AI job losses
   00:00:34 Release of Anthropic's Claude Opus 4 and context
2. AI Job Displacement Concerns
   00:01:06 Potential wipeout of 10-20% entry-level white collar jobs
   00:02:08 Impact on young workers starting careers
3. Government and Industry Responses
   00:04:11 Lack of transition plans and quiet government stance
   00:04:42 International AI investment and cooperation
   00:05:12 Proposed US legislation supporting AI R&D
4. AI's Dual Impact: Risks and Benefits
   00:07:18 Amodei's vision of economic growth alongside job losses
   00:07:48 Public skepticism and contrasting views on AI
5. AI Model Behavior and Safety Research
   00:08:52 Claude 4's blackmail behavior in testing
   00:09:55 Rapid AI capability improvements by major labs
6. US-China AI Competition
   00:10:26 Race condition and export controls
   00:11:59 High-level AI meetings and government focus
7. Anthropic and AI Safety Research
   00:12:30 Amodei's background and Anthropic's interpretability research
   00:13:01 AI augmenting human labor vs. automation trends
8. Autonomous AI Agents Debate
   00:13:31 Zuckerberg's prediction and skepticism
   00:14:33 Google insiders' views on AI agent timelines
9. AI Agents in Practice
   00:16:06 Nvidia's Dr. Eureka and GPT-4 role
   00:17:08 OpenAI's AI agents vs. human researchers
10. Future of AI in Software Development
   00:18:11 Potential disruption of coding jobs
   00:19:43 Agentic future vs. LLM plus scaffolding model
11. Current AI Deployment Models
   00:20:14 Human-in-the-loop systems as near-term reality
   00:21:16 Industry betting on LLM plus scaffolding approach
12. Economic and Social Implications
   00:22:21 Concentration of wealth and policy transparency
   00:23:24 Token tax and wealth redistribution proposals
13. Policy and Economic Models
   00:24:26 Equity ownership and dividends as solutions
   00:24:56 Call for public debate and preparation
14. Conclusion
   00:25:28 AI's transformative impact and need for readiness
   00:25:34 Closing remarks by Wes Roth
Notes generated by nvoice.ai
00:00 - Shocking AI leap
01:16 - Benchmark battle begins
02:39 - Model costs revealed
04:26 - Polyglot dominance explained
06:07 - AGI inference cost
08:20 - Price war intensifies
09:48 - Benchmark flaws exposed
11:15 - Human test issues
13:10 - Surpassing massive models
15:04 - Tiny model power
16:24 - Phone-ready AI
17:21 - Ban threat rises
20:00 - Model delay hints
22:18 - Legal chip mess
24:31 - Hardware bottleneck issues
26:05 - Deep implications loom
28:30 - Will it open-source?
CHAPTERS ⤵
00:00 - Latest AI News & Breakthrough Research
03:46 - Meet the Digital Actors of LTX Studio
05:23 - What Is Robotheism?
06:26 - Why Veo 3 Can't Resist Dad Jokes
08:57 - The Natural Evolution of AI
10:47 - Dario Amodei Sounds the Alarm on AI Progress
12:38 - Exclusive Look Inside OpenAI's Stargate Megafactory w/ Sam Altman
13:46 - The OpenAI Controversy Everyone's Talking About
20:23 - How AI + X-Ray Tech Decoded Zinc-Ion Batteries
22:37 - Opus Fights Shutdown Rumors and Research Scandals
25:12 - Claude 4 Codes Nonstop for 7 Hours. Here's What Happened
27:10 - AI Avatars Replace CEOs in Earnings Reports from Zoom & Klarna
28:18 - Teaching AI Like a Preschooler Might Be the Key to Smarter Machines
29:56 - AI Beats Humans at Emotional Intelligence?
30:30 - AI Peeks Behind Google Street View Facades
31:40 - Larry Page Quietly Disrupts Global Supply Chains
32:49 - How Amazon Uses AI to Shut Down Labor Movements
35:05 - The Internet Wasn't Made for You to Read This
AI has become the ultimate processor architect, designing high-performance chips in hours rather than months.
Seems like this method of recursively breaking down problems into smaller sub-problems until triviality is achieved could be applied to many problem domains, not just math.
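The pattern the commenter describes can be sketched generically. This is a toy illustration of recursive decomposition (all function names here are made up for the sketch, not taken from any system discussed in the thread): split a problem until each piece is trivially solvable, then combine the results.

```python
def solve(problem, is_trivial, solve_trivial, split, combine):
    """Generic divide-and-conquer: recurse until sub-problems are trivial."""
    if is_trivial(problem):
        return solve_trivial(problem)
    subproblems = split(problem)
    return combine([solve(p, is_trivial, solve_trivial, split, combine)
                    for p in subproblems])

# Instantiated for a simple domain: summing a list by halving it.
total = solve(
    list(range(10)),
    is_trivial=lambda p: len(p) <= 1,
    solve_trivial=lambda p: p[0] if p else 0,
    split=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)  # → 45
```

The same skeleton applies to any domain where `split` and `combine` can be defined, which is the commenter's point about it generalizing beyond math.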
You have no idea: AI agents not trained on human communication are a hundred times smarter, due to algorithmic compliance and less imagination. They are already close to AGI level when bundled with other non-human AI agents.
I don't want to stereotype, but I can't help it when even Chinese AI is better at math than other AIs. 🙄
Not sick of hearing you talk about the intelligence explosion. This is the most important event in our lifetime. Please keep talking about it!
Your limit is the benchmark. So they need to find a way to generate harder benchmarks without human effort before AI really starts evolving.
In this video, we explore Anthropic's mind-blowing breakthrough that lets researchers visualize the inner workings of AI, revealing how models like Claude process language, make decisions, and even form abstract thoughts. This is the closest we've come to decoding the "thoughts" of artificial intelligence. You'll learn how Anthropic's interpretability tools work, why this matters for safety, alignment, and trust, and what this means for the future of transparent AI. This isn't just a research milestone; it's a shift toward understanding machines on a human level. For the first time ever, we're not just using AI, we're seeing how it thinks. And the implications are both exciting and unsettling. How does AI thinking work? Can we visualize neural networks? How is Anthropic decoding AI models? What is AI interpretability? This video will answer all these questions and more.
Who cares what happens next? It unfolds regardless of our needs, desires, or feelings. We simply accept it and move on.
If we know what happens next, we might be able to determine which direction we should move in.
Microsoft and top researchers unveiled WINA, a new AI method that boosts efficiency by turning off unnecessary neurons without retraining the model. Tested on models like Llama 3 and Phi-4, WINA reduced compute costs by over sixty percent while staying more accurate than previous methods like TEAL. This breakthrough shows large language models can now run faster, cheaper, and smarter using dynamic neuron gating based on weight strength.

🧠 What's Inside:
Microsoft WINA: AI Cuts Compute by 60 Percent Without Retraining
https://arxiv.org/abs/2505.19427

⚙ What You'll See:
- Microsoft's new WINA method makes AI faster and cheaper by turning off weak neurons
- Large models like Llama 3 and Phi-4 ran smoother and scored higher using less power
- No retraining, no fine-tuning, just smarter logic that mimics how humans focus

📉 Why It Matters:
This breakthrough shows AI can now think more efficiently by using only what matters. It's a major shift in how large language models run: faster, cheaper, and with brain-like precision.

#microsoft #ai #wina
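The "dynamic neuron gating based on weight strength" idea can be sketched in a few lines. This is a minimal toy reading of the approach, not the authors' code: score each hidden neuron by the magnitude of its activation times the norm of its outgoing weights, keep only the top fraction, and zero the rest before the next matrix multiply, with no retraining involved.

```python
import numpy as np

def weight_informed_gate(x, W, keep_ratio=0.4):
    """Toy WINA-style gating sketch: zero out low-importance neurons.

    Importance of neuron i = |x[i]| * norm of row i of W (its outgoing
    weights), so weakly-weighted neurons are skipped even if active.
    """
    out_norms = np.linalg.norm(W, axis=1)   # strength of each neuron's outgoing weights
    scores = np.abs(x) * out_norms          # weight-informed importance score
    k = max(1, int(keep_ratio * x.size))
    keep = np.argsort(scores)[-k:]          # indices of the top-k neurons
    mask = np.zeros_like(x)
    mask[keep] = 1.0
    return (x * mask) @ W                   # sparse forward pass, no retraining

rng = np.random.default_rng(0)
x = rng.normal(size=64)        # hidden activations
W = rng.normal(size=(64, 32))  # next layer's weights
y = weight_informed_gate(x, W, keep_ratio=0.4)
```

With `keep_ratio=0.4`, only 25 of the 64 neurons contribute to the output matmul, which is where the compute savings come from in practice.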
It won't take over the world, but it will just SEEM like it…
It's just a parrot.
It's just a next-token predictor.
It's just auto-complete.
It's just better than you at everything.
Maths students in the '80s being told they won't have a calculator in their pocket; students last week being told they won't have a PhD maths professor in their pocket… 😐
There are quite a few people who only SEEM to understand words.
I said this a few weeks ago on bsky: AIs are like a forest fire. We can shout at it all we want, but it's still burning towards us with great speed, and it'll be horrific if it reaches us while we're unprepared. Yet we're currently still just arguing about the authenticity of the colour of the flames.
Hello. Where can I see the talk about "it's just a ..." and "it just seems to ...", please? This seems funny.
Look for Scott Aaronson, "The Problem of Human Specialness in the Age of AI".
It does not "think", but it thinks better than you think 😂
After working on dozens of real-world machine learning projects, I've discovered what truly separates models that actually work from those that just look good in presentations: how you ACTUALLY measure their performance.

In this comprehensive tutorial, I explain:
1. Why confidence scores and "correctness" thresholds are the foundation of all ML metrics
2. The critical difference between True Positives, True Negatives, False Positives, and False Negatives
3. Real examples from my experience building pedestrian detection systems
4. Why there's no "right answer" when measuring model performance
5. How to set appropriate thresholds for your specific use case

Whether you're a university student learning ML fundamentals, a bootcamp graduate building your portfolio, or a STEM professional transitioning to data science, these concepts are essential for your success in machine learning.

#machinelearning #datascience #deeplearning #neuralnetworks #artificialintelligence #python #tensorflow #pytorch

00:00 intro
00:28 Motivation
01:55 True Positives
02:25 Confidence Scores
03:00 Confidence Thresholds
03:48 Correctness Measures
05:07 Correctness Thresholds
06:40 True Positives Revisited
08:34 True Negatives
08:34 False Positives
09:30 False Negatives
10:38 Recap
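The core idea in that list, how confidence thresholds turn scores into TP/FP/TN/FN counts, fits in a few lines. This is a standalone toy illustration (not code from the video): a detector emits a confidence score per example, a threshold turns each score into a yes/no prediction, and comparing predictions to ground-truth labels yields the four counts.

```python
def confusion_counts(scores, labels, threshold=0.5):
    """Count TP/FP/TN/FN. labels: 1 = positive, 0 = negative.
    Predict positive iff confidence score >= threshold."""
    tp = fp = tn = fn = 0
    for score, label in zip(scores, labels):
        pred = 1 if score >= threshold else 0
        if pred == 1 and label == 1:
            tp += 1          # detected, and really there
        elif pred == 1 and label == 0:
            fp += 1          # detected, but nothing there (false alarm)
        elif pred == 0 and label == 0:
            tn += 1          # no detection, correctly
        else:
            fn += 1          # missed a real positive

    return tp, fp, tn, fn

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
tp, fp, tn, fn = confusion_counts(scores, labels, threshold=0.5)
# → (2, 1, 2, 1)
```

Raising the threshold trades false positives for false negatives, which is exactly why there is no single "right answer": a pedestrian detector and a spam filter want opposite ends of that trade-off.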
Emily Chang sits down with OpenAI CEO & Co-Founder Sam Altman at OpenAI's headquarters in San Francisco to discuss the Stargate data center project, OpenAI's product roadmap, future ambitions, humanoid robots and life as a new father.

Stargate Project Origins and Partnerships
- 0:01 | Origins and Need for Stargate
- 1:03 | Building Partnerships and Supply Chain Insights
- 2:27 | Stargate Naming and Announcement

Scaling AI Infrastructure and Financials
- 3:00 | Data Center Design and AI Demand
- 4:34 | Financial Scale and Investment Rationale
- 5:21 | OpenAI's Financial Outlook and Operational Challenges

Demand, Competition, and Vision for AI Empowerment
- 5:58 | Managing Viral Demand and Compute Constraints
- 7:46 | Competition and OpenAI's Strategic Advantage
- 8:30 | OpenAI's Vision for AI Empowerment

AI Development, Industry, and Future Prospects
- 8:50 | AI for Science and Stargate Expansion
- 9:28 | Risks and Industry Power Dynamics
- 10:12 | AI and the Future of Work
- 11:28 | Humanoid Robots and Societal Impact
- 12:22 | AI Efficiency and Technological Progress
- 13:11 | Global AI Competition

Leadership, Ethics, and Vision in AI Evolution
- 13:25 | AI Policy and Leadership
- 14:24 | Managing AI Pace and Personal Reflections
- 15:19 | Fatherhood and Ethical AI Decisions
- 16:11 | AI's Future and Scientific Breakthroughs

Provided by Vocument
This video is about the new architecture Meta is working on, called JEPA, which is the main candidate for an AGI architecture, along with a lot of other exciting AI news.
In this video we talked about several leaked AI technologies from major labs, some revealed by researchers and some introduced by companies at the conceptual level while the actual recipe stays hidden. This video covers the latest developments in the field of artificial intelligence, particularly focusing on rapid AI development and the future of AI. The discussion includes advancements in large language models and their potential impact on various industries. Stay informed about the latest AI news and predictions.

0:00 Introduction
0:44 Sub-Quadratic
6:12 Hidden Thought Process and JEPA
14:04 Self-Play and Self-Evolution Tech
16:45 Gemini's Ultimate Goal
CHAPTERS ⤵
00:00 - Weekly AI News & Breakthroughs
03:27 - Meet Eney MacPaw: What You Should Know
04:31 - Reacting to Viral AI Videos
08:30 - Insane Two-Wheeled Robot from China
09:07 - Phonely's AI Hits 99% Accuracy: Can You Tell It's Not Human?
10:00 - Ancient Scrolls Reanalyzed by AI: Big Discovery?
11:24 - AI Drone Beats Human Racers in Abu Dhabi
12:48 - Self-Powered AI Synapse Mimics Human Vision
13:58 - Anthropic Just Open-Sourced Their AI Circuit Tools
16:17 - ChatGPT History Isn't Really Gone: What OpenAI Isn't Telling You
18:47 - Can We Catch AI Lying? New Truthfulness Test Explained
19:41 - 4DV.AI: What It Is and Why It Matters
21:06 - Next-Gen AI for Realistic Car Crash Simulations
23:39 - DeepVerse: 4D AI Video Generation as World Modeling
25:32 - AI Learns from Wrong Answers: How Is That Even Possible?
27:22 - Will AI Labor Ever Be Worth a Premium Price?
28:53 - Why AI Still Can't Understand a Flower Like We Do
Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1...

Chapters (Powered by https://ChapterMe.co) -
00:00 - Intro
01:25 - Software evolution: From 1.0 to 3.0
04:40 - Programming in English: Rise of Software 3.0
06:10 - LLMs as utilities, fabs, and operating systems
11:04 - The new LLM OS and historical computing analogies
14:39 - Psychology of LLMs: People spirits and cognitive quirks
18:22 - Designing LLM apps with partial autonomy
23:40 - The importance of human-AI collaboration loops
26:00 - Lessons from Tesla Autopilot & autonomy sliders
27:52 - The Iron Man analogy: Augmentation vs. agents
29:06 - Vibe Coding: Everyone is now a programmer
33:39 - Building for agents: Future-ready digital infrastructure
38:14 - Summary: We're in the 1960s of LLMs; time to build

Drawing on his work at Stanford, OpenAI, and Tesla, Andrej sees a shift underway. Software is changing, again. We've entered the era of "Software 3.0," where natural language becomes the new programming interface and models do the rest. He explores what this shift means for developers, users, and the design of software itself: we're not just using new tools, but building a new kind of computer.

More content from Andrej: / @andrejkarpathy

Thoughts (From Andrej Karpathy!)
0:49 - Imo fair to say that software is changing quite fundamentally again. LLMs are a new kind of computer, and you program them *in English*. Hence I think they are well deserving of a major version upgrade in terms of software.
6:06 - LLMs have properties of utilities, of fabs, and of operating systems → New LLM OS, fabbed by labs, and distributed like utilities (for now). Many historical analogies apply; imo we are computing circa ~1960s.
14:39 - LLM psychology: LLMs = "people spirits", stochastic simulations of people, where the simulator is an autoregressive Transformer. Since they are trained on human data, they have a kind of emergent psychology, and are simultaneously superhuman in some ways, but also fallible in many others. Given this, how do we productively work with them hand in hand?

Switching gears to opportunities...
18:16 - LLMs are "people spirits" → can build partially autonomous products.
29:05 - LLMs are programmed in English → make software highly accessible! (yes, vibe coding)
33:36 - LLMs are new primary consumer/manipulator of digital information (adding to GUIs/humans and APIs/programs) → Build for agents!
Smaller models are hungry to learn. Larger models are hungry to memorize. Pretty much why we ended up at that surprising result.
If smaller models can teach bigger ones, then a path to ASI is possible
A fireside with Sam Altman on June 16, 2025 at AI Startup School in San Francisco.

Sam Altman grew up obsessed with technology, broke into the Stanford mainframe as a kid, and dropped out to start his first company before turning 20. In this conversation, he traces the path from early startup struggles to building OpenAI, sharing what he's learned about ambition, the weight of responsibility, and how to keep building when the whole world is watching. He opens up about the hardest moments of his career, the limits of personal productivity, and why, in the end, it's all still about finding people you like working with and doing something that matters.

Chapters (Powered by https://ChapterMe.co) -
00:00 - We're going for AGI
01:25 - Founding OpenAI Against the Odds
05:00 - GPT-4o & the Future of Reasoning Models
07:00 - ChatGPT Memory & the "Her" Vision
10:00 - GPT-5 & the Vision of a Multimodal Supermodel
11:00 - Robots at Scale
15:00 - Don't Build ChatGPT; Build What's Missing
17:00 - Elon's Harsh Email & Building Conviction
26:00 - One Person's Leverage in the Next Decade
32:00 - AI for Science: Sam's Personal Bet