Sam Altman and the OpenAI team demonstrated the new GPT-5 Reasoning Model, which will be free for all ChatGPT users starting today (Aug.
0:00 Intro by Sam Altman
1:06 ChatGPT-5 Explained
1:45 ChatGPT-5 Pricing and Availability
2:35 Building a Physics Model in ChatGPT-5
5:01 Building a French Language Learning App in ChatGPT-5
7:57 ChatGPT Voice Improvements
9:24 Building a 3D Video Game in ChatGPT-5
Testing ChatGPT-5 and comparing it to ChatGPT 4o and other older models. This is a pretty substantial step up.
How AlphaGeometry combines logic and intuition.
Timestamps:
0:00 - What's surprising
1:33 - Solve without AI
7:10 - Where AI comes in
12:48 - Grant's comments
At a private dinner in San Francisco, Sam Altman dropped a bombshell: the next CEO of OpenAI might not be human. In this video, we break down everything behind that claim, from the messy rollout of GPT-5 and why companies still love it, to trillion-dollar data center plans, brand-new AI hardware with Jony Ive, and brain-computer interfaces that could let you talk to ChatGPT with just your thoughts. We'll also cover Altman's feud with Elon Musk, his warnings about an AI bubble, privacy battles, and the looming disruption of Gen Z jobs.
🦾 What You'll See:
- Sam Altman's shocking claim about an AI CEO
- GPT-5 backlash, new modes, and enterprise adoption
- OpenAI's trillion-dollar data center vision
- Jony Ive's secret AI hardware project
- Brain-computer interfaces that talk to ChatGPT
- The bitter feud between Sam Altman and Elon Musk
- Privacy, encryption, and the AI bubble warnings
- The brutal impact on Gen Z jobs and the future of work
🚨 Why It Matters:
OpenAI isn't just releasing models anymore; it's building the foundation of a world where AI isn't a tool, it's a participant.
Qwen Image Edit review & installation tutorial. Free & open-source. How to use Qwen Image in ComfyUI with low VRAM. Qwen Image Edit vs Flux Kontext dev
#ai #aitools #aiart #ainews
0:00 Qwen Image Edit intro
0:58 Official demos
4:48 Qwen Image Edit vs Flux Kontext dev
6:17 Color correction and deblur
8:00 Ultra-zoom
8:52 Photo restoration
9:42 Character model sheet
10:24 Text editing
12:13 Removing watermarks
13:32 DataImpulse
14:49 Style transfer
16:56 How to use Qwen Image Edit online
17:58 How to install Qwen Image Edit in ComfyUI
24:16 How to use Qwen Image Edit with low VRAM
Yann LeCun is the chief AI scientist at Meta. He joins Big Technology Podcast to discuss the strengths and limitations of current AI models, weighing in on why they've been unable to invent new things despite possessing almost all the world's written knowledge. LeCun digs deep into AI science, explaining why AI systems must build an abstract knowledge of the way the world operates to truly advance. We also cover whether AI research will hit a wall, whether investors in AI will be disappointed, and the value of open source after DeepSeek. Tune in for a fascinating conversation with one of the world's leading AI pioneers.
Chapters:
00:00 Introduction to Yann LeCun and AI's limitations
01:12 Why LLMs can't make scientific discoveries
05:40 Reasoning in AI systems: limitations of chain of thought
10:13 LLMs approaching diminishing returns and the need for a new paradigm
16:29 "A PhD next to you" vs. actual intelligent systems
21:36 Consumer AI adoption vs. enterprise implementation challenges
25:37 Historical parallels: expert systems and the risk of another AI winter
29:37 Four critical capabilities AI needs for true understanding
33:19 Testing AI's physics understanding with the paper test
37:24 Why video generation systems don't equal real comprehension
43:33 Self-supervised learning and its limitations for understanding
51:10 JEPA: Building abstract representations for reasoning and planning
54:33 Open source vs. proprietary AI development
58:57 Conclusion
16:38 "Everything is systems. If one part is no longer the bottleneck of the system, it's another part that becomes the next bottleneck." Intelligence is just one part of the system, which has been the bottleneck in the past.
In this video, Igor discusses GPT-5's rocky launch and shares tips on how to get the most out of GPT-5, plus he shows you how to get the legacy ChatGPT models back, like GPT-4o and o3. He covers new releases from Google Gemini, Midjourney, Claude, and more. It's a surprisingly packed week, enjoy!
Chapters:
00:00 What's New?
00:46 GenAI Traffic Share Update
01:41 GPT-5 Discussion and Testing
11:06 Bubble AI
12:55 Claude Memories
15:20 Gemini Memories
15:52 Claude Code News
19:24 Kitten TTS
21:10 Grok 5 Announcement
21:51 Midjourney HD Video
22:10 Veo 3 API
22:33 Learning in Gemini
23:55 Google Jules
24:14 LumaLabs Video Editing
24:31 Matrix Game 2.0
AI loves to act confident, but when you ask real questions it usually falls apart. That's where Elysia comes in. Built by Weaviate, this brand-new open-source Python framework is rewriting the rules of agentic RAG systems. Instead of blind searches and vague answers, Elysia shows you its full decision tree, adapts how it displays your data, and even learns from your feedback to get smarter every time you use it. With features like chunk-on-demand, personalized feedback datasets, dynamic data displays, and multi-model routing, it's one of the most ambitious open-source AI projects we've seen. And the best part? It's free, transparent, and ready to run today.
🦾 What You'll See:
- Why traditional RAG systems fail most of the time
- How Elysia uses decision trees for transparent reasoning
- Seven adaptive data display modes for cleaner results
- Feedback-driven personalization that improves performance over time
- Smarter chunk-on-demand document handling
- Multi-model routing for cost and efficiency
- How to set up and run Elysia in minutes
- Why this could be the new standard for agentic AI systems
🚨 Why It Matters:
Elysia isn't just another framework; it's a blueprint for how future AI systems could think, reason, and adapt in real time. With transparency, adaptability, and personalization at its core, this project shows that the next wave of AI won't just be bigger; it'll actually be smarter.
#AI #Elysia #Agent
In this video I look at why LLMs hallucinate. LLMs hallucinate not because they're "broken," but because today's training and accuracy-only evaluations incentivize guessing. This is based on new research from OpenAI.
Timestamps:
00:00 Hallucinations in Language Models
00:48 How Language Models Work
02:26 The Issue with Next Word Prediction
02:50 Evaluation Mechanisms and Their Flaws
04:11 Proposed Solutions to Mitigate Hallucinations
07:16 Observations and Claims from OpenAI's Paper
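The incentive argument in that description can be sketched with a few lines of expected-value arithmetic. This is an illustrative toy (the numbers and the penalty scheme are my own assumptions, not taken from the video or the paper): under accuracy-only grading, a wrong answer and an abstention both score 0, so any guess with a nonzero chance of being right has a higher expected score than "I don't know"; only once wrong answers are penalized does abstaining become rational at low confidence.

```python
# Toy sketch: why accuracy-only evaluation rewards guessing.
# Scoring: +1 for a correct answer, -wrong_penalty for a wrong one,
# 0 for abstaining ("I don't know").

def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected score of guessing when the model is right with
    probability p_correct; abstaining always scores exactly 0."""
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.2  # the model is only 20% confident in its guess

# Accuracy-only grading (no penalty): guessing beats abstaining,
# so the training/eval signal pushes the model to guess.
assert expected_score(p, wrong_penalty=0.0) > 0  # 0.2 > 0

# Penalize confident errors (here: -1 per wrong answer): now
# abstaining (score 0) beats guessing at this low confidence.
assert expected_score(p, wrong_penalty=1.0) < 0  # 0.2 - 0.8 = -0.6
```

Under the penalized scheme, guessing is only worthwhile when the expected score is positive, i.e. when confidence exceeds wrong_penalty / (1 + wrong_penalty), which is the intuition behind scoring rules that credit calibrated abstention.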
It's crazy AI compression is still not the standard!