Better for whom? Survival of what? The inventors of AI? Fascists? Communists? Please state your objectives clearly.
Getting physics into genAI seems to be one of the next big hurdles. So many really tough vision nuts have been cracked in the last two years that it seems reasonable to think, and hope, that this problem will soon join that list.
Cockroaches, as I have reminded you many times, are likely to survive when homo "sapiens" has bombed or eaten itself into extinction because some priest or politician wanted it so.
So you're not talking about the survival of humans, but of something else. So am I, and one proven candidate already exists.
Big News: Nvidia Just Gave Robots Human-Like Learning Abilities

Have you ever wondered if robots could learn like humans? Nvidia just made a groundbreaking leap in AI and robotics, giving machines human-like learning abilities. This could change everything, from how we interact with technology to the future of automation. But how does it work, and what does this mean for the future? Let's dive in.

Nvidia's latest breakthrough allows robots to learn from their environment, adapt in real time, and even improve their performance without constant reprogramming. Using advanced AI models, these robots can now understand tasks the way humans do, making them smarter and more efficient. This innovation could revolutionize industries like manufacturing, healthcare, and even home assistance.

What makes this development so special? Unlike traditional robotics, which relies on pre-programmed instructions, Nvidia's approach enables machines to learn from experience. This means robots can now handle unpredictable situations, solve problems creatively, and even collaborate with humans more seamlessly. The potential applications are endless: imagine robots that can assist surgeons, manage warehouses, or even care for the elderly.

Could this be the next step toward true artificial general intelligence (AGI)? With Nvidia pushing the boundaries, the line between human and machine learning is blurring faster than ever. How will this impact jobs? Will robots eventually outlearn humans? What industries will benefit the most? This video explores all these questions and more.
Genspark's new AI tool called Super Agent can plan trips, analyze data, generate videos, and even make real phone calls to book restaurants or services. Powered by a mixture-of-agents system combining multiple language models and specialized tools, it handles real-world tasks like managing dietary restrictions, generating cooking tutorials, and creating South Park-style videos. Positioned as a direct competitor to Manus, Super Agent from China stands out with its built-in voice call capabilities and everyday practicality.

🔍 Key Topics:
• Genspark's launch of Super Agent, a powerful all-in-one AI assistant
• Real-world features like phone call booking, travel planning, and video generation
• How Genspark is challenging existing AI agents like Manus with real usability

🎥 What You'll Learn:
• How Super Agent uses multiple large language models and over 80 toolsets
• What makes its phone-calling ability a breakthrough in everyday AI tasks
• Why Genspark's AI is gaining global attention for solving real-life problems

📊 Why It Matters:
This video explores how Genspark's Super Agent is setting a new standard for autonomous AI tools by combining voice interaction, real-world decision-making, and multi-task automation in one powerful system.

DISCLAIMER: This video covers the latest in AI assistants, real-world automation, and how new Chinese innovations like Super Agent are changing how we interact with intelligent tools.
INSANE AI NEWS: Skyreels A2, DreamActor M-1, Lumina-mGPT, Alibaba VACE, Hi3DGen, Meta Mocha, AnimeGamer, Midjourney V7 & more #ai #ainews #aitools #agi #aivideo

Create Ultra-realistic Moving Photo Avatars using Wondershare Virbo: https://bit.ly/4igHTlg

0:00 Intro
0:45 Hi3DGen
5:25 Skeleton estimation
8:03 Infinite anime game
11:00 Skyreels A2
14:13 Dream Actor M1 AI acting
19:59 Wondershare Virbo
21:41 Easy Control free image to Ghibli
26:06 Lumina-mGPT open source 4o image generator
29:53 Meta Mocha AI animator
33:06 GPT5 and o4-mini
34:54 Video segmentation
37:38 Runway Gen4 and Midjourney V7
39:34 Alibaba VACE is out
Just like Disney cartoons, but with faces that look more like real people with no real interest in the plot.
Meta just released Llama 4 out of nowhere. Here is an overview and first thoughts.
I'm joined by Ras Mic to explain Model Context Protocol (MCP). Mic breaks down how MCPs essentially standardize how LLMs connect with external tools and services. While LLMs alone can only predict text, connecting them to tools makes them more capable, but this integration has been cumbersome. MCPs create a unified layer that translates between LLMs and services, making it easier to build more powerful AI assistants.

Timestamps:
00:00 - Intro
02:26 - The Evolution of LLMs: From Text Prediction to Tool Use
07:39 - MCPs explained
10:59 - MCP Ecosystem Overview
13:47 - Technical Challenges of MCP
15:05 - Conclusion on MCP's Potential
15:48 - Startup Ideas for Developers and Non-Technical Users

Key Points:
• MCP (Model Context Protocol) is a standard that creates a unified layer between LLMs and external services/tools
• LLMs by themselves are limited to text prediction and cannot perform meaningful tasks without tools
• MCP solves the problem of connecting multiple tools to LLMs by creating a standardized communication protocol
• The MCP ecosystem consists of clients (like Tempo, Windsurf, Cursor), the protocol, servers, and services

1) What are MCPs and why should you care?
MCPs are NOT some complex physics theory - they're simply STANDARDS that help LLMs connect to external tools and services. Think of them as universal translators between AI models and the tools they need to be truly useful. This is HUGE for making AI assistants actually capable!

2) The Evolution of LLMs: From Text Prediction to Tool Use
Stage 1: Basic LLMs can only predict text
• Ask ChatGPT to send an email? "Sorry, I can't do that"
• They're glorified text predictors (if I say "My big fat Greek..." it knows "wedding" comes next)
• Limited to answering questions, not DOING things

3) The Current State: LLMs + Tools
Stage 2: LLMs connected to tools
• Companies like Perplexity connect LLMs to search engines
• This makes them more useful but creates problems
• Each tool = different "language" the LLM must learn
• Connecting multiple tools = engineering NIGHTMARE
This is why we don't have Jarvis-level assistants yet!

4) Enter MCPs: The Game-Changer
MCPs create a UNIFIED LAYER between LLMs and external services. Instead of your AI speaking 10 different "languages" to use 10 different tools, MCPs translate everything into ONE language. Result? LLMs can easily access databases, APIs, and services without massive engineering headaches.

5) The MCP Ecosystem Explained
The MCP system has 4 key components:
• MCP Client: User-facing apps like @tempoai, Windsurf, Cursor
• Protocol: The standardized communication method
• MCP Server: Translates between client and services
• Service: The actual tool (database, search engine, etc.)
Brilliant move by Anthropic: SERVICES must build MCP servers!

6) Why This Matters For Builders
For technical folks:
• Opportunity to build tools like MCP app stores
• Easier integration between services
• Fewer engineering headaches
For non-technical folks:
• Watch closely as standards evolve
• When standards finalize, new business opportunities will emerge
• Think of MCPs as Lego pieces you'll stack to build powerful AI apps

Notable Quotes:
"LLMs by themselves are incapable of doing anything meaningful... The only thing an LLM in its current state is good at is predicting the next text." - Ross Mike
"Think of every tool that I have to connect to make my LLM valuable as a different language... MCP, you can consider it to be a layer between your LLM and the services and the tools." - Ross Mike
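To make the "one language instead of ten" idea concrete, here is a toy Python sketch of a unified tool layer. This is NOT the real MCP wire format (actual MCP uses JSON-RPC 2.0 messages between client and server); the `ToolServer` class, `dispatch` function, and the `web_search` stub are all invented for illustration. The point is only the architecture: every server exposes the same describe/call interface, so the client needs zero tool-specific glue code.

```python
import json

class ToolServer:
    """Stand-in for an MCP-style server: exposes tools through one
    uniform discover/call interface. Invented for illustration."""

    def __init__(self, name):
        self.name = name
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a callable tool."""
        def decorator(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return decorator

    def list_tools(self):
        # The client DISCOVERS capabilities instead of hard-coding them.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, name, arguments):
        return self._tools[name]["fn"](**arguments)


search = ToolServer("search")

@search.tool("web_search", "Search the web for a query string")
def web_search(query: str):
    # Stub: a real server would hit an actual search API here.
    return [f"result for {query!r}"]

def dispatch(servers, request_json):
    """One dispatcher handles ANY conforming server the same way.
    `request_json` stands in for a structured tool call emitted by an LLM."""
    req = json.loads(request_json)
    server = next(s for s in servers if s.name == req["server"])
    return server.call(req["tool"], req["arguments"])

request = '{"server": "search", "tool": "web_search", "arguments": {"query": "MCP"}}'
print(dispatch([search], request))
```

Adding a second server (a database, a calendar) requires no change to `dispatch` at all; that decoupling is what the standardization buys you.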
What's sad about these startups is that the next superintelligent frontier model may be able to do these agent tasks alone, and a lot faster.
Quote from: alancalverd on 06/04/2025 09:23:06
Just like Disney cartoons, but with faces that look more like real people with no real interest in the plot.
They will get better over time.
Quote from: hamdani yusuf on 06/04/2025 11:51:38
Quote from: alancalverd on 06/04/2025 09:23:06
Just like Disney cartoons, but with faces that look more like real people with no real interest in the plot.
They will get better over time.
Define "better".
Quote from: alancalverd on 03/04/2025 11:32:10
So you're not talking about the survival of humans, but of something else. So am I, and one proven candidate already exists.
Cockroaches can't survive in outer space. Tardigrades can. You seem to have chosen the wrong candidate.
https://en.wikipedia.org/wiki/Kardashev_scale

Starting from a functional definition of civilization, based on the immutability of physical laws and using human civilization as a model for extrapolation, Kardashev's initial model was developed. He proposed a classification of civilizations into three types, based on the axiom of exponential growth:

A Type I civilization is able to access all the energy available on its planet and store it for consumption.
A Type II civilization can directly consume a star's energy, most likely through the use of a Dyson sphere.
A Type III civilization is able to capture all the energy emitted by its galaxy, and every object within it, such as every star, black hole, etc.

Under this scale, the sum of human civilization does not reach Type I status, though it continues to approach it. Extensions of the scale have since been proposed, including a wider range of power levels (Types 0, IV, and V) and the use of metrics other than pure power, e.g., computational growth or food consumption.[2][3]
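The "does not reach Type I" claim can be put in numbers with Carl Sagan's continuous interpolation of the scale, K = (log10 P − 6) / 10, where P is power use in watts. The formula is real (it appears in the Wikipedia article linked above); the ~2×10^13 W figure for humanity's current power use is an order-of-magnitude estimate, so treat the result as approximate.

```python
from math import log10

def kardashev(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10.
    On this scale Type I = 1e16 W, Type II = 1e26 W, Type III = 1e36 W."""
    return (log10(power_watts) - 6) / 10

# Humanity's total power use is roughly 2e13 W (approximate figure),
# which lands around Type 0.73 -- below Type I, as the article says.
print(round(kardashev(2e13), 2))  # -> 0.73
print(kardashev(1e16))            # Type I threshold -> 1.0
```

Each full Kardashev type is ten orders of magnitude of power, which is why the gap between 0.73 and 1.0 is much larger than it looks.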
5 years ago, nobody would have guessed that scaling up LLMs would be as successful as it has been. This belief was due in part to the fact that all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data. Why do larger models generalize better than smaller ones? Why does training a model to predict internet text cause it to develop world models? Come deep-dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.
Do you really think that cockroaches are better than humans?