Should MoA be the default for Open Source now?
I have a copy of "The Next Hundred Years" written by Brown, Bonner and Weir in the 1950s. They made the same prediction, of "the end of work", as did the Luddites, and there's no reason to believe it now either.
A fine example is the washing machine and vacuum cleaner. Neither actually displaced the housewife: what happened was that expectations of cleanliness rose to increase her workload!
Today we take a look at my Mixture of Predictive Agents (MoPA) architecture. It uses the wisdom-of-the-crowd idea to predict an outcome by averaging the outputs of many LLM models.

00:00 MoPA Architecture
02:37 Hubspot
03:41 MoPA Python Code
10:47 Testing the MoPA System
13:41 Final Thoughts
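The averaging idea behind MoPA can be sketched in a few lines of Python. This is a hedged illustration, not the actual code from the video: the `Agent` class and the toy `predict` functions are my own stand-ins for real LLM API calls.

```python
# Minimal sketch of a Mixture of Predictive Agents (MoPA): query several
# models through a common interface, then average their numeric predictions.
from statistics import mean
from typing import Callable, List

class Agent:
    """Wraps a single model behind a common predict() interface."""
    def __init__(self, name: str, predict_fn: Callable[[str], float]):
        self.name = name
        self.predict_fn = predict_fn

    def predict(self, prompt: str) -> float:
        return self.predict_fn(prompt)

def mopa_predict(agents: List[Agent], prompt: str) -> float:
    """Average the outputs of all agents (the wisdom-of-the-crowd step)."""
    return mean(a.predict(prompt) for a in agents)

# Toy agents standing in for calls to three different LLMs.
agents = [
    Agent("model-a", lambda p: 0.70),
    Agent("model-b", lambda p: 0.80),
    Agent("model-c", lambda p: 0.90),
]
print(mopa_predict(agents, "probability of rain tomorrow"))
```

A real version would replace the lambdas with API calls and might use a median or majority vote instead of the mean when outputs are noisy.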
True AGI will combine formal and informal methods. People are already combining these tools in this way. M-x Jarvis in our time, but evolving Open Source is critical to delivering real value.

1. Empirical argument that induction must be capable of giving rise to deductive and formal systems.
2. Decoding to a less restricted but less consistent informal system and then re-encoding to formal can identify new consistency.
3. Formal systems can be used to induce coherence in informal systems, accelerating the search for new formal coherence.
4. Both logic and metalanguage can naturally emerge by generalizing logical dependence and stripping away semantics.
5. If a self-model is exposed, the metalanguage capability implies self-programming capability.

TIMESTAMPS
00:00 Intro
00:34 Deduction
02:30 Formal Systems
05:58 Inducing Deduction
11:04 Spectral Reasoning
15:38 Recursive Computation
22:32 Online Learning
25:34 Limitations
27:36 Remaining Work
31:21 Non-Problems
34:14 Doing it Wrong
37:06 Open Source: Part Deux
The interesting irony with symbolic reasoning these days is that the big LLMs all trained on it, yet it's just stuck in there, unable to really add value unless someone asks an LLM about symbolic reasoning. Even more ironically, the LLM may not even produce an accurate response. So why should a person go through all the work to learn things if the things he or she learns can't actually produce something valuable in and of themselves? Which leads me to my next point: all books are just reading machines that need people to operate them. But what if a book could read itself? What if books could read each other? You might get the platonic representation hypothesis, right? So, if knowledge is power, then what does that say about intelligence? Active inference is the way to go.
Liquid neural networks, spiking neural networks, neuromorphic chips. The next generation of AI will be very different.

00:00 How current AI works
04:40 Biggest problems with current AI
09:54 Neuroplasticity
11:05 Liquid neural networks
14:19 Benefits and use cases
15:08 Bright Data
16:22 Benefits and use cases continued
21:26 Limitations of LNNs
23:03 Spiking neural networks
26:29 Benefits and use cases
28:57 Limitations of SNNs
30:58 The future
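For a concrete taste of the spiking-neural-network idea mentioned above, here is a minimal leaky integrate-and-fire (LIF) neuron, the textbook building block of SNNs. The parameters and input values are illustrative only, not tied to any specific chip or framework from the video.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: membrane potential
# leaks over time, integrates input current, and fires a spike (then
# resets) whenever it crosses a threshold.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    v = 0.0          # membrane potential
    spikes = []
    for i in inputs:
        v = leak * v + i      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)  # spike emitted
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.5]))  # [0, 0, 1, 0, 0]
```

Note how information is carried by spike timing rather than continuous activations, which is what makes SNNs attractive for low-power neuromorphic hardware.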
How far can we scale 'artificial' intelligence and 'artificial-world' realism? We can see for ourselves the latest video models, like Gen-3 from Runway, promising a new era of entertainment and gaming, but is there any limit on what more data can do? And what about the intelligence of models: no end in sight or diminishing returns? I will go over Claude 3.5 Sonnet and draw on some recent big interviews to make the argument that we need to trust our own experiences and judgement, rather than naively listen to the words of AGI lab CEOs.
People are saying AI is overhyped, but the actual research being released says otherwise. We've had groundbreaking paper after groundbreaking paper being released this year. If anything, the pace has increased. It's just that you don't see it in the large models, yet. It takes some time to be implemented and tested since it's risky for the big players to make big changes. There is also a lot of money invested in the old technologies. Expect a lot of new players entering that are not tied to legacy architectures and hardware. Things are about to get wild.
00:00 Intro
01:05 An AI company from 2016 is researching video games
03:27 Google DeepMind AlphaGo
05:27 OpenAI and ASI
06:36 DeepMind's SIMA
11:52 NVIDIA's "foundation agent"
17:53 Sam Altman on "lack of data"
25:23 Agent Foundation Model: Microsoft, Stanford & UCLA
35:44 Ilya: "Superintelligence is within reach"
The Harvard-Google DeepMind collaboration has created an artificial neural network that can control a virtual rat's movements in an ultra-realistic physics simulation, mimicking how biological brains coordinate complex behaviors. This groundbreaking virtual rat brain model provides unprecedented insights into the neural mechanisms underlying motor control, cognition, and neurological disorders. By combining advanced machine learning techniques with high-fidelity simulations, this breakthrough paves the way for transformative progress in neuroscience, robotics, and our understanding of biological intelligence.
Scientists have developed a mind-reading AI that uses brain scans to recreate images seen by people and monkeys with impressive accuracy. A new social media app called noplace, popular with Gen Z, blends old-school customization with modern AI technology, focusing on text-based interactions. Additionally, websim.ai is an innovative tool that generates entire fake websites from user prompts, showcasing the potential and creativity of advanced AI models.
Exploring Microsoft's Open Source GraphRAG for Advanced Query Summarization

Discover Microsoft's groundbreaking GraphRAG, an open-source system combining knowledge graphs with Retrieval Augmented Generation to improve query-focused summarization. I'll guide you through setting it up on your local machine, demonstrate its functions, and evaluate its cost implications.

00:00 Introduction to GraphRAG and Its Cost Issue
00:44 Understanding Traditional RAG
01:46 Limitations of Traditional RAG
02:22 Introduction to GraphRAG
02:39 Technical Details of GraphRAG
05:46 Setting Up GraphRAG on Your Local Machine
06:22 Running the Indexing Process
12:00 Running Queries with GraphRAG
14:26 Cost Implications and Alternatives
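To make the contrast with traditional RAG concrete, here is a toy, stdlib-only sketch of the core idea: extract entities from documents, link entities that co-occur, and answer a query from an entity's graph neighborhood rather than from raw text chunks. This is emphatically not the real GraphRAG pipeline; `extract_entities` is a crude capitalization heuristic standing in for the LLM-based extraction the actual system uses.

```python
# Toy illustration of knowledge-graph-based retrieval: build a graph of
# co-occurring entities, then answer queries from an entity's neighbors.
from collections import defaultdict

docs = [
    "GraphRAG was released by Microsoft.",
    "GraphRAG combines knowledge graphs with retrieval augmented generation.",
    "Knowledge graphs store entities and relations.",
]

def extract_entities(text):
    # Stand-in for LLM-based entity extraction: keep capitalized words.
    return {w.strip(".") for w in text.split() if w[0].isupper()}

graph = defaultdict(set)
for doc in docs:
    ents = extract_entities(doc)
    for e in ents:
        graph[e] |= ents - {e}   # link entities that co-occur in a doc

def neighbors(entity):
    """Query step: retrieve everything directly connected to an entity."""
    return sorted(graph.get(entity, set()))

print(neighbors("GraphRAG"))  # ['Microsoft']
```

The real system goes much further (community detection over the graph, LLM-written community summaries, map-reduce answering), which is also where its cost comes from.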
In this video we're going to answer just how good Large Language Models (LLMs) like ChatGPT 4o, Claude 3.5, and Google's Gemini are at mathematics. I'll cite some of the results from the literature using benchmarks such as GSM8k and MATH, and we'll see several math examples along the way. References below.

00:00 How to measure AI at math?
00:56 GSM8k and GSM-Hard
02:44 The MATH Database
04:43 ChatGPT 4o vs Gemini vs Claude 3.5 Sonnet
06:13 My Linear Algebra Exams
08:32 Computational Engines

References and Citations:
* GSM8k (including graphic at 1:10): https://paperswithcode.com/sota/arith...
* GSM-Hard stats found in here: https://arxiv.org/abs/2406.07394
* Google DeepMind paper citing MATH database: https://arxiv.org/pdf/2406.06592
* I first saw the question about the smallest integer here: https://x.com/ericneyman/status/18041...
* Math Olympiad level problems (5:30): https://arxiv.org/abs/2406.07394
* Stats for Claude 3.5: https://www.anthropic.com/news/claude...
* Image of two calculators at 2:30, shared via CC-BY-SA 3.0; original here: https://www.wikidata.org/wiki/Q166882...
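For readers curious how GSM8k-style accuracy numbers like the ones cited above are produced, here is a minimal scoring sketch: pull the final number out of each free-form model answer and compare it to the reference with exact match. The `final_number` heuristic is my own simplification, not the official evaluation harness of any benchmark.

```python
# Tiny sketch of GSM8k-style scoring: extract each answer's final number
# and compute exact-match accuracy against the references.
def final_number(text):
    """Pull the last number out of a model's free-form answer, if any."""
    tokens = [t.strip("$,.") for t in text.split()]
    nums = [t for t in tokens if t.replace("-", "").replace(".", "").isdigit()]
    return float(nums[-1]) if nums else None

def accuracy(predictions, references):
    correct = sum(final_number(p) == r for p, r in zip(predictions, references))
    return correct / len(references)

# Toy predictions standing in for real model outputs.
preds = ["The answer is 42.", "So we get 17", "I think it's 8."]
refs = [42.0, 19.0, 8.0]
print(accuracy(preds, refs))
```

Real harnesses add normalization (fractions, units, LaTeX) on top of this, which is one reason reported scores can differ slightly between papers.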
Meet MultiChat: Multiple AI Models in One

Have you ever wondered how to seamlessly interact with multiple AI models at once? Meet MultiChat, the ultimate tool for leveraging the power of various AI models in a single, unified platform. Whether you're a developer, researcher, or just curious about AI, MultiChat simplifies the process, making AI more accessible and user-friendly.

MultiChat brings together the strengths of different AI models, allowing you to switch between them effortlessly. Imagine having the versatility of multiple AIs to tackle various tasks, from generating creative content to solving complex problems. MultiChat is designed to enhance your productivity and expand your capabilities, all within one interface.

This innovative tool is perfect for anyone looking to explore the potential of artificial intelligence. With MultiChat, you can experiment with different models and find the best fit for your needs without the hassle of managing separate applications. It's an all-in-one solution that saves you time and effort.

How can multiple AI models be used together? What are the benefits of using MultiChat? How does MultiChat improve productivity? Why choose MultiChat for your AI needs? How does MultiChat work? This video will answer all these questions. Make sure you watch all the way through so you don't miss anything.
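The "many models, one interface" idea can be illustrated with a tiny Python dispatcher. This is a hypothetical sketch, not MultiChat's actual implementation; the registered backends here are fakes where a real version would call each provider's API.

```python
# Sketch of a unified multi-model chat interface: register several
# backends under names, then route each request by model name.
class MultiModelChat:
    def __init__(self):
        self.models = {}

    def register(self, name, chat_fn):
        """Add a backend: any callable taking a prompt, returning a reply."""
        self.models[name] = chat_fn

    def chat(self, model, prompt):
        """Route the prompt to the named backend."""
        if model not in self.models:
            raise KeyError(f"unknown model: {model}")
        return self.models[model](prompt)

mc = MultiModelChat()
# Fake backends standing in for real provider API calls.
mc.register("echo", lambda p: p)
mc.register("shout", lambda p: p.upper())

print(mc.chat("shout", "hello"))  # HELLO
```

Because every backend sits behind the same callable interface, swapping models is a one-argument change rather than a switch between separate applications.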
Generative AI is the nuclear bomb of the Information Age. If the Internet doesn't feel as creative or fun or interesting as you remember it, you're not alone. The so-called "Dead Internet Theory" explains why. The rise of artificially generated content killed the internet. How did this happen? Why? And is it still possible to stop it?

00:00 Intro
01:50 Dead Internet Theory
11:41 Unforeseen Consequences
I don't think Hollywood is over; Hollywood as we know it is over.
It will put power in the hands of indie producers. Just imagine what kind of memes await us!
I'm an electronic music producer, and we watched the same thing happen in the 2000s, when $1 billion worth of mixing consoles housed in special buildings were replaced by $200 software in your bedroom. By 2010 you had a complete symphonic orchestra under your fingertips. And now you have singing software that sounds extremely natural for just $150, and free LLMs that can generate unlimited lyrics in seconds.

Yes, lots of jobs will disappear. Some people (those who explore new technologies and opportunities) will become very famous, which wouldn't have been possible for them without these technologies. It's just how nature and evolution work: people invent new tools, and competition gets harder.

The only difference is the speed of change. It took centuries in ancient times, but it takes just a year today. It's just life ...