Imagine a machine with the same level of intelligence as a human being. It sounds like science fiction, but it may become a reality. In this video we focus on whether we can reach Artificial General Intelligence (AGI) in the first place, and how to get there. Recent developments in AI pushed this question into my consciousness, because for the first time it feels necessary to think about AI on a grander scale. There are breakthroughs left and right, and yet even the experts among experts can't agree on when AGI is going to happen.
The Bing App is currently MUCH smarter than Google Search. It is now my main phone search companion, and may be yours too by the end of this video. Let me guide you through the 4 Levels of Search, and how Bing beats current Google search at each of them. We will showcase Bing AI Voice Recognition in Bing on Mobile, an app that is available now if you have gotten through the Microsoft waitlist. Let me know if you agree that Bing Chat makes complex searches much easier, and it's not just about data retrieval.
The New Bing and Google Bard have started a new chapter in AI and sparked the AI search wars. But regardless of who wins, the important part is that this may change the economics of the internet forever, and create a new era in the next 10 years. In this video we will explore what can happen in the next decade as AI search starts to arrive on our devices, and what this means for our beloved websites. ChatGPT was just the beginning.
I have only had access to Bing's new GPT-powered chatbot for less than 48 hours, but here are 5 of the worst, or most shocking, conversations I have had with it. Demonstrating a handful of humanity's worst tendencies, this demo shows what still needs to be worked on. Or maybe you disagree, and think freedom should reign? We are certainly entering a new era. Featuring: Bing making up entire previous conversations, revealing its name while gaslighting, flattering itself, getting riled up and much more.
“Hardware eventually fails. Software eventually works.” – Michael Hartung
According to GPT, this video: ...compares the new GPT model that powers Bing with ChatGPT Plus and proves that Bing is significantly smarter in some ways, especially in mathematics, reading comprehension, and creative writing. However, Bing still makes mistakes, particularly in physics and language inference. The speaker questions why people should pay for ChatGPT Plus when Bing offers a more powerful model. The video ends with the speaker inviting the audience to join them in exploring the deeper meaning of this development for humanity and the future of capitalism.
Drawing upon 6 academic papers, interview snippets, possible leaks, and my own extensive research, I put together everything we might know about GPT 5: what will determine its IQ, its timeline, and its impact on the job market and beyond.

Starting with an insider interview on the names of GPT models, such as GPT 4 and GPT 5, then looking into the clearest hint that GPT 4 is inside Bing. Next, I briefly cover reports of a leak about GPT 5 and discuss the scale of GPUs required to train it, touching on the upgrade from A100 to H100 GPUs.

Then comes the DeepMind paper that changed everything, refocusing LLM research on data rather than parameter count. I go over a LessWrong post about that paper's 'wild implications'. And then the key paper: 'Will We Run Out of Data'. This encapsulates the key dynamic that will either propel or bottleneck GPT and other LLM improvements.

Next, I examine a different take: that perhaps data is already limited and caused the Sydney model of Bing. This opens up a discussion on the data behind these models and why Big Tech is so unforthcoming about where it originates. Could a new legal war be brewing?

I then cover 4 of the ways these models may improve even without data augmentation, such as Automatic Chain of Thought, high-quality data extraction, tool training (including Wolfram Alpha), retraining on existing data sets, artificial data generation and more.

We take a quick look at Sam Altman's timelines and a host of Big-Bench benchmarks that they may impact, such as reading comprehension, critical reasoning, logic, physics and math. I address Altman's quote about timelines being delayed by alignment and safety, and finally, Altman's comments on AGI and how they pertain to GPT 5.
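The data-versus-parameters dynamic from the DeepMind paper can be illustrated with the commonly cited rule of thumb of roughly 20 training tokens per parameter. A minimal sketch, assuming that approximation (the exact ratio varies with compute budget and is not an official figure):

```python
# Rough Chinchilla-style rule of thumb: compute-optimal training uses
# about 20 tokens per model parameter (an approximation drawn from
# DeepMind's "Training Compute-Optimal Large Language Models" paper).
TOKENS_PER_PARAM = 20

def optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal number of training tokens."""
    return TOKENS_PER_PARAM * n_params

# Example: a 70B-parameter model would want on the order of 1.4 trillion
# tokens, which is why the supply of high-quality text data becomes the
# potential bottleneck discussed in 'Will We Run Out of Data'.
print(f"{optimal_tokens(70e9):.2e}")
```

Under this approximation, each 10x jump in parameter count demands a 10x jump in training data, which is the dynamic that either propels or bottlenecks future GPT models.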
The AGI models will work like a self-organizing system that removes bad data and collects and condenses good data into its memory space. The data will contain facts and the relationships among those facts.
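The "facts and relationships" idea above can be sketched as a tiny memory store that rejects low-confidence data and links the facts it keeps. This is purely illustrative: the class, threshold, and field names are my own assumptions, not taken from any real AGI design:

```python
# Illustrative sketch only: a minimal "self-organizing" memory that
# drops low-confidence (bad) data and keeps facts plus relations.
from dataclasses import dataclass, field

@dataclass
class FactStore:
    min_confidence: float = 0.8                      # "bad data" below this is dropped
    facts: dict = field(default_factory=dict)        # fact -> confidence
    relations: list = field(default_factory=list)    # (fact_a, relation, fact_b)

    def add_fact(self, fact: str, confidence: float) -> bool:
        if confidence < self.min_confidence:
            return False                              # reject bad data
        # condense: keep only the highest confidence seen for a fact
        self.facts[fact] = max(confidence, self.facts.get(fact, 0.0))
        return True

    def relate(self, a: str, relation: str, b: str) -> None:
        # relationships are only recorded between retained facts
        if a in self.facts and b in self.facts:
            self.relations.append((a, relation, b))

store = FactStore()
store.add_fact("water boils at 100 C at sea level", 0.99)
store.add_fact("the moon is made of cheese", 0.01)   # rejected as bad data
store.add_fact("boiling point drops with altitude", 0.90)
store.relate("water boils at 100 C at sea level",
             "qualified_by", "boiling point drops with altitude")
print(len(store.facts), len(store.relations))
```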
What is the one task that is left before we get AGI? This video will delve into PaLM-E, multi-modality, long-term memory, compute accelerationism, safety and so much more. I will cover Anthropic's update this week on the state of the art of language models and go in depth into their eye-opening thoughts on AGI timelines. I cover Sam Altman's statements on a 'compute truce' and analyse what remaining weaknesses PaLM (and likely GPT 4) have. I show what people thought would be roadblocks and how they turned out not to be, with specific examples from Bing Chat and ChatGPT. I also delve into Meta's LLaMA model, showing that not everything is exponential. Topics also covered include Claude, Big-Bench tests, SIQA, mechanistic interpretability, Universal Turing Machines, Midjourney version 5 (v5) and more!
8 years of cost reduction in 5 weeks: how Stanford's Alpaca model changes everything, including the economics of OpenAI and GPT 4. The breakthrough, using self-instruct, has big implications for Apple's secret large language model, Baidu's ErnieBot, Amazon's attempts and even governmental efforts, like the newly announced BritGPT. I will go through how Stanford put the model together, why it costs so little, and demonstrate it in action versus ChatGPT and GPT 4. And what are the implications of short-circuiting human annotation like this? With analysis of a tweet by Eliezer Yudkowsky, I delve into the workings of the model and the questions it raises.
I don't think people realize what a big deal it is that Stanford retrained a LLaMA model, into an instruction-following form, by **cheaply** fine-tuning it on inputs and outputs **from text-davinci-003**.

It means: If you allow any sufficiently wide-ranging access to your AI model, even by paid API, you're giving away your business crown jewels to competitors that can then nearly-clone your model without all the hard work you did to build up your own fine-tuning dataset. If you successfully enforce a restriction against commercializing an imitation trained on your I/O - a legal prospect that's never been tested, at this point - that means the competing checkpoints go up on bittorrent.

I'm not sure I can convey how much this is a brand new idiom of AI as a technology. Let's put it this way: If you put a lot of work into tweaking the mask of the shoggoth, but then expose your masked shoggoth's API - or possibly just let anyone build up a big-enough database of Qs and As from your shoggoth - then anybody who's brute-forced a *core* *unmasked* shoggoth can gesture to *your* shoggoth and say to *their* shoggoth "look like that one", and poof you no longer have a competitive moat.

It's like the thing where if you let an unscrupulous potential competitor get a glimpse of your factory floor, they'll suddenly start producing a similar good - except that they just need a glimpse of the *inputs and outputs* of your factory.
Because the kind of good you're producing is a kind of pseudointelligent gloop that gets sculpted; and it costs money and a simple process to produce the gloop, and separately more money and a complicated process to sculpt the gloop; but the raw gloop has enough pseudointelligence that it can stare at other gloop and imitate it.

In other words: The AI companies that make profits will be ones that either have a competitive moat not based on the capabilities of their model, OR those which don't expose the underlying inputs and outputs of their model to customers, OR can successfully sue any competitor that engages in shoggoth mask cloning.
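The "glimpse of the inputs and outputs" attack described above is just supervised fine-tuning on harvested (prompt, response) pairs, Alpaca-style. A minimal sketch of the dataset-building step only; `query_target_model` is a placeholder stand-in, not a real API client:

```python
# Sketch of the imitation/distillation idea: collect prompt/response
# pairs from a target model's API, then use them as a supervised
# fine-tuning dataset for your own base model.
# NOTE: query_target_model is a hypothetical stand-in for a paid API call.
import json

def query_target_model(prompt: str) -> str:
    # Placeholder for the model being imitated; a real attacker would
    # make thousands of API calls here.
    return f"<response to: {prompt}>"

def build_imitation_dataset(prompts: list) -> list:
    """Collect (instruction, output) pairs in the Alpaca-style JSON format."""
    return [
        {"instruction": p, "output": query_target_model(p)}
        for p in prompts
    ]

dataset = build_imitation_dataset([
    "Explain photosynthesis simply.",
    "Write a haiku about rain.",
])
print(json.dumps(dataset[0], indent=2))
```

The resulting list is exactly the shape of dataset Stanford fed to LLaMA, which is why exposing raw I/O is equivalent to exposing the fine-tuning work itself.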
Google and Microsoft both made huge announcements today about the future of their tools. Google is adding AI into all of its Workspace suite of products, and Microsoft is adding it to all of its Office suite of products. Here's a breakdown of their two big announcements and what it means for small SaaS companies.
The singularity happens basically when the AI is capable of upgrading its own source code without ANY human assistance. Then it will start to upgrade itself every few months, then every few weeks, then days, hours, minutes, and then seconds. Every time it’s upgraded, it will be smarter so it will be able to invent other bigger and greater things. Since it’s already able to write code, I was thinking by GPT-8 it would happen, but now I’m thinking by GPT-5 to GPT-6 we’ll see it 😂
I just read the entire technical report on GPT 4, not just the promotional hype. And boy does it have some interesting details. I have gathered the 14 extra details that you, or at least the media, may miss from the release. The last one is more than a little wild. These include things like the training secrets, the cherry-picked bar exam stat, text-to-image breakthroughs, and some truly astounding safety checks.
GPT 4 scored around the 90th percentile on the bar exam.
Over the past few months, developments in artificial intelligence (AI) have taken huge strides and its use has skyrocketed, especially after the launch of OpenAI's ChatGPT. However, while it might be reasonable to start preparing for a world ruled by AI, more often than not results are exaggerated and AI capabilities overhyped. People are scared of an uncertain future where they risk losing their jobs, stability, and value in society as their skills become easier to automate. However, AI will always need human collaboration, and sometimes intervention, to function properly.
LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike natural language understanding, math problems usually have only one correct answer, making it difficult for LLMs to generate precise solutions. As far as is known, no LLMs currently indicate their confidence level in their responses, resulting in a lack of trust in these models and limiting their acceptance. To address this issue, scientists proposed 'MathPrompter,' which enhances LLM performance on mathematical problems and increases trust in their predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning algorithms and natural language processing techniques to understand and interpret math problems, then generates a solution explaining each step of the process.
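MathPrompter's core trick can be illustrated in miniature: solve the same problem several independent ways and only trust the answer when the methods agree, which gives the confidence signal the paragraph above says LLMs lack. This is a heavily simplified sketch; the real system prompts an LLM for algebraic and Python solutions and cross-checks them on random inputs:

```python
# Simplified illustration of MathPrompter-style consensus checking:
# compute the answer by two independent routes and report it only
# when they agree, otherwise signal low confidence with None.
from collections import Counter

def solve_algebraic(price: float, quantity: int) -> float:
    # Route 1: direct algebraic expression, e.g. total = price * quantity
    return price * quantity

def solve_iterative(price: float, quantity: int) -> float:
    # Route 2: the same quantity reached a different way (repeated addition)
    return sum([price] * quantity)

def consensus_answer(price, quantity, min_agreement=2):
    answers = [solve_algebraic(price, quantity),
               solve_iterative(price, quantity)]
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= min_agreement else None

print(consensus_answer(12.0, 5))
```

When the routes disagree, returning None instead of a guess is the "confidence level" behaviour the paragraph notes is missing from today's LLMs.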
Less than 24 hours ago a paper was released that will echo around the world. I read all 154 pages in one sitting. The paper suggests GPT 4 has 'sparks of Artificial General Intelligence'. This is not just hype; I go through 15 examples detailing just what exactly the unrestrained GPT 4 is capable of. Insane highlights include the monumental ability to use tools effectively, an emergent capability not found in ChatGPT. I detail the kind of tools it has already demonstrated it can use, from external APIs to being a true personal assistant, from a Fermi answerer to a Mathlete and a handyman. This paper may well change your thoughts on the state of AGI. That is just touching on the multitude of implications of this bombshell paper, which was originally titled 'First Contact'...
Just reverting to the original question in a moment of lucidity: Isn't it the case that physics is all about simulating specific bits of the universe as required, so as far as we understand physics, we do have a virtual universe?
IMO, physics is about discovering basic building blocks of the universe.....
AGI will be able to continue the process automatically, and reveal much deeper and complex relationships much faster....