What "limit"? Are they claiming >100% efficiency?
If you're a highly-paid researcher at a well-funded AI startup, you can afford to retire in 3-5 years. The fact is, most people these days can't even save enough to retire by the time they're 65, never mind in just a few years from now. We are going to need some sort of bridge to a post-labour society, so that we don't end up with tens of millions of starving people who are economically worthless because of AGI/ASI.
So the "limit" only applies to silicon. Fair enough.
Most ordinary people don't even consider reaching a Type I civilization a goal within their lifetime.
Imagine a world where humanity masters every planetary resource available to it: our first step on the famous Kardashev scale of technological advancement. How distant is that step? Will we ever become a true Type-1 civilization, and how can we get there?
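For a rough sense of how distant that step is, here is a small Python sketch (my own illustration, not from the video) using Carl Sagan's interpolation of the Kardashev scale, K = (log10 P - 6) / 10 with P in watts; the ~2e13 W figure for humanity's current power use is only an order-of-magnitude estimate.

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is very roughly on the order of 2e13 W (an assumed
# round figure for illustration), which puts us near K ~ 0.73.
# Type I corresponds to about 1e16 W under this formula.
print(kardashev_level(2e13))  # ~0.73
print(kardashev_level(1e16))  # 1.0 (Type I)
```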
In this video, we'll discuss the tell-tale signs that always appear before a financial crisis hits, and I'll show you how to spot them and prepare for the inevitable.
It's fair to say that few people in tech are positioned to have a bigger impact on the future than Sam Altman. As the CEO of OpenAI, Altman and his team have overseen monumental leaps forward in machine learning, generative AI and, most recently, LLMs that can reason at PhD level. And this is just the beginning. In his latest essay, Altman predicted that ASI (Artificial Super Intelligence) is just a few thousand days away. So how did we get to this point? In this episode of our rebooted series "How To Build The Future," YC President and CEO Garry Tan sits down with Altman to talk about the origins of OpenAI, what's next for the company, and what advice he has for founders navigating this massive platform shift.

Apply to Y Combinator: https://ycombinator.com/apply

Chapters (Powered by https://bit.ly/chapterme-yc):
0:00 Coming up
0:43 Intro: Is this the best time to start a tech company?
6:27 How Sam got into YC
10:53 The early days of YC Research
12:49 Getting the first OpenAI team together
17:13 Why scaling was considered heretical
21:42 Conviction can be powerful
26:15 Commercializing GPT-4
28:53 What drew Sam to create Loopt
30:24 Learning from platform shifts
33:15 Tech incumbents are unaware of what is happening with AI
34:08 Sam's recommended startup path
36:56 Reflecting on the OpenAI drama
39:58 What startups are building with current models
44:16 Outro: Advice for early founders + final thoughts
Neuroscientist Dr. Heather Berlin explains why AI will never be conscious in its current form. AI will eventually be able to DO much of what we can do, but it will never BE what we can be. But as we merge with AI via neural implants, when do we stop being human?

Dr. Heather Berlin is a neuroscientist, a clinical psychologist, and an associate clinical professor of psychiatry and neuroscience at the Icahn School of Medicine at Mount Sinai in NY. She explores the neural basis of impulsive and compulsive psychiatric and neurological disorders with the goal of developing novel treatments. She is also interested in the brain basis of consciousness, dynamic unconscious processes, and creativity.

Heather hosts the Nova series "Your Brain" on PBS, where she explores the latest research on the neural basis of consciousness. She previously hosted "Science Goes to the Movies" on PBS and Discovery Channel's "Superhuman Showdown." She makes regular appearances on "StarTalk" with Neil deGrasse Tyson, and has appeared on the BBC, History Channel, Netflix, National Geographic, and TED.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
So another failure to define consciousness. Why do people worry about a word that nobody understands?
If you can't define something, you can't claim to understand it. Even if you think you have, you can't tell anyone, because they won't know what you are talking about. So before you (or anyone else) discuss consciousness, you must define it, at least for the purpose of the current discussion.
Define consciousness as the core concept in the universal terminal goal, using only the requirements implied by that phrase and some basic knowledge of computational processes. To summarize in a single phrase: consciousness can be defined as the capacity to pursue goals.
https://arxiv.org/abs/2203.14465

Generating step-by-step "chain-of-thought" rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question-answering. However, inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference. We propose a technique to iteratively leverage a small number of rationale examples and a large dataset without rationales, to bootstrap the ability to perform successively more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer; fine-tune on all the rationales that ultimately yielded correct answers; repeat. We show that STaR significantly improves performance on multiple datasets compared to a model fine-tuned to directly predict final answers, and performs comparably to fine-tuning a 30× larger state-of-the-art language model on CommonsenseQA. Thus, STaR lets a model improve itself by learning from its own generated reasoning.
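The loop in that abstract is concrete enough to sketch. Below is a minimal Python sketch of one STaR iteration under my own assumptions: the caller-supplied callables generate_rationale, answer_is_correct, and finetune are hypothetical stand-ins for the paper's actual sampling and fine-tuning machinery, not its code.

```python
def star_iteration(model, questions, answers, few_shot_prompt,
                   generate_rationale, answer_is_correct, finetune):
    """One outer iteration of the Self-Taught Reasoner (STaR) loop.

    Hypothetical caller-supplied helpers (placeholders for a real LLM stack):
      generate_rationale(model, prompt, q, hint=None) -> (rationale, predicted_answer)
      answer_is_correct(pred, gold) -> bool
      finetune(model, examples) -> fine-tuned model
    """
    kept = []  # (question, rationale, answer) triples whose answers came out correct
    for q, gold in zip(questions, answers):
        # 1. Generate a chain-of-thought rationale and answer, few-shot prompted.
        rationale, pred = generate_rationale(model, few_shot_prompt, q)
        if answer_is_correct(pred, gold):
            kept.append((q, rationale, gold))
        else:
            # 2. Retry, this time hinting the correct answer, and keep the
            #    rationale only if it now reaches that answer.
            rationale, pred = generate_rationale(model, few_shot_prompt, q, hint=gold)
            if answer_is_correct(pred, gold):
                kept.append((q, rationale, gold))
    # 3. Fine-tune on all rationales that ultimately yielded correct answers,
    #    then the outer loop repeats with the improved model.
    return finetune(model, kept)
```

Each repetition of star_iteration lets the model bootstrap harder reasoning from the rationales it could already get right, which is the "self-taught" part of the name.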
When machines are better, faster, cheaper, and safer, this creates an "economic agency paradox": a death spiral of deflation and collapsed consumer demand. The solution may be tokenomics.
In 2014, during an interview with Charlie Rose at the TED Conference, Google cofounder Larry Page made headlines with an unusual revelation. Page didn't point to traditional charitable foundations or heirs when asked about his thoughts on legacy and philanthropy. Instead, he floated the idea of leaving his wealth to Elon Musk, the tech visionary behind Tesla and SpaceX.

Page's reasoning? Musk's bold mission to colonize Mars and "back up humanity." In the interview, Rose referenced past comments Page had made about this idea, asking for clarification.
For Page, Musk's ambitious goals aligned with his belief that companies, when run effectively, could drive revolutionary change. He elaborated, saying, "Lots of companies don't succeed over time. They usually miss the future ... I try to focus on that: What is the future really going to be? And how do we create it?"
Musk's growing concerns about the dangers of artificial intelligence clashed with Page's more optimistic outlook. At Musk's birthday party that year, the two had a heated debate about AI. Page allegedly called Musk a "speciesist" for prioritizing human interests over other forms of intelligence.
In this profound keynote, Vector co-founder Geoffrey Hinton explores the philosophical implications of artificial intelligence and its potential to surpass human intelligence. Drawing from decades of expertise, Hinton shares his growing concerns about AI's existential risks while examining fundamental questions about consciousness, understanding, and the nature of intelligence itself.

Geoffrey Hinton is one of the founding fathers of deep learning and artificial neural networks. He was a Vice President and Engineering Fellow at Google until 2023 and is Professor Emeritus at the University of Toronto. In 2024 Hinton was awarded the Nobel Prize in Physics.

Key Topics Covered:
- The distinction between digital and analog computation in AI
- Understanding consciousness and subjective experience in AI systems
- Evolution of language models and their capabilities
- Existential risks and challenges of AI development

Timeline:
00:00 - Introduction
03:35 - Digital vs. Analog Computation
14:55 - Large Language Models and Understanding
27:15 - Super Intelligence and Control
34:15 - Consciousness and Subjective Experience
41:35 - Q&A Session