What "limit"? Are they claiming >100% efficiency?
If you're such a highly-paid researcher at a well-funded AI startup, you can afford to retire in 3-5 years. The fact is, most people these days can't even save enough to retire by 65, never mind in just a few years from now. We are going to need some sort of bridge to a post-labour society where we don't end up with tens of millions of starving people who are economically worthless because of AGI/ASI.
So the "limit" only applies to silicon. Fair enough.
Most ordinary people don't even consider reaching a Type I civilization to be a goal within their lifetime.
Imagine a world where humanity masters every planetary resource available to it: our first step on the famous Kardashev scale of technological advancement. How distant is that step? Will we ever become a true Type-1 civilization, and how can we get there?
In this video, we'll discuss the tell-tale signs that appear before a financial crisis hits, and I'll show you how to spot them and prepare for the inevitable.
It's fair to say that few people in tech are positioned to have a bigger impact on the future than Sam Altman. As the CEO of OpenAI, Altman and his team have overseen monumental leaps forward in machine learning, generative AI and, most recently, LLMs that can reason at PhD levels. And this is just the beginning. In his latest essay Altman predicted that ASI (Artificial Super Intelligence) is just a few thousand days away. So how did we get to this point? In this episode of our rebooted series "How To Build The Future," YC President and CEO Garry Tan sits down with Altman to talk about the origins of OpenAI, what's next for the company, and what advice he has for founders navigating this massive platform shift.

Apply to Y Combinator: https://ycombinator.com/apply

Chapters (Powered by https://bit.ly/chapterme-yc)
0:00 Coming up
0:43 Intro: Is this the best time to start a tech company?
6:27 How Sam got into YC
10:53 The early days of YC Research
12:49 Getting the first OpenAI team together
17:13 Why scaling was considered heretical
21:42 Conviction can be powerful
26:15 Commercializing GPT-4
28:53 What drew Sam to create Loopt
30:24 Learning from platform shifts
33:15 Tech incumbents are unaware of what is happening with AI
34:08 Sam's recommended startup path
36:56 Reflecting on the OpenAI drama
39:58 What startups are building with current models
44:16 Outro: Advice for early founders + final thoughts
Neuroscientist Dr. Heather Berlin explains why AI will never be conscious in its current form. AI will eventually be able to DO much of what we can do, but it will never BE what we can be. But as we merge with AI via neural implants, when do we stop being human?

Dr. Heather Berlin is a neuroscientist, a clinical psychologist, and an associate clinical professor of psychiatry and neuroscience at the Icahn School of Medicine at Mount Sinai in NY. She explores the neural basis of impulsive and compulsive psychiatric and neurological disorders with the goal of developing novel treatments. She is also interested in the brain basis of consciousness, dynamic unconscious processes, and creativity.

Heather hosts the Nova series "Your Brain" on PBS, where she explores the latest research on the neural basis of consciousness. She previously hosted "Science Goes to the Movies" on PBS and Discovery Channel's "Superhuman Showdown." She makes regular appearances on "StarTalk" with Neil deGrasse Tyson, and has appeared on the BBC, History Channel, Netflix, National Geographic, and TED.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
So another failure to define consciousness. Why do people worry about a word that nobody understands?
If you can't define something, you can't claim to understand it. Even if you think you have, you can't tell anyone, because they won't know what you are talking about. So before you (or anyone else) discusses consciousness, you must define it, at least for the purpose of the current discussion.
Defining consciousness as the core concept in the universal terminal goal, using only the requirements from that phrase and some basic knowledge of computational processes. In summary, in a single phrase: consciousness can be defined as the capacity to pursue goals.
https://arxiv.org/abs/2203.14465

Generating step-by-step "chain-of-thought" rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question-answering. However, inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference. We propose a technique to iteratively leverage a small number of rationale examples and a large dataset without rationales, to bootstrap the ability to perform successively more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer; fine-tune on all the rationales that ultimately yielded correct answers; repeat. We show that STaR significantly improves performance on multiple datasets compared to a model fine-tuned to directly predict final answers, and performs comparably to fine-tuning a 30× larger state-of-the-art language model on CommonsenseQA. Thus, STaR lets a model improve itself by learning from its own generated reasoning.
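For anyone skimming, the loop in that abstract boils down to something like this rough Python sketch. The `generate` and `fine_tune` functions are placeholders I'm assuming for illustration, not the authors' actual code; see the paper for the real implementation details.

```python
# Rough sketch of the STaR loop described in the abstract above.
# `generate` and `fine_tune` are assumed placeholders, not the authors' code.

from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str  # gold final answer, used only for filtering and for hints

def generate(model, prompt: str) -> tuple[str, str]:
    """Placeholder: sample a (rationale, final_answer) pair from the model."""
    raise NotImplementedError

def fine_tune(base_model, data: list[tuple[str, str, str]]):
    """Placeholder: fine-tune on (question, rationale, answer) triples."""
    raise NotImplementedError

def star(base_model, dataset: list[Example], few_shot: str, iterations: int = 5):
    model = base_model
    for _ in range(iterations):
        keep = []
        for ex in dataset:
            # 1. Try to answer with a generated rationale, few-shot prompted.
            rationale, answer = generate(model, few_shot + ex.question)
            if answer != ex.answer:
                # 2. "Rationalization": retry with the correct answer as a hint.
                hint = f"{ex.question}\n(The correct answer is {ex.answer}.)"
                rationale, answer = generate(model, few_shot + hint)
            # 3. Keep only rationales that led to the correct final answer.
            if answer == ex.answer:
                keep.append((ex.question, rationale, ex.answer))
        # 4. Fine-tune on the kept rationales (the paper restarts from the
        #    original pre-trained model each iteration), then repeat.
        model = fine_tune(base_model, keep)
    return model
```

The key trick is step 2: when the model can't solve a question on its own, it gets the correct answer as a hint and only has to produce a plausible rationale, which then becomes training data for the next round.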
When machines are better, faster, cheaper, and safer, this creates an "economic agency paradox": a death spiral of deflation and collapsed consumer demand. The solution may be tokenomics.
In 2014, during an interview with Charlie Rose at the TED Conference, Google cofounder Larry Page made headlines with an unusual revelation. Page didn't point to traditional charitable foundations or heirs when asked about his thoughts on legacy and philanthropy. Instead, he floated the idea of leaving his wealth to Elon Musk, the tech visionary behind Tesla and SpaceX.

Page's reasoning? Musk's bold mission to colonize Mars and "back up humanity." In the interview, Rose referenced past comments Page had made about this idea, asking for clarification.
For Page, Musk's ambitious goals aligned with his belief that companies, when run effectively, could drive revolutionary change. He elaborated, saying, "Lots of companies don't succeed over time. They usually miss the future ... I try to focus on that: What is the future really going to be? And how do we create it?"
Musk's growing concerns about the dangers of artificial intelligence clashed with Page's more optimistic outlook. At Musk's birthday party that year, the two had a heated debate about AI. Page allegedly called Musk a "speciesist" for prioritizing human interests over other forms of intelligence.
In this profound keynote, Vector co-founder Geoffrey Hinton explores the philosophical implications of artificial intelligence and its potential to surpass human intelligence. Drawing from decades of expertise, Hinton shares his growing concerns about AI's existential risks while examining fundamental questions about consciousness, understanding, and the nature of intelligence itself.

Geoffrey Hinton is one of the founding fathers of deep learning and artificial neural networks. He was a Vice President and Engineering Fellow at Google until 2023 and is Professor Emeritus at the University of Toronto. In 2024 Hinton was awarded the Nobel Prize in Physics.

Key Topics Covered:
- The distinction between digital and analog computation in AI
- Understanding consciousness and subjective experience in AI systems
- Evolution of language models and their capabilities
- Existential risks and challenges of AI development

Timeline:
00:00 - Introduction
03:35 - Digital vs. Analog Computation
14:55 - Large Language Models and Understanding
27:15 - Super Intelligence and Control
34:15 - Consciousness and Subjective Experience
41:35 - Q&A Session
The government is unironically the best candidate for job automation...
Relax guys, the Chinese will make the same thing for less... there's no need for all the hype about the price... relax, bro...
Take into consideration that you'd be replacing somebody with a PhD-level background: you no longer have to pay their insurance, you don't have to pay double taxes, you don't have to worry about sick days or no-shows, and the AI is a 100% tax write-off as a business expense. So $2,000 a month × 12 is $24,000 a year, and I guarantee you there is no human with a PhD working for that price. And then the company gets another tax write-off on top. If that's not corporate manipulation, I don't know what is.
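Rough numbers behind that claim. The AI price comes from the post above ($2,000/month); the PhD salary and overhead multiplier are hypothetical placeholders of mine, purely for illustration:

```python
# Rough cost comparison for the argument above. The AI price is from the post;
# the PhD salary and overhead multiplier are assumed, illustrative values only.

ai_monthly = 2_000
ai_yearly = ai_monthly * 12            # $24,000 per year, as stated above

phd_salary = 150_000                   # assumed PhD-level base salary
overhead = 1.3                         # assumed multiplier for benefits/taxes
phd_yearly = phd_salary * overhead

print(f"AI agent:  ${ai_yearly:,} / year")
print(f"PhD hire:  ${phd_yearly:,.0f} / year (assumed)")
print(f"Ratio:     {phd_yearly / ai_yearly:.1f}x")
```

Under those assumed figures the human costs roughly 8x more per year, which is the gap the post is pointing at.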
Generative AI has already changed how several white-collar jobs operate: tools like Cursor are redefining coding, and LLMs are being used in everything from consulting to law. But things might be about to get stranger still.

Google CEO Sundar Pichai has said that in the not too distant future, intelligence will become abundant like air, and will be essentially free for all. "We're already seeing costs (of intelligence) coming down," he said at a Carnegie Mellon University event. "So if you look at the cost per token of our flagship models over the last 18 months, what used to cost $4 per million tokens now costs us just 13 cents. This trend is going to continue. You're going to have intelligence too, just like air, too cheap to meter," he added.

Pichai shared a graph showing the costs of Google models over time. In February 2023, Google's state-of-the-art PaLM model cost $4 to generate a million tokens. In contrast, its latest model, which is exponentially more capable than PaLM, costs just $0.13 to generate a million tokens. Pichai expects this increase in quality coupled with a fall in prices to continue, which would mean that "intelligence", as provided by modern LLMs, would be essentially free.
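Quick sanity check on those numbers. This is my own back-of-the-envelope extrapolation, assuming a smooth exponential decline from $4.00 to $0.13 per million tokens over the quoted 18 months, which is not something Pichai stated:

```python
# Back-of-the-envelope check of the figures quoted above: $4.00 per million
# tokens (Feb 2023) falling to $0.13 per million tokens ~18 months later.
# The per-month decline is an extrapolation under an assumed smooth exponential.

old_cost, new_cost, months = 4.00, 0.13, 18

reduction = old_cost / new_cost                          # ~31x cheaper overall
monthly_factor = (new_cost / old_cost) ** (1 / months)   # cost multiplier per month

print(f"Overall reduction:       {reduction:.0f}x")
print(f"Implied monthly decline: {(1 - monthly_factor) * 100:.0f}% per month")
```

That works out to roughly a 31x drop, or about 17% cheaper every month if the decline were steady, which is the kind of curve behind the "too cheap to meter" framing.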
This isn't the same as past technological advancements. When there is something that can replace humans in EVERY capacity and do a better job, there won't be "new jobs" emerging for humans.
What "new jobs" will be created? Someone enlighten me. From my perspective a CEO will be able to run an international conglomerate all from their desktop, just like playing a video game. Companies will become 1-man shows. The AI will do all the hard stuff and the human will reap the rewards. Capitalism.There will be a higher demand for AI engineers and data engineers. So there will be jobs but high skill jobs which the vast majority of the working class won?t be able to get unfortunately.And those won't last forever either. That's a transition job. Eventually AI will take that over too.
The rapid development of AI represents a unique technological revolution, fueled by significant financial investments and groundbreaking innovations. Unlike past technological changes, AI's growth is outpacing our understanding and regulatory capabilities, with the potential to transform industries, economies, and human existence in unpredictable ways. In this talk, Peachy unveils the importance of preserving human nature in a tech-centric world.

Peachy capped 25 years in marketing communications as Ogilvy & Mather Philippines' Chief Executive. Through the years, she assumed regional and global creative director roles, then shifted gears and became Chief Strategist, driving cross-discipline integration. As Managing Director, she oversaw the digital transformation of the Advertising discipline. She served as Senior Industry Fellow at the De La Salle University College of St. Benilde and Professorial Lecturer at the University of the Philippines. Hyper Island called, and she became its Master's Programme Director. She still consults for the Philippine government and private corporations, and speaks about Human Centered Design, Transmedia Storytelling and the Ethics of Customer Experience. As Managing Director of Hyper Island APAC, she drives their mission of building people for the unknown.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community.