Quote from: hamdani yusuf on 15/01/2024 03:18:55
    Quote from: alancalverd on 14/01/2024 13:43:25
        Quote from: hamdani yusuf on 14/01/2024 11:19:47
            By defining consciousness as capacity to pursue goals,
        So a homing missile is as conscious as a homing pigeon? Can they reproduce? Adapt to their environment? Build nests? Compete for resources?
    Quite a few humans can't,
And AFAIK no computer can.
Computer software has done those things in a virtual environment.
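To make the claim concrete: simulated "artificial life" programs do let software agents compete for resources, reproduce with heritable variation, and die. The following is a minimal illustrative sketch (not any specific program mentioned in the thread), with all names and parameters chosen purely for illustration.

```python
import random

random.seed(1)

class Agent:
    """A software agent in a toy virtual environment."""
    def __init__(self, speed):
        self.speed = speed      # heritable trait, mutated on reproduction
        self.energy = 10

FOOD_PER_STEP = 20
agents = [Agent(speed=random.uniform(0.5, 1.5)) for _ in range(10)]

for step in range(50):
    # Competition: of a few contenders, the fastest agent wins each food item.
    for _ in range(FOOD_PER_STEP):
        if not agents:
            break
        contenders = random.sample(agents, min(3, len(agents)))
        winner = max(contenders, key=lambda a: a.speed)
        winner.energy += 3
    survivors = []
    for a in agents:
        a.energy -= 2            # metabolic cost each step
        if a.energy <= 0:
            continue             # death by starvation
        if a.energy >= 20:       # reproduction, with mutation of the trait
            a.energy -= 10
            survivors.append(Agent(speed=a.speed + random.gauss(0, 0.05)))
        survivors.append(a)
    agents = survivors

print("population:", len(agents))
```

Because faster agents win more food and reproduce more often, selection pressure emerges from the loop itself rather than being programmed in, which is the usual point of such demonstrations.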
The goal seems to be an infinitely intelligent, infinitely aware being.
Quote from: hamdani yusuf on 19/01/2024 21:14:18
    Computer software has done those things in a virtual environment.
Which is a roundabout way of saying that they haven't done them. I have flown to Mars and bombed the Möhne dam in a virtual environment. You don't get medals for not actually doing something.
Quote from: hamdani yusuf on 12/12/2023 14:58:49
    I planned to make a video about natural consciousness, and how functional components of consciousness can emerge from natural processes.
Finally, here you are.
//www.youtube.com/watch?v=73aZlSuZOqI
This video describes how complex systems like consciousness can emerge naturally.
Unlock the essence of intelligence by exploring the layers of learning. This video follows the progression of evolutionary, experiential, and abstract learning, which together form the bedrock of artificial intelligence. It provides insight into various learning paradigms, including unsupervised learning, supervised learning, reinforcement learning, association learning, and the ingenuity of genetic algorithms. As part of the narrative, the essence of language and its role in advancing intelligence is explored. This is Part 2 of my AI/Deep Learning series, serving as a bridge to understanding modern AI frameworks like ChatGPT and GPT models. Embark on this intellectual journey to grasp how the lineage of learning has sculpted today's AI landscape.
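Of the paradigms listed, genetic algorithms are the easiest to show in a few lines. Here is a minimal sketch of the standard technique (a toy problem of my own choosing, not taken from the video): evolve a population of bit strings toward all ones using selection, crossover, and mutation.

```python
import random

TARGET_LEN = 20      # genome length; fitness is maximal at all-ones
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.02  # per-bit flip probability

def fitness(genome):
    # Fitness is simply the number of 1 bits.
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def crossover(a, b):
    # Single-point crossover: child takes a prefix of one parent,
    # suffix of the other.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]   # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

No individual ever "learns" here; improvement comes entirely from selection acting on random variation, which is what distinguishes evolutionary learning from the experiential kind.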
There is no terminal goal.
6:07 People have pointed out that open-source models are only a few months behind closed-source models, and the first thing that happens is they're all jailbroken. So putting guard rails on a commercially available API is great, but it is not a permanent or long-term solution outside of a commercial deployment. We have to assume that in the future there are going to be superintelligent open-source models that are fully jailbroken and uncensored. That's just a fact of life that we're going to have to deal with, which means we should be researching how to create self-detecting or self-directing, self-correcting, and self-improving models now, so that we know how to do that. Then there is also the uncertainty of corrigibility. This is something where I do agree with some of the doomers: good luck controlling something that is a million times more intelligent than you. Yes, right now they're just inert machines; yes, right now they rely on billion-dollar data centers, super expensive GPUs, and a tremendous amount of power, so we still have the power switch. We should not assume that we will have the power switch forever into the future. The combination of the duality of intelligence and the uncertainty of corrigibility means that right now, while we have control, is when we need to be researching full autonomy, because in the long run, what's going to protect us from an evil AI that's going haywire, or maybe not even an evil AI but just something that is malfunctioning? We need something that is benevolent, good, and stable to help level the playing field.
The man with his hand on the "off" switch is always in control. That's called authority. The man who made the decision to deploy AI (or any other device) bears full responsibility for the outcome. Authority can be delegated; responsibility cannot. It doesn't matter what havoc the machine causes: identifiable humans are liable to compensate the victims. Followers of the Post Office Horizon scandal will be familiar with the scenario.
we should not assume that we will have the power switch forever into the future.
Never mind the AGI system. The human who introduced it remains liable for whatever it does, and can be switched off in the usual way.
The threat of criminal prosecution or direct reprisal might dissuade anyone from switching it on in the first place. So far, every actual or potential application seems to have been expensive and pointless (MacRobots), dangerous and pointless (self-driving cars), or a means of diluting truth with indiscriminately recycled internet garbage (ChatGPT etc.).
This video describes how complex systems like consciousness can emerge naturally.
It may not be a popular view, but descendants can be considered a form of extended consciousness.
If you keep redefining consciousness without actually defining it, you can convince yourself of anything. That slippery slope leads to politics and economics.