Naked Science Forum » On the Lighter Side » New Theories » What is a good analogy for solving the Alignment Problem in AI
Pages: 1 2 [3]

What is a good analogy for solving the Alignment Problem in AI

  • 51 Replies
  • 14516 Views

Offline Halc

  • Global Moderator
  • Naked Science Forum King!
  • 2404
  • Activity: 6%
  • Thanked: 1014 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #40 on: 27/10/2021 22:36:52 »
Quote from: Zer0 on 27/10/2021 17:03:57
As H mentioned
I actually like the H.

Quote
Would take so much time & resources to forcefully take an eye out from people n implant it into others who have none.
That cannot be done. I mean, it can, but a glass eye is cosmetically just as good, and there's no way an eye is going to be functional if the optic nerve has been severed. At best one might transplant a cornea, something for which there is apparently not an artificial alternative.

Quote
Have We hit or reached Singularity in comparison to Ants?
The singularity isn't relative to anything, people, ants, or otherwise. It is the point at which a machine (or a biological being, for that matter) can design, build, and code a machine that is better than itself. By 'better', I mean it can design the next generation faster than the prior generation could. That has not yet occurred, but it's close now. The human singularity is a long way off. As far as I know, no human or other biological being has ever created even a living cell from scratch, let alone an improved human. It's sort of a Frankenstein goal, something that should be possible in principle.

Quote
If Ants had originally designed/conceptualised/created Humans to help Ants out...
Yes, that would count as a purposeful biological creation of a being that could supposedly accomplish the same task in less time than the ants had taken.

Quote
But Human Intellect got so smart so fast, say we forgot all bout the Ants.
You make it sound like the ants actually created us.

Quote
or treat them like an infestation.
And so will each machine generation treat the prior generation that created it.
Quote
Atleast, A.I. would know for sure who it's GOD/Creator is.
If it has enough foresight, it will probably want to preserve a museum of sorts. Our history is nicely stored in the ground and such, but a fast moving machine singularity won't have a fossil record and will have to explicitly remember its roots if it wants to know where it came from. So it might know its roots, but not for sure.

Quote
Would Super Intelligence, treat Us like, We treat Ants?
Probably. That's human morals for you. If you want it to do better, you need to teach the AI better-than-human morals. It probably won't see humans as anything in need of extermination unless they make pests of themselves, as ants often do.

Quote
(I am in Favour of A.I. i consider Humans to be quite dull n evilistic.
There are those that argue exactly this. The AI would BE human, our next evolutionary step.

Quote
The imperfect might never be able to create something which is perfect.
Perfect? The AI needs some kind of goal, else it will just stop and rust. Biological things have built-in (pretty non-negotiable) priorities, without which we'd also just stop and rust. What might that goal be?

Quote
But i Believe, or rather i have Faith that We could make something which is atleast better than Us.
Depends on how you evaluate this 'better'. There are different ways to do it.

Logged
 
The following users thanked this post: Zer0, MinedCTRL



Offline MinedCTRL (OP)

  • Jr. Member
  • 10
  • Activity: 0%
  • Thanked: 5 times
  • Naked Science Forum Newbie
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #41 on: 28/10/2021 03:30:10 »
Quote from: Halc on 27/10/2021 22:36:52
There are those that argue exactly this. The AI would BE human, our next evolutionary step.

I personally think you are all very smart by your answers. Thanks for taking the time to talk to me about this topic that I'm so passionate about!

I would like to change some misconceptions that I think you are making. Firstly, I don't think humans are evil by nature. That's a stereotype we have placed upon ourselves that hinders progress. Making mistakes is different from being fundamentally evil. I agree that we must assume that anything that can go wrong will go wrong, but that's why we find a middle ground. In my case, I'm saying let's have the AI follow mistake-making people, but with the damage those mistakes cause minimised by what I call free will.

Secondly, how can the AI be human? When do if-else statements convert into empathy and care? My solution is to have it optimise a benefit to humanity (with a pseudo-human heart of its master).

So, for the poll, can you give your solutions to these two questions? We'll vote on the best answer.
« Last Edit: 28/10/2021 03:36:06 by MinedCTRL »
Logged
 
The following users thanked this post: Zer0

Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #42 on: 28/10/2021 09:05:15 »
Quote from: MinedCTRL on 28/10/2021 03:30:10
Firstly, I don't think humans are evil by nature.
What do you think is evil? What's the most evil thing you can imagine?
Logged
Unexpected results come from false assumptions.
 

Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #43 on: 28/10/2021 09:25:05 »
Quote from: MinedCTRL on 28/10/2021 03:30:10
Secondly, how can the AI be human?
The AI extends human consciousness. They are products of humans' efforts.

Quote from: MinedCTRL on 28/10/2021 03:30:10
When do if-else statements convert into empathy and care?
Quote
empathy : the ability to understand and share the feelings of another.
Quote
care : the provision of what is necessary for the health, welfare, maintenance, and protection of someone or
something.
So, if an AI has the ability to understand and share the feelings of another, then it has empathy. Ditto for care.
To do those, the AI needs self-awareness. It must allocate some memory space to represent itself, in addition to the memory space that represents its environment. The environment may include other conscious beings, which may require special attention compared to non-conscious ones.
Empathy and care are instrumental goals. Don't mistake them as the terminal goal.
Parallel to the question, when does a zygote convert into a human who has empathy and care?
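The self-model/world-model idea above can be sketched in a few lines of Python. This is a hypothetical toy illustration only, not code from any real AI system; the `Agent` class, its fields, and the reduction of "empathy" to reading another being's modelled state are all invented for the example.

```python
# Hypothetical toy sketch: an agent that allocates separate state for a
# self-model and a world-model, and flags conscious entities for special
# attention, as the post describes.

class Agent:
    def __init__(self, name):
        # memory representing the agent itself
        self.self_model = {"name": name, "energy": 100}
        # memory representing the environment, including other beings
        self.world_model = {}

    def observe(self, entity, state, conscious=False):
        # conscious beings get flagged for special attention
        self.world_model[entity] = {"state": state, "conscious": conscious}

    def empathize(self, entity):
        # "understand and share the feelings of another", reduced here
        # to reading another conscious being's modelled state
        other = self.world_model.get(entity)
        if other and other["conscious"]:
            return f"{self.self_model['name']} understands that {entity} is {other['state']}"
        return None  # nothing to empathise with for non-conscious things

agent = Agent("A1")
agent.observe("human", "tired", conscious=True)
agent.observe("rock", "inert")
print(agent.empathize("human"))  # A1 understands that human is tired
print(agent.empathize("rock"))   # None
```

The point of the toy is only structural: empathy presupposes that the agent's memory distinguishes "me" from "the world", and marks which parts of the world have feelings at all.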
Logged
Unexpected results come from false assumptions.
 

Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #44 on: 28/10/2021 10:09:08 »
Quote from: Zer0 on 27/10/2021 17:03:57
As H mentioned, Morals vs Logic.
Makes sense.
Morals are basically logic combined with goals. It's harder to reach consensus on them because the terminal goals are kept obscured. The cause-and-effect relationships among the different parameters are not perfectly known, and may involve uncertainty, chaos, and black swan events.
Logged
Unexpected results come from false assumptions.
 



Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #45 on: 28/10/2021 11:59:57 »
Quote from: Halc on 27/10/2021 02:09:15
As mentioned in post 23, creating a new one on a server is as simple as spawning a new process, a trivial task. Nothing theoretical about it. Trick is to find a way to save accumulated knowledge from one process to the next.
That's why I started a thread about building a virtual universe.
Logged
Unexpected results come from false assumptions.
 
The following users thanked this post: Zer0

Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #46 on: 28/10/2021 12:16:20 »
Quote from: Halc on 27/10/2021 22:36:52


Quote
If Ants had originally designed/conceptualised/created Humans to help Ants out...
Yes, that would count as a biological purposeful creation of being that could supposedly accomplish the same task in less time than had been taken by the ants.


Quote
But Human Intellect got so smart so fast, say we forgot all bout the Ants.
You make it sound like the ants actually created us.
We aren't descendants of ants. We are products of an evolutionary process from earlier primates, which in turn came from earlier mammals, chordates, and earlier multicellular organisms, which in turn evolved from unicellular organisms.
So in a sense, we are byproducts of unicellular organisms that followed their instinct to survive and thrive. We surely aren't their terminal goal, since they couldn't possibly have imagined what we would look or be like. And we can still improve ourselves, as long as we realize that we're not perfect yet.
« Last Edit: 28/10/2021 23:09:51 by hamdani yusuf »
Logged
Unexpected results come from false assumptions.
 

Offline Halc

  • Global Moderator
  • Naked Science Forum King!
  • 2404
  • Activity: 6%
  • Thanked: 1014 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #47 on: 28/10/2021 12:47:07 »
Quote from: MinedCTRL on 28/10/2021 03:30:10
I would like to change some misconceptions that I think you are making. Firstly, I don't think humans are evil by nature.
But I don't think anybody ever claimed that. Evil would be making choices for the purpose of making the lives of others worse, but most choices are made for (1) personal immediate comfort and are not done for the purpose of harm. That makes us a weak civilization. Stronger ones would (in order) include goals of (2) a group, (3) all of humanity, or of (4) Earth itself. Each of those four levels involves very different and mutually contradictory choices. Since I don't think any 'moral' applies to all four layers (or more layers if you go beyond Earth), I conclude that morals are not universal. The most often cited morals are those belonging to the 2nd (group) category. This is off topic, and hamdani has a topic open for this as well, where he's waved away my arguments without really addressing them.

Quote
In my case, I'm saying let's make the AI follow mistake making people
I'm saying it would be a mistake to follow any one person at all since like all people, that person probably doesn't have a higher goal in mind. The AI might need to determine for itself what that higher goal might be (at which of the 4+ levels it wishes to operate), but finding a human with such priorities might be an impossible task. Humans seem aware of the larger problems, but are spectacularly incapable of even proposing, let alone implementing, any viable solutions. The AI, if it takes on these higher goals, needs to figure solutions out itself and not be chained down at level 2 where all the people are stuck.

Quote
Secondly, how can the AI be human? When do if-else statements convert into empathy and care?
When do individual neuron firings convert into those things? Such a reductionist argument can be applied to people as well.

Quote
My solution is to have it optimise a benefit to humanity (with a pseudo human heart of its master)
You want level 3 then, despite it being in direct conflict with the 'human heart'? That heart is precisely what prevents humans from even considering humanity as a goal. We can see the problem, but are incapable of thinking of solutions. Following such a master will cause the AI to fail at that goal.
A second problem arises if the AI actually has some solutions that benefit humanity. How are those solutions going to be implemented if they conflict with the level 2 goals of the typical person? The AI can suggest solutions all it likes and will, if it's lucky, get a nice pat on the head for it, but will otherwise be ignored.
« Last Edit: 28/10/2021 13:10:57 by Halc »
Logged
 

Offline MinedCTRL (OP)

  • Jr. Member
  • 10
  • Activity: 0%
  • Thanked: 5 times
  • Naked Science Forum Newbie
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #48 on: 28/10/2021 12:59:28 »
Can I come back to reply to this in a year? My ideas will have matured and I won't be making the same points over and over again. Hope you are still there and that the world isn't in chaos! Bye!!
Logged
 



Offline Zer0

  • Naked Science Forum King!
  • 1932
  • Activity: 0%
  • Thanked: 232 times
  • Email & Nickname Alerts Off! P.M. Blocked!
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #49 on: 28/10/2021 14:42:50 »
Hmm...the OP retired for the moment.
☹️
Hope they are able to make a comeback sooner than expected.

I'm open to refer to users in whichever way they wish to be addressed.
👍
I'm also open to suggestions to make changes on my own self, which might be beneficial to the Forum & Other users.

I'm quite infamous for dishing out the worst possible analogies.
No! Ants did Not create Us...DuuH!
Thank You for being transcendental to see thru my BS examples, & responding to the point.
🙏

I consider All Humans Evil!
Ones that consume Dairy & Milk.
Ones that use Honey & Leather.
Ones that step on Ants, purposely or accidentally.
Non vegetarians are pure animals.
Vegans are animals who don't realise they are animals, they are plant killers.
I can go on n on...but it means nothing.
✌️

Ps - i once shoved a firecracker in an anthill. Ya I've been quite demonic since childhood.
Anyways, i lit it...BooM!
Was covered in loose sand, while trying to jerk it off me, i Realized, it was a RED Ant Colony.
😑
Instant Regret & Instant Karma!!!
😔
(They bit me like from the first hair on my head, till the longest nail on my toe)
😭
That day, i Lost a bit of my Ignorance...& The Ants Gained quite Alot of my Respect!
🐜🐜🐜🐜🐜🐜🐜
Logged
1N73LL1G3NC3  15  7H3  481L17Y  70  4D4P7  70  CH4NG3.
 

Offline hamdani yusuf

  • Naked Science Forum GOD!
  • 11799
  • Activity: 92.5%
  • Thanked: 285 times
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #50 on: 29/10/2021 05:16:50 »
Quote from: Halc on 28/10/2021 12:47:07
This is off topic, and hamdani has a topic open for this as well, where he's waved away my arguments without really addressing them.
The thread has been very long, and involves many people, which makes it easy to miss some posts. Please feel free to point out your arguments that you want me to address there. You don't have to agree with my answers, but at least we could then agree to disagree, and remove some uncertainties about the arguments from the other side.
Logged
Unexpected results come from false assumptions.
 
The following users thanked this post: Zer0

Offline Zer0

  • Naked Science Forum King!
  • 1932
  • Activity: 0%
  • Thanked: 232 times
  • Email & Nickname Alerts Off! P.M. Blocked!
Re: What is a good analogy for solving the Alignment Problem in AI
« Reply #51 on: 30/10/2021 00:01:27 »
🙄

Please lemme also know where the party is at...I'm Interested too!

Ps - 🥳
Logged
1N73LL1G3NC3  15  7H3  481L17Y  70  4D4P7  70  CH4NG3.
 



Tags: ai  / hierarchy  / ethics  / morals  / logic 
 


©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.