New Theories / Re: What is a good analogy for solving the Alignment Problem in AI
« on: 28/10/2021 03:30:10 »
There are those who argue exactly this: the AI would BE human, our next evolutionary step.
I personally think your answers are all very smart. Thanks for taking the time to talk to me about this topic that I'm so passionate about!
I would like to clear up some misconceptions I think you are making. Firstly, I don't think humans are evil by nature; that's a stereotype we have placed upon ourselves, and it hinders progress. Making mistakes is different from being fundamentally evil. I agree that we must assume anything that can go wrong will go wrong, but that's exactly why we find a middle ground. In my case, I'm saying: let the AI follow mistake-making people, but minimise the damage those mistakes cause through what I call free will.
Secondly, how can the AI be human? When do if-else statements convert into empathy and care? My solution is to have it optimise a benefit to humanity (with a pseudo-human heart of its master).
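To make that idea a bit more concrete, here is a minimal sketch of "optimise a benefit to humanity, but let free will cap the damage a mistake can do". Everything here is my own illustrative assumption (the function names, the scores, the cap value), not a real alignment system: the AI simply picks the highest-benefit action, and any action whose worst-case damage exceeds a hard cap is vetoed outright.

```python
# Illustrative sketch only: the AI maximises an estimated benefit to
# humanity, but a hard "free will" damage cap vetoes risky actions.
# All names and numbers below are made-up assumptions for illustration.

def choose_action(actions, benefit, worst_case_damage, damage_cap=1.0):
    """Return the highest-benefit action whose worst-case damage stays
    under the cap, or None if every action is vetoed."""
    safe = [a for a in actions if worst_case_damage(a) <= damage_cap]
    return max(safe, key=benefit, default=None)

# Toy example: three candidate actions with made-up scores.
benefits = {"help": 5.0, "risky_help": 9.0, "do_nothing": 0.0}
damages  = {"help": 0.2, "risky_help": 3.0, "do_nothing": 0.0}

best = choose_action(benefits, benefits.get, damages.get)
print(best)  # "risky_help" scores highest but is vetoed; "help" wins
```

The point of the sketch is the structure, not the numbers: the optimiser never gets to trade a huge benefit against catastrophic damage, because vetoed actions are removed before the maximisation step, which is one simple reading of "minimise the damage mistakes can make".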
So for the poll, can you give your solutions to these two questions? We'll vote on the best answer.