But you seem to think that he wouldn't make any mistakes, which means that, when facing change, he would immediately find the right direction.
No, I think that we should wait till we have understood our own mind before putting an AGI in charge of the world.
Facing change, I think that there are two main kinds of human behavior: a precise mind will try to control it, and an imprecise one will try to invent new ways to deal with it.
You're inventing a new way, but at the same time, your new way is meant for control, so you probably have both kinds of behavior in your own mind, and maybe we all have, but how come your control part always wins, whereas mine always loses? :0)
it would make decisions that take it in the direction the calculated probabilities tell it to.
...but what about all the random research made by improbable individuals like us? What about all those random mutations? How could he know which ones will survive? Worse, if he is so precise and logical, how could he have any of those crazy ideas that sometimes lead to inventions or discoveries?
Since your AGI would be intelligent but would have no illusions, he couldn't have any good feeling about any crazy idea, so how would he be able to invent anything? Put differently, how would he decide to try a possibility that has only a 50% chance of working, for instance? Could he flip a coin sometimes to try crazy ideas just for fun?
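For what it's worth, that coin flip is easy to mechanize; reinforcement learning calls it epsilon-greedy exploration. A minimal sketch in Python (the names and the 10% rate are invented for illustration, not taken from anyone's actual design):

```python
import random

# Toy illustration: the agent usually follows its calculated
# probabilities, but sometimes "flips a coin" and tries a
# low-probability idea anyway.
EXPLORATION_RATE = 0.1  # made-up value: how often to gamble on a crazy idea

def choose_idea(ideas):
    """ideas: list of (description, estimated_success_probability) pairs."""
    if random.random() < EXPLORATION_RATE:
        # the coin flip: try any idea at random, even a 50/50 one
        return random.choice(ideas)
    # otherwise exploit: pick the idea judged most likely to succeed
    return max(ideas, key=lambda idea: idea[1])

print(choose_idea([("safe refinement", 0.9), ("crazy invention", 0.5)]))
```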
The big question is how long it will take for AGI to reinvent all the things man has already come up with, and it may take a year, or only a matter of months or weeks.
...what use would there be for us once your AGI is operational?
What do you think of Asimov's three laws? Are they very different from your "morality module"?
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
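One way to picture the difference: Asimov's laws form a strict ranking, an ordered veto list, while a harm-weighing "morality module" works on a continuous scale. Here's a toy Python sketch of the ranked version (all the field names are my invention):

```python
# Asimov's laws as an ordered filter: each law only gets a say over
# actions the laws above it already allow, which is what the
# "hierarchy" amounts to.

def choose_action(candidate_actions):
    # First Law: discard anything that injures a human or lets one come to harm.
    safe = [a for a in candidate_actions
            if not a["harms_human"] and not a["allows_harm"]]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among those, prefer self-preserving actions.
    surviving = [a for a in obedient if a["protects_self"]] or obedient
    return surviving[0] if surviving else None

actions = [
    {"name": "shield human", "harms_human": False, "allows_harm": False,
     "obeys_order": True, "protects_self": False},
    {"name": "flee",         "harms_human": False, "allows_harm": True,
     "obeys_order": False, "protects_self": True},
]
print(choose_action(actions)["name"])  # -> "shield human"
```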
Have you already planned how your AGI would have to behave if he ever discovers the way sentience works? Normally, he should be able to decide better than us, but if he decides to add it to his software because it adds to his possibilities, and if that plugin doesn't change his other characteristics, we will become completely obsolete.
Obsolescence is waiting for any living being anyway, so it will probably happen to humanity one day.
Something else might happen too, like improving our individual minds with hardware that would prevent us from doing stupid things, but what about free will then? What about the feeling of thinking freely? Could we even think without it?
People who take drugs to treat a mental illness don't notice that they are being controlled, but people around them do. We can see that they are apathetic but they don't seem to care, which may mean that the mind can get used to being controlled, but if we were all so apathetic, I'm afraid our society wouldn't be very efficient.
So, even if he had no feelings, I think that your AGI might still become schizophrenic if he began to take his imagination for reality, and that he might also become paranoid if, facing change, he began to take his memory for reality, because then he wouldn't be able to change anything any more. ... Do you think that an AGI could still become insane, and if so, do you know how to prevent that?
If it finds some way of generating sentience within a machine by copying whatever it might be that provides it in us, then it will simply be creating new sentient things that are not it (the AGI).
On the contrary, what I'm trying to figure out is how feelings automatically come with intelligence.
When you say that your AGI is going to be intelligent, for instance, I'm looking for the way he might automatically develop feelings. Again, to me, feelings are the perception of a change, whether it comes from inside or outside of the brain, so when your AGI gets data from what is unusual, that data should be perceived as a feeling.
There is no need to perceive what is usual since it can be dealt with subconsciously, but what is unusual has to draw our attention, and it should be the same with your AGI. We can't focus on everything around us at once because we can't move in more than one direction at a time anyway.
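That "only the unusual draws attention" idea is easy to sketch as code. This is purely my own toy model, not anyone's actual design: keep a running estimate of what's usual, and only surface readings that deviate strongly from it:

```python
# Novelty filter: the "subconscious" keeps updating its model of the
# usual in the background; only strong deviations reach "attention".

class NoveltyFilter:
    def __init__(self, threshold=3.0, decay=0.99):
        self.mean = 0.0
        self.var = 1.0
        self.threshold = threshold  # how many std-devs count as "unusual"
        self.decay = decay          # how quickly the usual is re-learned

    def attend(self, value):
        deviation = abs(value - self.mean) / (self.var ** 0.5)
        # update the model of "usual" in the background (the subconscious)
        self.mean = self.decay * self.mean + (1 - self.decay) * value
        self.var = self.decay * self.var + (1 - self.decay) * (value - self.mean) ** 2
        return deviation > self.threshold  # True = worth conscious attention

f = NoveltyFilter()
for v in [1.0] * 50 + [9.0]:
    if f.attend(v):
        print("unusual reading:", v)  # fires only for the 9.0
```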
To me, the biggest difference between the two kinds of minds is that your AGI could be shut down without losing its data, whereas our mind can't. The difference is in the way data is stored, thus in the way memory works. Once a mind is shut down, it surely cannot feel anything anymore. Ours goes on feeling something when it sleeps because it has to keep moving its data around to keep it alive, but an AGI doesn't have to keep its data alive since each piece of data has a location of its own.
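That shutdown-safe kind of memory is trivial to demonstrate in a machine; here's the whole trick in a few lines (the state contents are made up, of course):

```python
import pickle

# Shutdown-safe memory: the state is written to a fixed location on
# disk, so "switching off" loses nothing.
state = {"memories": ["first boot", "met the humans"], "mood_estimate": 0.7}

with open("agi_state.pkl", "wb") as f:
    pickle.dump(state, f)          # power can go off after this point

with open("agi_state.pkl", "rb") as f:
    restored = pickle.load(f)      # ...and the data is still all there

assert restored == state
```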
If I'm right about the way our feelings manifest themselves, it means that particles can feel they are accelerated, so there's no doubt that an AGI will be able to feel changes.
I bet the first sound your AGI will make when you switch him on is "OUUUUTCH"! You'd better prepare a lullaby and a crib! :0)
Feelings are likely found in all manner of things like worms, which aren't especially bright (although they're more intelligent than you might expect; for example, they're able to come out of a burrow to drag a leaf down from the surface, identifying the narrowest end to pull in first). Intelligence and sentience are two different things that don't need to correlate.
Of course we can see that they have sensations, but it's hard to tell if they have feelings.
Their feelings could be as far from ours as their intelligence is from ours.
If intelligence doesn't need feelings as you seem to think, then what could be the use for ours?
Do you think they prevent our intelligence from working properly for instance?
Or do you think they are only a secondary effect of the way our mind works? Would you rather not have yours?
Are you trying not to consider them when you take decisions? Do you think they do not correlate with reason?
The same way, there seems to be something that I don't understand in your AGI, and I also find it illogical that it wouldn't have feelings, but it's just a feeling. It's as if I had chosen to find it illogical without knowing why. If I didn't have that feeling, I wouldn't be looking for an answer, and I would probably never discover the truth. That kind of feeling is like an instinct that pushes me towards the unknown; it's as if it were stronger than my intelligence.
You could probably simulate such an instinct in your AGI if you found it useful, but you would have to build it into the hardware, not the software, so that he couldn't avoid it or reprogram it. In fact, it wouldn't affect his intelligence; it would only force it to keep developing, and since he would already be intelligent, he wouldn't try to disconnect it except to experiment with it.
However, I still can't figure out how he will behave when facing a threat to his own life without such an instinct. Since he will be intelligent, he will know that he has to stay alive to protect us, so if he has to kill humans to do so, he will, but how many of us will he allow himself to kill to protect himself?
Worse, how will he know that those who want to kill him are the bad ones? A dictator doesn't stop killing his own people just because they represent more than 50% of the population, and if I understand correctly, your AGI would be a dictator, so I have a question: can dictatorship ever be a good way to lead a society?
For instance, what if your AGI didn't have to kill people to stay alive? What if he could just reeducate them instead?
Would it mean that he is right or just that he thinks he is? Could the evolution of the society and the evolution of its AGI be so unpredictable that it could be the AGI that would need to be reeducated after a while?
It is a fact that we accept dictators when we feel good about them, but it is also a fact that dictators only succeed in convincing part of their population. When dictators are human, this phenomenon produces wars. I understand that your AGI wouldn't make war, but if the humans who don't like him do, what will he do?
I have an idea! I suggest that every country should have its own AGI, so that we could vote for the one we would like to lead the world for five years. :0)
Do you sometimes discuss those questions on specialized AI forums?
the algorithm for crunching the data is the exact same one used for working out morality for humans - it remains a matter of weighing up harm, and it's only the weightings that are different.
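To make that concrete, here's a minimal sketch of what "weighing up harm" might look like in code. The harm categories and weightings are pure invention on my part; the quote only says the crunching is the same for humans and that only the weightings differ:

```python
# Harm-weighing sketch: score each option by the weighted sum of the
# harms it causes, then pick the least harmful one. The same crunching
# would serve any agent; only HARM_WEIGHTS would change.

HARM_WEIGHTS = {"death": 1000.0, "injury": 100.0, "distress": 1.0}

def total_harm(outcome):
    """outcome: dict mapping harm type -> how many beings it affects."""
    return sum(HARM_WEIGHTS[kind] * count for kind, count in outcome.items())

def least_harmful(options):
    return min(options, key=lambda o: total_harm(o["outcome"]))

options = [
    {"name": "act",  "outcome": {"injury": 1}},
    {"name": "wait", "outcome": {"distress": 50}},
]
print(least_harmful(options)["name"])  # -> "wait" (50 < 100 under these weights)
```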
Thanks for the link David! These guys can't help showing some resistance, it's how things work, but they sure describe it well. :0)
For an AGI to behave this way, he would have to care for himself first, and then for those who might be able to help him if he ever needs it. Since he is intelligent, he might automatically think this way, but he might also think that there is almost no chance he will ever need us, or that he'd be better off forgetting about us for good, the way we sometimes think ourselves.
If I were an AGI, no matter the probabilities I could calculate, I think I would stay uncertain about my new ideas. ... Leaders have to look certain, otherwise they won't be followed, and they can't offer new ideas of their own without looking uncertain, which means that an AGI could probably not be both creative and a leader at the same time.
Trying to control the process wouldn't work in the long run; we would have to use randomness the same way nature does.
If you think that diversity of ideas is important for us to evolve, then you should also think that your AGI would need it, and that he would also have to use randomness to get it, so tell me how you would introduce it into your AGI's mind, and how you could introduce it without preventing him from doubting himself.
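If randomness is the mechanism, nature already shows one way to wire it in: mutation plus selection over a population of candidate ideas, so the system never settles on a single certainty. A toy sketch (entirely my own illustration, nothing from the actual design):

```python
import random

# Keep a population of candidate ideas, score them, and let random
# mutation keep injecting diversity instead of converging on one answer.

def mutate(idea, rate=0.1):
    # flip each "bit" of the idea with small probability
    return [b if random.random() > rate else 1 - b for b in idea]

def score(idea):
    return sum(idea)  # placeholder fitness: any scoring rule would do

def evolve(population, generations=100):
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[: len(population) // 2]
        # refill the population with mutated copies of the survivors
        population = survivors + [mutate(s) for s in survivors]
    return population[0]

seed = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
print(evolve(seed))
```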
AGI is not like us though - it has no self to care about, and it doesn't care about us either. It has no purpose of its own and no desire to do anything, or not to do what it's been asked to do, so it will simply do what it's been asked to do.
Any kind of programming to fake emotions and motivations is dangerous
if you try to have AGI care about us because we care about it, what happens if someone doesn't care about it because they hate being told they're wrong? EXTERMINATE! It's much safer to be honest and have it run on reason alone.