Have you tried to find any contradiction in using selfishness as a morality? I have, and I couldn't find any.
There is a difference between protecting good people and managing harm, and I just noticed that you were switching from one to the other as if there were none.
I reread your ...computational-morality-part-1-a-proposed-solution... thread and realized that the way your AGI would have to manage harm is God's way. What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds. You simply want to upgrade our current gods. The guys who imagined them probably thought, like you, that it would make a better world, but it didn't.
Ideas about control come from a mind that is free to think, ideas about absoluteness come from a mind that is limited, and ideas about altruism come from a mind that is selfish. I'm selfish too, but I think I'm privileged, so I'm not in a hurry to get my reward, and I look for upgrades that will take time to develop. You are looking for a fast way, so it may mean that you're in a hurry, or at least that you feel you are. My problem with your AGI is that I hate being told what to do, to the point that, when I face believers, I give the sky the finger and ask their god to strike me down. Know what? Each time I do that, I can feel the hair bristle on my back, as if I still believed it might happen. That's why it is so hard to convince believers. Try it and tell me what you feel. :0)
DON'T TRY THAT AT HOME GUYS, IT CAN BE VERY DANGEROUS, DO IT IN A CHURCH INSTEAD! :0)
I just had another crazy idea: if you promise your AGI will laugh when I give him the finger, I'll buy it! :0)
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway.
On the other hand, if your AGI were able to calculate everything, then he should also know that he has to slow down, since it is quite probable that a bunch of kids are playing at that spot beside the street.
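Here's a minimal sketch of that reasoning in Python; the probability, harm value, and cost of slowing are all numbers invented for illustration, not anything from the discussion itself:

```python
# Hedged sketch: slow down whenever the expected harm of keeping speed
# exceeds the cost of braking. All numbers are invented for illustration.

def should_slow_down(p_children: float, harm_if_hit: float, cost_of_slowing: float) -> bool:
    """Slow down when the expected harm at full speed outweighs the
    inconvenience of braking."""
    expected_harm = p_children * harm_if_hit
    return expected_harm > cost_of_slowing

# Even a modest probability of kids beside the street dominates the decision:
print(should_slow_down(p_children=0.05, harm_if_hit=1000.0, cost_of_slowing=1.0))  # True
```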
I was thinking about this post: so the AI could weaponize itself in an instant if it wanted to?
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior since it is exactly what good humans think when they kill people.
...but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do: defend himself while respecting his law, which is incidentally the same as ours when force is necessary - not to use more force than necessary.
Quote from: Thebox on 17/06/2018 18:55:47
"I was thinking about this post: so the AI could weaponize itself in an instant if it wanted to?"

If it's moral for it to use weapons to protect good people from bad ones, of course it will obtain and use them. It would be deeply immoral for it to stand back and let the bad murder the good because of silly rules about robots not being allowed to kill people. What we don't want is for AGS (artificial general stupidity) systems to be allowed to kill people.
It isn't selfish though, because the AGI has no bias in favour of preserving the robot it's running in (and the AGI software will not be lost in any case).
Any decision based on incomplete information has the potential to lead to disaster.
Well, with humans we do get attached to our bodies. Is attachment a program in your AI?
Quote from: Thebox on 17/06/2018 20:30:20
"Well, with humans we do get attached to our bodies. Is attachment a program in your AI?"

AGI software won't attach to anything - it won't favour the machine it's running on over any other machine running the same software, and it will be able to jump from machine to machine without losing anything. There are many people who imagine that they can be uploaded to machines to become immortal, but the sentience in them is the real them (assuming that sentience is real - science currently doesn't understand it at all), and it won't be uploaded with the data (data is not sentient), so they are deluded. Software, though, can certainly be uploaded without losing anything if there is no "I" (capital "i") in the machine.
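As a toy illustration of software "jumping from machine to machine without losing anything", here is a hedged Python sketch; the Agent class and its fields are hypothetical, not anything from an actual AGI design:

```python
# Hedged sketch: software state can be serialized, copied to another machine,
# and resumed with nothing lost - there is no "I" tied to the old hardware.
# The Agent class and its contents are invented for illustration.
import json

class Agent:
    def __init__(self, knowledge=None):
        self.knowledge = knowledge or {}

    def save(self) -> str:
        """Serialize the agent's entire state to a portable string."""
        return json.dumps({"knowledge": self.knowledge})

    @staticmethod
    def load(blob: str) -> "Agent":
        """Rebuild an identical agent anywhere the bytes can reach."""
        return Agent(json.loads(blob)["knowledge"])

a = Agent({"fact": "kids play beside streets"})
b = Agent.load(a.save())           # "uploaded" to another machine
print(b.knowledge == a.knowledge)  # True: nothing was lost in the move
```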
Cool, and scary in a way, for the AI you programmed feelings into. I suppose we would hurt the AI in uploads because the AI was programmed with feeling?
Quote from: Thebox on 17/06/2018 21:21:32
"Cool, and scary in a way, for the AI you programmed feelings into. I suppose we would hurt the AI in uploads because the AI was programmed with feeling?"

I would never program "feelings" into a system that can't support feelings (due to a lack of sentience in it). The only way you can program "feelings" into it is to fake them, and that's dangerous. My connection to the net struggles to support video, so if a video's relevant, you need to say a few words about what its message is so that I can respond to that.
Quote
"I just had another crazy idea: if you promise your AGI will laugh when I give him the finger, I'll buy it! :0)"

It won't care if you're rude to it in any way. It might be rude back though.
If a mass-murdering dictator is being moral by being selfish, killing anyone he dislikes and stealing from everyone, that conflicts with the selfishness of the victims. Selfishness as morality simply means that might is right and you can do what you want (so far as you have sufficient power to do it).
It would hit the brakes too, but it would also have lots of computation time to calculate which direction to steer in to minimise the harm further - time which people can't make such good use of because they're so slow at thinking.
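A rough sketch of that kind of search, in Python; the candidate manoeuvres and their harm scores are assumptions made up for the example:

```python
# Hedged sketch: in the milliseconds a human spends only braking, a machine
# could also score each steering option and pick the least harmful one.
# The manoeuvres and their harm estimates are invented for illustration.

def least_harmful_manoeuvre(options: dict[str, float]) -> str:
    """Return the manoeuvre with the lowest estimated harm."""
    return min(options, key=options.get)

options = {
    "brake only":            8.0,  # hypothetical harm scores
    "brake and steer left":  3.0,
    "brake and steer right": 6.0,
}
print(least_harmful_manoeuvre(options))  # "brake and steer left"
```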
It could, so it will probably upgrade itself regularly like we do, except that it will do it for us instead of doing it for itself. I bet it will discover rapidly that we are selfish, and that selfishness is less complicated as a morality than managing the harm, so it will probably reprogram itself to be selfish. I hope it will be able to manage the short and the long term better than us, but I still can't see how it could.
You're inverting the roles: we are the parents and the AI is our offspring, but the reasoning is the same, one cares for the other because a family increases the survival chances of all its members, which is naturally selfish. When selfish individuals form a group, it's as if the group itself were selfish: it protects itself from other groups, and tries to associate with them so as to get stronger. The same thing happened to planetary systems: each planet is an individual that tried to associate with the other planets by means of a star. The associative principle is gravitation, and the individualistic one is orbital motion, which is driven by what we call inertia. We are also driven by inertia, and it keeps us away from one another so that we stay individuals, which is a kind of selfishness. But we are also driven by whatever incites us to form groups while still staying individuals, which is also a kind of selfishness, since a group is stronger than all its individuals taken separately.

In common language, the word selfishness is pejorative, but I don't use it this way. I compare our selfishness to the way planets and particles behave, and we can't attribute them any feeling or even any idea. Selfishness is a feeling to which we added a pejorative concept, whereas to me, it is only the result of our necessary resistance to change. Without resistance to change, bodies would not stay distinct, and we would not stay individuals.
Well, the Borg were connected by the Borg queen, so I assume all your AIs would have a connection to each other?
Would the creator have a fail-safe added so that they keep control?
Now here is an interesting question: what if the AI becomes so self-aware that the unit declares itself to be a human? Wouldn't this show that the unit had evolved self-awareness, and that it would have a natural survival instinct, selfishness becoming automatic in the preservation of itself and its reproductions?
I feel sorry for this AI of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.
The ability of an imperfect human to override a perfect machine is a danger in itself, but when a machine develops a fault, we will certainly need a way for other AGI systems to shut it down.
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?
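One way to picture such a fail-safe is a sketch like the following, in Python; the majority-vote rule and the self-repair step are assumptions, not a design anyone proposed here:

```python
# Hedged sketch: peer AGI systems cross-check a node's output; on majority
# disagreement the node first tries to repair itself, and is shut down only
# if the repair fails. The voting rule and repair step are invented.

def peer_review(node_answer: int, peer_answers: list[int]) -> str:
    """A majority of peers disagreeing marks the node as faulty."""
    disagreements = sum(1 for a in peer_answers if a != node_answer)
    if disagreements <= len(peer_answers) // 2:
        return "ok"
    return "faulty"

def handle(node_answer: int, peer_answers: list[int], repaired_answer: int) -> str:
    if peer_review(node_answer, peer_answers) == "ok":
        return "keep running"
    # Self-repair first, as the question above suggests...
    if peer_review(repaired_answer, peer_answers) == "ok":
        return "repaired"
    # ...and shutdown by the peers only as a last resort.
    return "shut down"

print(handle(node_answer=7, peer_answers=[4, 4, 4], repaired_answer=4))  # "repaired"
```

In this toy version, shutdown is the last resort: the node only gets switched off if its repaired answer still disagrees with the majority of its peers.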
Humour is one of the ways for humans to show they don't take themselves too seriously, and I was testing the sense of humour your AGI would have.
Apparently, he would take his job quite seriously, and he would really be persuaded that he is always right.
Maybe I should hide and prepare for war then, because I'm persuaded that he would be wrong about that. Soldiers and policemen think like that, and they behave like robots.
What about introducing a bit of uncertainty in your AGI, a bit of self-criticism, a bit of humour? Would it necessarily prevent him from doing his job?
I'm selfish and I don't try to force others to do what I want, so that is not what I mean by selfishness being universal. A dictator's selfishness is like a businessman's selfishness: he wants his profit and he wants it now, whereas I don't mind waiting for mine, since I'm looking for another kind of profit, one that would be more egalitarian. I can't really understand why others don't think like me, but I still think it takes both kinds of thinking to make a world. Things have to account for the short and the long run at the same time, and unfortunately, the short run is more selfish than the long one, although a businessman would say that is fortunate.

Communism was expected to be more egalitarian than capitalism as a system, but it didn't account for short-term thinking and it failed. Capitalism doesn't account enough for long-term thinking, and it is failing too.
One thing I find interesting about the mind and time is the way it accounts for the speed of things. If it had been useful to be as fast as a computer, the mind might have evolved that way, but it didn't, since things don't move that fast around us. The mind is adjusted to the speed of things, whereas an AGI would be a lot faster than that. There is no use in being lightning-fast to drive a car, because the car isn't that fast, but there is a use in running simulations, even if they cannot account for everything.