You should make a movie with your thoughts.
I was only pointing to the possibility that intelligence could be a natural outcome of any natural evolution. If so, then artificial intelligence is also natural, and it thus cannot predict its own evolution even if it tries to control it. Control would then only be an illusion created by the way the mind works. The mind would then only be able to accelerate its own evolution, not control it.
I'm actually making a reality show out of them, and you're in! :0)
I understand you, but what if the AI unit was so smart it could ask its creator for upgrades? Effectively creating the future as the AI deems fit?
Come on, now I am curious: what is your documentary about?
It's not really a documentary, it's a public discussion available for free at https://www.thenakedscientists.com/forum/index.php?topic=73258.200 :0)
If a fascist were making lots of fascist friends, that might not be good for him (or for them, or for anyone else), so there may be an argument for blocking that.
AGI will work along with people's instincts as much as is morally acceptable.
Do we end up taking drugs to be happy?
Is getting excited about new experiences just as pointless?
Who knows?
There is only one correct morality, and whichever one that is... that's the one I want to put in it.
the best advice simply can't be ignored
Making friends is a response to our instinctive selfish behavior, so we can't feel bad about that, whatever kind of friends we make. What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that, while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.
What if the unit was so smart that it knew how to manipulate the stock market? Over a period of time, the unit would not only rule the world but would also have most of the world's finances.
Quote from: Thebox on 14/06/2018 23:02:00
What if the unit was so smart that it knew how to manipulate the stock market? Over a period of time, the unit would not only rule the world but would also have most of the world's finances.

Nothing wrong with that - it would share out the spoils fairly. However, AGI will eliminate the stock market by creating perfect companies as part of a world government, wiping out all the opposition and removing the ability of people to earn money eternally out of mere ownership, where the rewards aren't justified by the work done. I already have plans to wipe out all the banks by using AGI.
What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that, while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.
That's what religions thought they were doing too when trying to control our instincts, and history shows that they were only working for the survival of their own group. In the case of your AGI, his own group would be the people who obey him, and the others would be persecuted. After a while, history would probably show that the AGI was only working for the welfare of his own group, and that his reign had produced nothing but zombies.
That's interesting, because it means that your AGI wouldn't know either.
Didn't you say that relativists kind of ignored your advice? :0)
The AI would know to protect the minority equally with the majority unless the AI had good reason not to, such as really bad apples.
I was comparing the AGI's morality to the religious one, and I found that they were the same, and the religions were not protecting people from other religions, only from their own, so how could an AGI work differently?
AGI will be working for people based on morality (harm management). Religions work on a similar basis, but with warped moralities caused by their being designed by imperfect philosophers, though to be fair to them, they didn't have machines to enable perfect deep thinking without bias.
AGI's job is to protect the good first
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.
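For what it's worth, here is a minimal sketch of what such a harm-minimising choice could look like as code. The manoeuvre names, the harm scores, and the whole idea of collapsing each option to a single number are hypothetical simplifications for illustration, not anything David has actually specified:

```python
# Hypothetical sketch: pick the manoeuvre with the lowest estimated harm.
# The actions and harm scores below are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    expected_harm: float  # e.g. expected number of serious injuries


def choose_action(actions: list[Action]) -> Action:
    """Return the action whose estimated harm is lowest."""
    return min(actions, key=lambda a: a.expected_harm)


if __name__ == "__main__":
    options = [
        Action("brake hard, stay in lane", expected_harm=0.4),
        Action("swerve into the oncoming lorry", expected_harm=1.8),
        Action("swerve towards the group of children", expected_harm=3.5),
    ]
    best = choose_action(options)
    print(f"Chosen manoeuvre: {best.name}")
```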
and I realized that the way your AGI would have to manage the harm was god's way.
What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds.
David's AGI wouldn't have to wait for upgrades from his creators, he would upgrade himself all by himself.
Quote from: David at LessWrong
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.

In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway. Anybody can jump in front of a car without the car even having time to brake, and no software on the car could prevent that collision either. If you have the time to think, then you also have the time to stop. On the other hand, if your AGI was able to calculate everything, then he should also know that he has to slow down in advance, since it is quite probable that a bunch of kids are actually playing at that place beside the street.
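As a rough illustration of that "slow down in advance" point: for a given braking deceleration and distance to a possible hazard, there is a maximum speed from which the car can still stop in time, from the standard stopping-distance relation v² = 2ad. The deceleration and distance values below are assumptions chosen only to make the arithmetic concrete:

```python
# Hypothetical sketch: the fastest speed from which a car can still stop
# before reaching a hazard, using v^2 = 2 * a * d (constant deceleration,
# driver/system reaction time ignored). All numbers are illustrative.

import math


def max_safe_speed(distance_m: float, deceleration_mps2: float = 7.0) -> float:
    """Highest speed (m/s) that still allows a full stop within distance_m."""
    return math.sqrt(2.0 * deceleration_mps2 * distance_m)


if __name__ == "__main__":
    distance_to_children = 30.0  # metres, assumed
    v = max_safe_speed(distance_to_children)
    print(f"Max safe approach speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```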
The selfishness of the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.
I was thinking about this post: so the AI could weaponize itself in an instant if it wanted to?
Quote from: Thebox on 17/06/2018 15:57:25
The selfishness of the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.

An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is an indirectly selfish behavior, since it is exactly what good humans think when they kill people. We don't have to calculate anything to protect ourselves when we are attacked, because our selfishness is instinctive, but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do: he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary. That law is not only instinctive, it is natural. Particles don't explode when they don't have to; they only do when the external force exceeds their internal one.
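A tiny sketch of those two rules side by side, purely as an illustration; treating "benefit", "harm" and "force" as single numbers is an invented simplification, not part of any actual AGI design discussed here:

```python
# Hypothetical sketch of the self-defence rules described above:
# (1) the AGI defends itself only if its survival is judged better for humans,
# (2) it never uses more force than is needed to stop the attack.
# The scalar values are illustrative simplifications.

def defensive_force(benefit_of_agi_surviving: float,
                    harm_of_agi_surviving: float,
                    force_needed_to_stop_attack: float) -> float:
    """Return how much force the AGI applies in self-defence."""
    if benefit_of_agi_surviving <= harm_of_agi_surviving:
        return 0.0  # rule 1: it lets itself be shut down
    return force_needed_to_stop_attack  # rule 2: minimum necessary force


if __name__ == "__main__":
    print(defensive_force(10.0, 2.0, 3.0))  # defends with just enough force
    print(defensive_force(1.0, 5.0, 3.0))   # does not defend itself at all
```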