Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.
Quote from: David Cooper on 13/06/2018 23:52:29
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.

We were talking about the discussion Box sometimes has with Bored Chemist, David, not with you.
Isn't thanking someone a form of empathy in return? A consideration that lets them know they were appreciated?
On the point about AGI making us obsolete, its only purpose is to help us - if it makes us obsolete, it has failed.
We only thank people from our own group, or from associated groups, not from opposed ones. We never care for people who might attack us or our group later on.
Quote from: David Cooper on 12/06/2018 22:57:22
On the point about AGI making us obsolete, its only purpose is to help us - if it makes us obsolete, it has failed.

It may not have failed if artificial intelligence happens to be the next step in evolution, though. One day or another, we will understand how the mind works, and we will be able to reproduce it artificially. The purpose of mind was to help us survive as a species, so the purpose of an artificial mind will simply become to help us survive as an artificial species.

Meanwhile, an AGI may still be useful to rule us, but I'm not satisfied with the kind of morality you want to give it, and I wouldn't give it the capacity to invent new things either, until we know exactly how our own mind proceeds. As you rightly said, evolution is a process that takes time, but you seem so sure of your AGI that you would introduce it in no time if you could. You could try, but I think it would be safer to introduce it progressively. You seem to be afraid that ill-intentioned people will win the race, though, and unfortunately, one certainly cannot win a race by taking more time than the others.
What if the AI module were first presented to the public in a trial period, running a company or business?
David's AGI will not work like that: he will work only for the survival of the whole species. Knowing that groups often attack each other, he will thus prevent us from building any, so he might also prevent us from making friends, or even from building families.
He will then be trying to replace our two most important instincts, which is exactly what religions have tried to do since the beginning without success.
Because surely, if the AI module failed at a task so mediocre for such a unit, the reality would be that there is no hope the AI module could run a world?
Quote from: Thebox on 14/06/2018 22:11:22
Because surely, if the AI module failed at a task so mediocre for such a unit, the reality would be that there is no hope the AI module could run a world?

If it can't handle mediocre tasks, it isn't close to being AGI, and anything less than AGI is uninteresting (unless it can outdo humans at some specialised task, in which case we would only put it in charge of that task and not try to use it to run the world).
Maybe he could work as a judge in a court for a while, so that we could see whether his morality works.
What if the unit were so smart that it knew how to manipulate the stock market?
Quote from: Thebox on 14/06/2018 23:02:00
What if the unit were so smart that it knew how to manipulate the stock market?

There is no other way to win at a game of chance than to cheat, so a computer that is programmed not to cheat would not win.
It is cheating if you use privileged information to make money on the stock market or at any game of chance, so if the AI knows about things we don't know that will influence the market, he shouldn't be allowed to play. Apart from chance, that's the only way to make money with chance games, so if you know anybody who regularly wins at the stock market, call the police. :0)
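For illustration only, here is a minimal Monte Carlo sketch of that point (in Python; the game, the `play` function, and the `edge` parameter are my own hypothetical constructions, not anything from this thread). It assumes a fair even-money coin-flip game: a bettor with no information has an expected return of zero per bet, while a bettor whose guess benefits from even a small amount of privileged information has a positive expectation.

```python
import random

def play(rounds, edge=0.0, seed=42):
    """Simulate a fair even-money coin-flip game.

    edge: probability that the bettor's guess benefits from privileged
    information about the outcome (0.0 = pure chance, a 50/50 guess).
    """
    rng = random.Random(seed)
    bankroll = 0
    for _ in range(rounds):
        outcome = rng.random() < 0.5           # fair coin
        # With probability `edge` the bettor "knows" the outcome;
        # otherwise the guess is an independent 50/50 coin flip.
        if rng.random() < edge:
            guess = outcome                     # privileged information
        else:
            guess = rng.random() < 0.5          # honest guess
        bankroll += 1 if guess == outcome else -1
    return bankroll / rounds                    # average return per bet

print(play(1_000_000))             # ~0.0: no edge, no long-run profit
print(play(1_000_000, edge=0.02))  # ~+0.02: a small information edge pays
```

The arithmetic behind it: with an edge e, the probability of a correct guess is 0.5 + e/2, so the expected return per unit bet is exactly e. At e = 0 the game cannot be beaten in the long run, which is the post's point about a non-cheating computer.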
If I am correct, and of course I think I am :0), an AGI could not predict the evolution of society any more than he could predict that of the stock market, so the only way for him to control it would be to cheat, which means taking decisions whose outcome he already knows, which means preventing society from evolving in any direction other than the one he chose. If the evolution of species had been controlled by David's AGI, it is easy to see that we would not be here to talk about it, because the best direction to take would have been to develop anything but humans. :0)
Then he had better get armed before humans from other planets, ones that have evolved freely, discover what he has done! :0)