The only thing I don't like is not being able to move freely because the system doesn't permit it, and that's a bit how I feel when I discuss this with you: I feel there is no place for me in the system you want to develop. What's the use of living if the computer always finds better solutions than you do, and if your only pleasure is finding solutions?
We haven't heard much from chess masters lately. They are probably looking for a game that can beat the computers, like programming them for instance, but what will happen when computers are able to program themselves?
There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player.
Would you like me to calculate that for you? 1/3. A coin has three sides, not two, and you thought it was 50/50?
All a part of thinking! Absolute is knowing. I know absolutely that the coin will land in a gravity environment. The side it lands on is irrelevant unless you are betting.
AI can't do what I just did, David; it could never have NAi. I am unique and individual.
Quote from: David Cooper on 09/06/2018 20:40:50
"There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player."

Well, a smart chess player would just "pull the plug" and say: your move, smart ass, work that out.
Quote from: Thebox on 09/06/2018 20:46:26
Quote from: David Cooper on 09/06/2018 20:40:50
"There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player."
"Well, a smart chess player would just 'pull the plug' and say: your move, smart ass, work that out."

That would be a bad loser rather than a smart player.
As I said before, there is no need for it to force you to have the best life possible - you are entitled to make lots of mistakes, but not when they damage other people. AGI should warn you though if you're going to do serious damage to yourself by making a bad decision, although you'll be able to decide for yourself how bad that damage is allowed to get before you're warned about it. Given that making mistakes and having a bad time can be looked back on as a good time in the form of an adventure that gives you a tale to tell, this should not be eliminated. From bad decisions can come a lot of excitement.
Loser? How did I lose when I won? The natural, intelligent and logical way to beat a machine that you programmed with every possible move and solution is to turn it off. The computer has no answer to that logical solution. The computer is not a living thing; the computer does not understand compromise.
Just to add, a human player is not playing a computer, a human player is playing the entirety of science put into that computer.
P.S. What do you mean, bad? So if the Terminator was after you, would you consider it bad to beat him by cheating a little?
If the machine plays the same game as you, it will kill you and claim victory.
There is no excitement for me in looking for an answer if I know the computer already knows it. When I need information from the past, I google it, and if I could ever google information from the future, I would feel completely useless, and I would probably look for a way to kill myself without the computer being able to predict it and prevent me from doing so. I might not feel like that if I had been born into such a world, as in Orwell's 1984 for instance, but I still can't understand how you can imagine yourself liking it, except by imagining that the AGI is you.
That is interesting, David, that you would put kill commands into the AI programming. I could simply turn the power back on to the computer; I have not killed it, because it was not alive to begin with.
Do you feel a human life is worth less than a robot's downtime?
See, David, there is no logic in killing something that dies anyway.
It wouldn't need to kill you to win by cheating - it could simply tie you up.
AGI will get rid of most of the horrors of the world and free us up to have more fun.
enjoy pushing back the barriers of what you think can be done
My only fun is finding a problem and solving it, and you say your AGI would be able to do the same much more efficiently, so where exactly would my fun be?
And where would yours be, with no problem to address either, and no more AGI to improve, since it would be a lot better than you at programming?
How could I enjoy pushing back barriers that the AGI could push ten times as fast?
Your problem would be finding a problem.
No one else will care if you achieve something, other than benefiting from your work, so all you can hope for is sufficient financial reward to be able to make up for lost time.
I'm talking about things that AGI won't do. AGI won't climb a mountain for the fun of it, or build a boat and cross an ocean in it in search of adventure, or take part in R2AK. Where is the real satisfaction in spending years working on problems in physics or artificial intelligence when all the time you're doing that work you're aware that you're missing out on real living?
I can't understand people that don't like what they do while still doing it.
Robots are actually replacing humans at doing what they don't like to do, so one day or another, we won't have to work for a living; work will be a pleasure. Will there still be wars when that time comes? Probably not between humans, since we don't really like making war. Will there still be too much pollution? Probably not, since women don't make as many babies when they can do something else, and it is even possible that they won't have to make the babies anymore if we can find an artificial way. If that time comes, then we might not need an AGI to rule us, but I'm still interested in how a computer would be able to outperform humans at inventing the future. If that happened some day, then humans would become obsolete, and their only use would be to go on living in case the computers face a change that they fail to overcome.
We do research simply because the future is uncertain, so we try to adapt in advance to the changes that might happen. If we think a comet might hit the earth some day, we try to find a way to deal with it in advance. It doesn't mean that we will find one; it simply means that we are able to try. David thinks that since we are trying, we will automatically succeed, but that is a wish, not a fact. What happens is that, when we finally get where we wanted to go, we can forget about the problems we had, and we can then think that the road was easy. It sometimes happens to be easy, but most of the time, it is not.