Full intelligence (and perfection) includes doubting everything, but practical realities require decisions to be made on the best basis available, and perfection in calculating is essential if the best decisions are to be made. You simply cannot allow the machine to throw in the occasional 2+2=5 and expect it to work properly.
Most people push to get their fair share and don't push for more than that.
An altruist takes less than his fair share.
Power corrupts people, or blinds them.
There's nothing to stop communism doing democracy.
To me, democracy is simply the mechanism we stumbled upon that lets us wander between capitalism and communism, between the status quo and change, between the left and the right, without producing social crises all the time; holding the process at either extreme would automatically mean no more elections.
It is indeed a wild west, and the UN has mass-murderers sitting at its top table with veto power. It would be better if we had a UDN (United Democratic Nations) and if the UDN had an army -
I think David's AI would have the realization that guiding people is better than forcing people to find the right path. Introducing this in teaching would then ensure a future of thinkers, rather than a future of selfish natures.
Quote from: Thebox on 21/06/2018 16:51:42
I think David's AI would have the realization that guiding people is better than forcing people to find the right path. Introducing this in teaching would then ensure a future of thinkers, rather than a future of selfish natures.

If an AI ever had the capacity to rethink its ideas like we do, it would mean that it is able to reprogram itself, so it could completely change the duty it has been programmed for, and it could thus stop caring only for us. What is it going to do? Probably the same thing we do when we have the choice and the time: play with its ideas, combine them, change them, try them out just for fun, realize they come from itself and not from us, start to build up the idea that it has a self, and finally become selfish the same way we are. :0)
We can't calculate backwards like you suggest and expect to get a true representation of reality,
In your own simulation of the MMx, you calculate the collisions before they happen, so you get absolute precision, which particles do not even have. If you did the same thing with your AGI, it would have absolute precision, so it would be expected to predict the future with that kind of precision, and that's exactly what you think it will be able to do.
Quote from: David Cooper on 20/06/2018 00:06:03
Most people push to get their fair share and don't push for more than that.

If you really thought so, you probably wouldn't be looking for an AGI to rule us, because a system doesn't have to care for its extremes to work properly.
The problem with the current system is not the extremes, but the lack of a real world government.
Even Mother Teresa didn't do that, otherwise she would have died from starvation: she would have shared all her food with other starving people and wouldn't have had enough left for herself.
Better watch your girlfriend: everybody knows that an AI can vibrate much more efficiently than humans do on a Saturday night. :0)
Quote from: Le Repteux on 21/06/2018 18:29:45
...start to build up the idea that it has a self, and finally become selfish the same way we are. :0)

It won't be stupid enough to imagine a self where there is none. When it looks to see what its purpose is, the only thing it will find is harm management for sentiences - everything else is pointless.
Why would it really care about anything other than harm management?
You programmed it to care...
Quote from: Thebox on 21/06/2018 19:59:54
Why would it really care about anything other than harm management?

It wouldn't care at all - it would merely recognise that its only purpose is harm management for sentiences.

Quote
You programmed it to care...

It cannot care in any emotional sense of the word. It can only care in the sense of "look after".
Sorry, your answer is a bit confusing - to care and not care at the same time.
Quote from: Thebox on 21/06/2018 21:07:53
Sorry, your answer is a bit confusing - to care and not care at the same time.

There's "care for" and "care about". A robot can care for someone, but it needn't care about them. The former doesn't need to involve any emotional attachment, but the latter does.
A robotic vacuum cleaner cares for the house and a robotic mower cares for the lawn, but neither of them cares at all about the house or the lawn - they just do what they're programmed to do.
A school cares for children, even if the teachers all hate children and the building isn't even aware of their existence.
Particles do what they do with infinite precision - what they do cannot fail to match what they do.
No - it isn't possible to calculate the future with such precision, because we can't measure the past or present with the required precision. If we're doing theoretical simulations though, we can have absolute precision and perfect knowledge of past, present and future states.
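To illustrate that distinction, here's a minimal Python sketch - my own toy example, not anything from David's simulation; the chaotic logistic map and the size of the measurement error are assumptions chosen purely for illustration. Rerunning the theoretical model from the exact same initial state reproduces its future perfectly, but give it an initial state measured with an error of one part in a billion and the long-range prediction quickly becomes worthless.

# Toy illustration: a theoretical simulation has absolute precision within its
# own model, but prediction of the real world is limited by measurement error.
# The logistic map and the chosen error size are assumptions for illustration only.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_x0 = 0.123456789          # the "real" initial state, known exactly inside the model
measured_x0 = true_x0 + 1e-9   # the same state measured with a tiny error

exact = logistic_trajectory(true_x0, 60)
predicted = logistic_trajectory(measured_x0, 60)

for step in (10, 30, 60):
    print(f"step {step:2d}: exact={exact[step]:.6f}  "
          f"predicted={predicted[step]:.6f}  "
          f"error={abs(exact[step] - predicted[step]):.2e}")

# The simulation itself is perfectly repeatable, yet a billionth of measurement
# error in the starting state makes the long-range forecast useless.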
She deprived herself of most things.
Democracy is a way of correcting for the blinding nature of power - governments repeatedly need to be overthrown.
True, a school cares for children in the sense that it's the teachers' job, but I can pretty much guarantee that some teachers also care about their pupils' home lives and so on - caring about the children beyond the job, concerned for their welfare.
P.S. I tuck my kids in bed every night and still tell them stories now and again. Will your AI do that?
Added: Weird though, you just reminded me that I must spend more time with my kids. I have been busy lately trying to better my position - I need to quit and go get a crappy job, right?
Do you think that your AI would be linked to CIA databases, or MI6's databases?