Or realistic.
Of course humans can make mistakes; however, humans learn from their mistakes. Surely a world constitution devised by people with intellect could set a precedent to follow?
How does an AI know right from wrong?
It is programmed, so who sets the standard? What says these standards are objective and free of their own mistakes?
Quote: Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.

That remains true only if you have not totally gone down the wrong path.
It will be a trial-and-error process that will have to run for generations. Such a process is impossible to control, so nobody should feel controlled during that time. In the end, everybody should easily be able to respect the rules that will have been developed along the way.
Mutations also lead to useful advances for the species that manage to evolve instead of disappearing, and they are random.
That's also what happens with species: their evolution is necessarily guided, otherwise a lion could become a tree, and in a single generation.
And those experiments are necessarily random; otherwise they wouldn't be new, since they would come from the same algorithms.
I didn't follow your idea because I couldn't see how particles could do that. To me, it would simply have been a more complicated fudge solution.
...so that late detection was welcome.
An AGI would be maximizing altruism, and humans are maximizing selfishness: it's not what I would call the same rules.
I know that people born into a polluted environment would get used to it, and that they wouldn't regret the past.
I don't regret not having known my grandparents' time, for instance, and I'm not even sure I would have liked it.
I don't know about an AGI, but for me, stupidity always seems to belong to others, and good people always seem to belong to my own group. At 94, my mom is slowly losing her mental capacities, and she still thinks I'm the one losing his. We can't observe our own stupidity; we can only deduce it from observing others. It's a relative phenomenon that turns into resistance when things change. Stupidity then often turns into aggressiveness, and then it is easier to observe our own, the same way we can observe our own resistance to acceleration.
I already suggested to David that he prepare two AGIs, one defending change and the other continuity, so that we could swap them after five years if we feel that things must change. That would give us the feeling of not being controlled, and in my opinion, it would be better for the evolution of society, because it would create more diversity, which is the common characteristic of all evolutionary processes.
By calculating how much harm different courses of action would cause. If you continually follow policies that reward population growth, don't be surprised if quality of life goes down and the environment is systematically trashed.
And how do you avoid going down the wrong paths? You follow the paths that are most likely to succeed first. It's by randomly selecting paths and ignoring how likely they are to lead to something useful that you reduce your success rate.
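That search strategy - making experimental changes in different directions and pushing further where the payoff is - is essentially greedy hill climbing. A minimal sketch (the objective function, step size, and iteration count are invented for illustration, not anything from the discussion):

```python
import random

random.seed(0)  # reproducible toy run

def hill_climb(f, x, step=0.5, iterations=200):
    """Greedy search: try experimental moves in different directions,
    keeping only those that improve the objective f."""
    best = f(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        score = f(candidate)
        if score > best:  # push further in the direction that pays off
            x, best = candidate, score
    return x

# Toy objective with a single peak at x = 3 (an assumption for the demo).
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 2))
```

A blind variant that accepts every random move regardless of payoff wanders instead of converging - which is the contrast being drawn here with purely random selection of paths.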
There is no diversity in being right.
In reality though, the two AGI systems would agree with each other on every issue because they're designed to produce the best decisions possible based on the available information, so there would be no conflict between them.
For humans, that kind of decision depends on how they feel.
It is all about producing proofs as to which morally acceptable courses of action are likely to be best, and when intelligent machines are producing better numbers for this than any humans are able to do, the humans lose the argument every time (unless they agree with the machines). You can only go against the advice of the machines so many times before you learn that you'd do better to trust them - going against them will lead to lower quality of life every time.
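The calculation meant here is essentially an expected-value comparison across courses of action. A toy sketch, where every action name, probability, and harm score is invented purely for illustration:

```python
# Hypothetical outcomes per action as (probability, harm) pairs.
# All numbers below are made up to illustrate the comparison.
actions = {
    "build_more_roads":  [(0.7, 40), (0.3, 10)],
    "invest_in_transit": [(0.6, 15), (0.4, 25)],
}

def expected_harm(outcomes):
    """Weight each outcome's harm by its probability and sum them."""
    return sum(p * harm for p, harm in outcomes)

least_harmful = min(actions, key=lambda a: expected_harm(actions[a]))
for action, outcomes in actions.items():
    print(action, expected_harm(outcomes))
print("recommended:", least_harmful)
```

Whoever produces the better-grounded probabilities and harm estimates wins this kind of argument, which is the point being made about machines out-calculating humans.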
Blind evolution is inherently stupid, even though it can produce intelligence if you give it long enough.
Quote: That's also what happens with species, their evolution is necessarily guided, otherwise a lion could become a tree, and in one generation.

Quote from: David Cooper on 07/06/2018 20:54:17
It's only guided afterwards, and there are a lot of losers where it goes the wrong way.
The primary selection mechanism of such evolution is death. With intelligent evolution, we avoid all that wastage.
Real particles always move at the finest level of granularity (which most likely means little jumps of the quantum leap variety). In a program, we only use rough granularity to reduce the amount of processing that needs to be done, but nature always does the full thing without trying to compress the calculations (and it isn't even doing any calculations).
I don't think so - they'll hate the selfish people who landed them in that situation.
Stupidity is the norm. We're pouring money down the drain on fake education to qualify people for jobs that shouldn't exist because they do more harm than good. By maintaining astronomical amounts of fake work (which makes everyone poorer), we increase the "need" for all manner of services (roads, airports, high-speed rail, concrete prisons for workers to waste their lives in, etc.) to support people in their counterproductive work, and then we wonder why quality of life is going down. But they don't learn - you show them the mistakes they're making, but they just go on and on making them regardless, and millions starve to death every year because of this.
Ostensibly, feelings sometimes play no part in decision making. Example: I am feeling tired; should I continue and finish this work off tonight? Well, it has to be handed in tomorrow a.m., so I must, regardless of feelings.
The thought process can reduce options to a 50/50 choice. However, a truly intelligent unit would not be happy with 50/50; he would demand an absolute answer.
What would be intelligent in this case is to toss a coin instead of calculating the risk. It would be a lot faster and just as efficient. We don't do that because we don't always feel like moving, and the coin might force us to. If we don't feel like moving, we don't, and if we do, we do whatever comes to mind.
At a guess, about 99% of the general population have AI, compared to the 1% who have real intelligence and are self-aware. The AI section of the world is clueless, following anything they are told.
My personal AI tells me the priority would be to devise a sufficient plan to reduce the population. Thereafter a reduction would be implemented, along with birth control. I would remove the free right to procreate without consideration for the future. I would require an application for permission to have children, with applicant couples being ''screened'' before approval.
How fast do you think your AI would spot an error?
Would he see it as being an error?
You think Mr Trump is not a good AI unit?
What if your AI created an error, then by trying to fix it made a bigger error?
What if he just kept making the error worse?
The thought process can reduce options to a 50/50 choice. However, a truly intelligent unit would not be happy with 50/50; he would demand an absolute answer.
It means that if we were all AGIs, we would all think the same. I can't help but imagine billions of clones replacing us after your AGI has grabbed the reins. :0)
That's without accounting for the uncertainty margin when the odds are close to 50/50. Once elected, one of the AGIs could then be programmed to change something, and the other not to change anything.
After a while of that process, humans may accept AGIs, but not in the beginning.
The first lesson from Evolution is that we will never know what's coming next, and thinking that we know just because we are intelligent is wishful thinking.
The second lesson is that we were lucky to get selected, and thinking that intelligence is a natural outcome is hubris.
...there might be no need for an AGI to lead us anymore.
It is guided by the environment after the fact, and by the mutations before the fact, which is exactly what happens with intelligence if we consider that ideas can mutate.
Individuals that are not selected are not lost in the process; they have to live for the species to have time to transform,
and it's the same for ideas: we have plenty of them in mind that don't change while we are developing new ones.
Go take a look at the online patent office, and you will see that very few of them make sense. The reason they are kept is the same as for mutations: it may happen that they mutate again, and that the new mutation gets selected.
Nature can't be absolutely precise either, that's what quantum effects mean. It gets more and more precise going down inside the particles, but it cannot apply that precision backwards to larger scales instantaneously.
People that are part of the same group automatically feel that their pals are less stupid than the members of other groups, and they even feel that their leader is intelligent. That's bad news, but that's how things work,
and an AGI could do nothing about that.
Quote from: Thebox on 06/05/2018 10:14:38: At a guess, about 99% of the general population have AI compared to the 1% who have real intelligence and are self aware. The AI section of the world being clueless and following anything they are told.

Yo @Thebox, I think you're confusing something important here... The fact that a lot of people may be - without being aware of it - part of what is being called "artificial intelligence" does not mean in any way that you or me are essentially robotic sex slaves or Russian trolls... I do agree, however, that the distinction between artificial and human intelligence is poorly understood by many of us! But get this: if it's possible to weaponize "artificial intelligence", then it should also be possible to weaponize human intelligence! tk
Quote from: Thebox on 07/06/2018 22:27:18: The thought process can reduce options to a 50/50 choice option. However, a real intelligent unit would not be happy it was 50/50, he would demand absolute.

If AGI calculates that it's 50:50, it's 50:50 - that probability is as absolute as it gets.
Why would we want to become AGI?
Strange uncertainty units then, I am glad I am human as I can have absolute answers. P=1
Quote from: Thebox on 09/06/2018 12:35:16: Strange uncertainty units then, I am glad I am human as I can have absolute answers. P=1

Have you tested that against a tossed coin? Can you predict with absolute certainty which side will end up on top each time?
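For what it's worth, the 50:50 claim is easy to check empirically: simulate a fair coin and the frequency of heads settles near 0.5, while no single toss is predictable. A quick sketch (the toss count and seed are arbitrary choices for the demo):

```python
import random

random.seed(1)  # reproducible run
tosses = 100_000
# Count a head whenever the uniform draw falls below 0.5 (a fair coin).
heads = sum(random.random() < 0.5 for _ in range(tosses))
frequency = heads / tosses
print(frequency)  # hovers near 0.5; each individual toss stays unpredictable
```

The long-run frequency converges to the probability, but that gives no "absolute answer" about any single toss, which is the point of the question above.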