Naked Science Forum

Non Life Sciences => Technology => Topic started by: evan_au on 12/03/2016 09:00:37

Title: Is this a sign of the technological singularity?
Post by: evan_au on 12/03/2016 09:00:37
A series of computer vs world champion matches are playing out with the game of Go (http://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol). 
At the time of writing, three games out of five have been played, and the score is 3-nil in the computer's favor.

But what caught my eye is the following statement (http://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/alphago-wins-game-one-against-world-go-champion):
Quote
AlphaGo’s programmers insist that it now studies mostly on its own, tuning its deep neural networks by playing millions of games against itself.

This suggests that the program does not have any human opponents strong enough (or fast enough) to teach it, so it trains itself by playing against itself.

Is this a symptom of the hypothetical technological singularity (http://en.wikipedia.org/wiki/Technological_singularity) - where humans are outstripped by computers?
Title: Re: Is this a sign of the technological singularity?
Post by: chris on 12/03/2016 10:41:41
Scary prospect, isn't it!
Title: Re: Is this a sign of the technological singularity?
Post by: puppypower on 12/03/2016 13:02:00
Another way to look at this is that the computer is using a kind of trial and error when it plays itself; self-learning. It generates random game scenarios, and then relies on its speed to test all the possible options for all the connected events, governed by the logic of chess.

Say you have a situation that is not as well characterized in terms of logic. For example, suppose we have a game where we don't know all the pieces, nor how all the pieces are able to move. A random approach would then be far less effective, since it would have to randomly add extra pieces and randomly guess how these might or might not move. This may still work if the speed gets extreme enough, so that all the wasted motion is less obvious. Computer speed can dazzle the untrained eye, even if the approach is not all that efficient.
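That brute "generate random scenarios and rely on speed" idea can be sketched with random playouts of a toy game (everything here - the game, the function names, and the numbers - is purely illustrative, not how AlphaGo actually works):

```python
import random

# Toy game: a pile of stones; players 0 and 1 alternate taking 1-3 stones;
# whoever takes the last stone wins. State = (stones_left, player_to_move).
def legal_moves(state):
    stones, _ = state
    return [n for n in (1, 2, 3) if n <= stones]

def apply_move(state, n):
    stones, player = state
    return (stones - n, 1 - player)

def random_playout(state):
    """Trial and error: play random legal moves until the game ends."""
    while legal_moves(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return 1 - state[1]   # the player who just moved took the last stone

def estimate_win_rate(state, playouts=2000):
    """Rely on raw speed: run many random games, count player 0's wins."""
    wins = sum(random_playout(state) == 0 for _ in range(playouts))
    return wins / playouts

print(estimate_win_rate((21, 0)))   # player 0's estimated chance from the start
```

The estimate gets better the more playouts you can afford, which is exactly the "speed covers for inefficiency" point above.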

The human brain works differently from computers in a very fundamental way, based on hardware. When neurons are at rest, before firing, the membrane of the neuron contains stored energy and entropy potential. This is due to the segregation of sodium and potassium cations on opposite sides of the neuron membrane. These two ions would prefer to blend if left to their own devices, increasing entropy and lowering the potential. But the neuron forces them to segregate, lowering entropy and increasing the energy potential.

A neuron at rest is thus held at a high dual potential. It needs to fire to lower energy and increase entropy. Time heals all wounds because neural memory has an innate drive to lower its energy (move toward a state of rest) and increase entropy (alter the memory). The human brain is naturally creative. Our current cultural memories are stored in resting neurons that want to lower energy and increase entropy. Time changes all things.

Computer memory is designed to be stable for long-term storage; it is not supposed to change on its own. With computer memory, the programming provides the logic to alter the memory, using hardware. With living memory, the hardware is its own reason to fire and alter memory. The brain's hardware and software are merged into a kind of firmware, defined by the natural laws of physics governing energy and entropy. The brain's truth has to be consistent with physical law. This is called natural instinct. Humans have free will, so we can try to be more like the computer: long-term dogma that changes only according to the programmed logic of the status quo. Computers already supersede that.

Say we designed computer memory to be more analogous to neurons. We would need to design computer memory to be unstable memory (UM). If you allowed UM to sit for a week or less, it would begin to change spontaneously, as the semiconductor material lowered its chemical potential and flipped the binary switches. This changes bits and bytes, and therefore the very content of the memory. We might begin to see noise in the pictures, but also some useful creative effects. If you wanted this to be less random, you would need to form the memory in such a way that the flux of changing potential, from higher to lower, is based on data priority. This is easier said than done.

Next, we would have a secondary memory, composed of old-fashioned stable memory (SM). The computer would periodically compare the two memories (UM and SM) and, through logic, cherry-pick whichever changes have some logical use. These are added to the SM. Then the computer rewrites the UM in the image of the new SM. Do this at high speed and you create a flux of consciousness in the computer.

The human brain has long-term (SM) and short-term (UM) memory. The short-term memory stays in flux, based on real-time changes. This is compared against the standards of the long-term SM and interpreted, which then rewrites the UM for further processing in real time as the environment changes.
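A rough sketch of this UM/SM cycle in Python (the bit-flip rate and the "usefulness" test are made-up placeholders, not a real design):

```python
import random

def decay(um, flip_prob=0.05):
    """Unstable memory (UM): each bit may flip spontaneously over time."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in um]

def is_useful(changed_bit):
    """Placeholder for the 'logic' that cherry-picks useful changes.
    A real system would need a genuine fitness test here."""
    return random.random() < 0.5

def consolidate(sm, um):
    """Keep the useful UM changes in SM, then rewrite UM in the image of SM."""
    new_sm = [u if u != s and is_useful(u) else s for s, u in zip(sm, um)]
    return new_sm, list(new_sm)

sm = [random.randint(0, 1) for _ in range(16)]   # stable memory (SM)
um = list(sm)                                    # unstable working copy (UM)
for _ in range(5):                               # a few decay/consolidate cycles
    um = decay(um)
    sm, um = consolidate(sm, um)
print(sm)
```

The hard part, as noted above, is replacing the random `is_useful` stand-in with something that biases the drift toward changes worth keeping.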
Title: Re: Is this a sign of the technological singularity?
Post by: evan_au on 13/03/2016 09:51:07
Quote from: PuppyPower
test all the possible options for all the connected events, governed by the logic of chess
There are an average of 30 legal moves at any point in a chess game, and around 40 moves in a game, for a total of roughly 30^40 (about 10^59) possible games.
This is far more than any computer can analyze exhaustively, so chess computers typically look only a limited number of moves ahead (fewer than a human grandmaster).

But Go is even more demanding - there are far more legal moves at any point in the game, and more moves in the game.
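For a sense of the scale, here is the arithmetic using the rough chess numbers above, plus assumed Go figures of roughly 250 legal moves per position over roughly 150 moves (both figures are ballpark estimates, not exact):

```python
import math

# Rough, illustrative branching factors and game lengths:
chess_games = 30 ** 40    # ~30 legal moves per position, ~40 moves per game
go_games = 250 ** 150     # ~250 legal moves per position, ~150 moves per game

print(f"chess: about 10^{int(math.log10(chess_games))} possible games")
print(f"go:    about 10^{int(math.log10(go_games))} possible games")
```

Even with generous rounding, the Go number is hundreds of orders of magnitude beyond the chess number, which is why brute-force lookahead alone was never going to work.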

Quote
Say we designed computer memory to be more analogous to neurons.
This Go implementation is based on artificial neural networks. These are able to recognize patterns when they are trained on enough examples.
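As a toy illustration of learning from examples (nothing like AlphaGo's deep networks), a single artificial neuron can learn the AND pattern purely from repeated passes over four training examples:

```python
# A single artificial neuron trained on examples of the AND pattern.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias

for _ in range(20):                      # repeated passes over the examples
    for (x1, x2), target in examples:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # classic perceptron update rule
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print("learned weights:", w, "bias:", b)
```

Nobody tells the neuron what AND means; the rule is extracted from the examples alone, which is the same principle, scaled up enormously, behind training on millions of self-played games.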

Quote
This is called natural instinct.
This is apparently a major difference between chess and Go. A chess grandmaster can tell you why he made a particular move (or why your move was bad), and chess grandmasters were a vital part of programming chess computers.

But a Go grandmaster can't really explain why a particular move is good - it just "feels" right. This makes experts less useful in programming a Go computer, and is why it has taken computers much longer to reach grandmaster status at Go.

Part of the goal of the project is to develop an analog of this "natural instinct" in a computer.
 
Quote
We would need to design computer memory to be unstable memory (UM). If you allowed UM to sit for a week or less, it would begin to change spontaneously
Computers already have unstable memory (http://spectrum.ieee.org/computing/hardware/drams-damning-defects-and-how-they-cripple-computers) - a data center with 10,000 CPUs can expect around 100 software crashes per day due to memory errors.
Sometimes a memory error causes no problems - if the software immediately overwrites that memory cell. But our software today is not very error tolerant - an error in a cell containing a software address will cause severe problems!
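Turned around, the quoted figure (assuming those numbers are right) means each individual machine crashes from memory errors only about once every hundred days, which is why the problem is easy to overlook:

```python
# Back-of-envelope from the figure quoted above (assumed, not measured here):
# ~100 memory-error crashes per day spread across 10,000 machines.
machines = 10_000
crashes_per_day = 100
mean_days_between_crashes = machines / crashes_per_day
print(mean_days_between_crashes)  # 100.0 days per machine, on average
```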
Title: Re: Is this a sign of the technological singularity?
Post by: guest39538 on 13/03/2016 09:55:48
Quote from: evan_au
A series of computer vs world champion matches are playing out with the game of Go (http://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol). 
At the time of writing, three games out of five have been played, and the score is 3-nil in the computer's favor.

But what caught my eye is the following statement (http://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/alphago-wins-game-one-against-world-go-champion):
Quote
AlphaGo’s programmers insist that it now studies mostly on its own, tuning its deep neural networks by playing millions of games against itself.

This suggests that the program does not have any human opponents strong enough (or fast enough) to teach it, so it trains itself by playing against itself.

Is this a symptom of the hypothetical technological singularity (http://en.wikipedia.org/wiki/Technological_singularity) - where humans are outstripped by computers?

In my opinion, no.

I have no idea how the programming works for this software, but at a guess I presume that all the ''moves'' possible are programmed into the computer, so we are really just playing ourselves, without any mistakes.

Where can I play this computer?
Title: Re: Is this a sign of the technological singularity?
Post by: Colin2B on 13/03/2016 14:49:44
Quote from: guest39538
I presume that all the ''moves'' possible are programmed into the computer,
No, that's the point of this thread - the program learns by playing against itself, rather than having all the moves programmed in.
Title: Re: Is this a sign of the technological singularity?
Post by: evan_au on 16/03/2016 20:19:45
And the final score was: Human World Champion: 1 game; Computer (AlphaGo): 4 games.

Perhaps the human picked up some tricks from the first 3 games? He won game 4, and apparently came fairly close in game 5.

Title: Re: Is this a sign of the technological singularity?
Post by: flr on 17/03/2016 06:10:44
short answer: no.
Title: Re: Is this a sign of the technological singularity?
Post by: Tim the Plumber on 18/03/2016 22:05:55
Humans do stuff that computers can't do, or find very hard, and computers do well at stuff that we find hard.

We find chess and Go hard. These games are very well suited to computers: they are games where every move can be worked out in advance. So now there is a new program that uses experience rather than pure computing power to work out the possibilities. So?

Now let's see it try a situation where there are an infinite number of possibilities.

My bet is on the combination of human and computer winning.