The Naked Scientists Forum
Non Life Sciences
"Electro ..... signals in the computer?
"Electro ..... signals in the computer?
0 Members and 1 Guest are viewing this topic.
"Electro ..... signals in the computer?
28/08/2014 13:04:05 »
They say the computer operates like the brain. Well, actually, they say the brain operates like a computer but my thinking is it should be the reverse for obvious reasons. So, my questions:
In the brain, electrochemical signals travel across synapses to trigger messages to areas of the body and/or mind. I have a feeling the same process takes place in the computer: the messages the operator calls for have to be sent through the system to the correct destination (the asked-for information).
If I am right, what is this signal system called and where does it operate? What is the equivalent of the brain's "synapses" and of the brain's "electrochemical signal"?
I did try Google but got long drawn-out articles about how computers work and never found what I wanted.
Thank you for a very simple reply to my two questions.
Neilep Level Member
Re: "Electro ..... signals in the computer?
Reply #1 on: 28/08/2014 19:41:17 »
Knowing about synapses and electrochemical signals doesn't tell you very much about how the brain works - all it does is tell you the mechanism by which one neuron can send a signal to another neuron. To understand what the brain does you have to look at how collections of neurons are wired together into functional units to carry out specific tasks, and collectively all these functional units are tied together in a way that enables the brain to work as an information processor.
With a computer it is very similar. There is a processor which acts like the brain, and within the processor there are functional units which carry out specific tasks. The components within these functional units are logic gates, arranged in ways designed to provide the functions the processor requires. In the brain the design isn't all done in advance, so much of it has to be set up through a learning process in which networks of neurons are trained to carry out the specific tasks required of them, and their performance is never guaranteed to be perfect. They perform, though, in much the same way as the groups of logic gates in computer processors, although they are much messier and contain a lot of extra complexity which is only necessary because they are set up through learning rather than through intelligent design.
What you need to understand first is how a group of logic gates or neurons can carry out a useful task. Neurons are complicated and messy, but a collection of several hundred can carry out the same task as a few logic gates in a processor. So if you understand how logic gates carry out tasks, it's easy to imagine how neurons can do the same job: they simply add a lot of complexity that contributes no extra function but gives the collection of neurons enough flexibility to learn to link up in the right ways and carry out the simple task with reasonable reliability. So you need to see a specific example using logic gates.
There are different kinds of logic gates. One of them is an OR gate. This takes two input lines and has one output line: if a signal comes in through either or both of the input lines, a signal is sent out down the output line. A neuron can act in the same way. Another kind is the AND gate. This sends a signal out of its output line only if it receives signals through both of its input lines; if it receives one input or none, it sends no signal out. Again, a neuron can behave the same way.
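The two gates just described can be sketched as tiny Python functions (a toy model, with 1 standing for "signal" and 0 for "no signal"):

```python
# Minimal sketch of the two gates described above.
# Inputs are 0 or 1, mirroring "no signal" vs "signal".

def or_gate(a, b):
    # Fires if a signal arrives on either (or both) input lines.
    return 1 if (a or b) else 0

def and_gate(a, b):
    # Fires only if signals arrive on both input lines.
    return 1 if (a and b) else 0

# Print the full truth table for both gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR ->", or_gate(a, b), "AND ->", and_gate(a, b))
```

Running this prints all four input combinations, showing that OR fires on any input while AND needs both.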
If you want to add two numbers together, you can do it with an arrangement of logic gates: send the two values in at one end and the answer comes out at the other. Collections of neurons can carry out the same task in the same way.

Suppose you are adding a number which may be 0 or 1 to another number which may be 0 or 1. The first number is sent into the system down one input line and the second down another, both leading to the same AND gate. A zero is represented by no signal, so an incoming signal represents 1. If signals come in down both lines, the AND gate sends out an output signal which means 2. If the AND gate sends no signal, the answer is either 0 or 1. The same input signals are also fed into an OR gate, which sends a signal out if either or both of the inputs are 1, so this gate's output says whether the answer is 1 or 0 when it isn't 2.

By adding more gates to this functional unit you can sort out the results and deliver the answer down two output lines representing values from 0 to 3 (though it can never be 3 when the two inputs cannot each be greater than 1). A more complex functional unit may take two or more input lines per number being added, with correspondingly more output lines to represent the wider range of answers. In a typical computer processor there are 32 or 64 input lines for each of the numbers being added, and 33 or 65 output lines to send out the result. Behind that is a complex mess of logic gates tying together lots of subunits, each doing the simple thing described above. [How it's actually done is slightly different and more efficient, but it would need diagrams to explain, so what I've given you instead is the minimum necessary for you to understand how in principle it can be done.]
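The AND-plus-OR adder described above can be sketched in a few lines of Python. (A real half adder uses an XOR gate for the units bit; the extra "sorting out" step below is exactly that XOR, built from the post's two gates.)

```python
# Sketch of the two-gate adder described in the post:
# the AND gate says "the total is 2", the OR gate says "the total is at least 1".

def and_gate(a, b):
    return a & b

def or_gate(a, b):
    return a | b

def add_one_bit_numbers(a, b):
    twos = and_gate(a, b)          # 1 means the answer is 2
    at_least_one = or_gate(a, b)   # 1 means the answer is 1 or 2
    # One extra gate "sorts out the results": the units bit is 1 only
    # when at_least_one fires and twos does not (this is XOR).
    ones = at_least_one & (1 - twos)
    return twos * 2 + ones

# Check every combination of two one-bit inputs.
for a in (0, 1):
    for b in (0, 1):
        assert add_one_bit_numbers(a, b) == a + b
```

Chaining many such units, with the "twos" line carried into the next stage, is how the 32- or 64-line adders in real processors are built up in principle.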
Now, you talk of communication in your post, but that's a level up from this low level stuff with logic gates or neurons. For communications you need long-distance wiring to send signals along. In a computer you have those too, with some of them being short links within the processor while others are long ones leading out to webcams, microphones, keyboard, mouse and screen. In a computer, the processor communicates with these via ports. The number that comes out of a maths calculation could be sent to the screen as a binary signal just by making a pixel white for a 1 and black for a 0, so it would be possible to feed the outputs from an addition function unit into a port that displays the result directly on the screen as a pattern. Alternatively, it could be processed first and sent to the speakers as a series of short or long beeps, while in the brain it can be processed to turn it into number words that can be spoken.
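The idea of sending a number to a port and painting it as white/black pixels can be illustrated with a toy sketch (the "port" here is just the terminal, and the pixel characters are my own stand-ins):

```python
# Hypothetical illustration: send a calculated number to a "display port"
# as a row of pixels, one per binary digit (■ for 1, □ for 0).

def to_pixels(value, width=8):
    # format(..., "08b") gives the value as a zero-padded binary string.
    bits = format(value, f"0{width}b")
    return "".join("■" if bit == "1" else "□" for bit in bits)

# The number 5 is 00000101 in binary:
print(to_pixels(5))  # □□□□□■□■
```

The same bit pattern could just as easily drive long and short beeps instead of light and dark pixels; the processor's output doesn't change, only what the port is wired to.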
The whole process is much the same for brain and computer, but there is a big difference: the brain is far more parallel in its processing. A computer processor runs a program which it has to keep reading in from memory, and that program dictates how it reads and acts on other data coming in from memory and from ports. The brain's program, by contrast, is built into the arrangement of high-level processing units, and these run all the time in parallel instead of a single processor taking turns at all the tasks in rotation.

Newer computers have more than one processor, but to work more like the brain they would need hundreds rather than a few, and each of those processors could then be dedicated to a single task with its program built in permanently instead of having to keep reading it from memory to know what to do next. On machines with multiple processors, the processors communicate with each other by putting data in memory for each other to work with, and results are sent back the same way. The brain does a similar thing, but it has direct wiring between its processors, so data can be sent back and forth for specific purposes without going via memory. It would be possible to do this with computers too, but it is too soon: we don't yet know the best way to wire the different processors together, because we don't know which tasks they should be dedicated to for maximum efficiency. So we need to keep things flexible by controlling them with program code (sitting in memory) and by passing data between them through memory rather than direct wiring.
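Two processors exchanging data through memory, each dedicated to its own task, can be sketched with Python threads and a shared queue (a toy stand-in, not how real multiprocessor hardware is programmed):

```python
import threading
import queue

# Toy sketch: two "processors", each running its own fixed task,
# communicate by leaving data in shared memory (a queue) rather
# than by being wired directly to each other.

shared = queue.Queue()
results = []

def producer():
    # Dedicated task: compute some values and leave them in memory.
    for n in range(3):
        shared.put(n * n)
    shared.put(None)  # sentinel: no more data coming

def consumer():
    # Dedicated task: pick values up from memory and process them.
    while True:
        item = shared.get()
        if item is None:
            break
        results.append(item + 1)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [1, 2, 5]
```

Direct wiring between the two workers would remove the queue in the middle, which is exactly the flexibility-for-speed trade-off described above.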
Brains took millions of years to evolve efficient ways of doing things, whereas we are just working things out now, and we need to be able to modify our machines repeatedly without having to build a new one with different specialised wiring every time we want to change the programs they're running. It may be that some day we'll be able to build a perfect thinking machine with direct wiring and with all the programs built directly into the high-level functional units that have to perform specific actions endlessly, but we are not ready to do that yet: it would be a costly mistake to lose all the flexibility before we know the design is right. That will remain the most significant difference between brains and computers for a long time to come (ignoring sentience/consciousness issues, but that's a whole other discussion).