Naked Science Forum
Non Life Sciences => Physics, Astronomy & Cosmology => Topic started by: flr on 10/10/2013 07:46:30

An algorithm shows the steps that solve a problem. For example, the following algorithm calculates the factorial of a number, N! = 1 * 2 * 3 * ... * (N-1) * N :
P := 1;
for i from 1 to N do
  P := P * i
od;
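The pseudocode above translates directly into Python; the loop variable and the accumulator play exactly the same roles:

```python
def factorial(n):
    """Compute n! = 1 * 2 * 3 * ... * (n-1) * n, as in the pseudocode above."""
    p = 1                      # P := 1
    for i in range(1, n + 1):  # for i from 1 to N do
        p = p * i              # P := P * i
    return p

factorial(5)  # -> 120
```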

I was wondering: how does our mind 'compute' [understand/operate] with concepts?
Can the way we understand concepts be split into steps, like in an algorithm?

I think concepts are covered by "gist" ... http://en.wikipedia.org/wiki/Semantic_memory#Models

Obviously, there are many problems that are computable, like the example you have given, which can quite happily be solved by a Turing machine. However, such problems are solved on the basis of certain axioms, and it has been shown (by Gödel's incompleteness theorems) that within any reasonably complex set of axioms there exist statements that are not provable within the system, and that contradictions can arise in attempting to prove them. This is why algorithms are useful but not infallible in trying to model the world: they are essentially self-referential and may not represent phenomena that do not fit into their methodology. For this reason it is doubtful whether the human mind will ever be reducible to an algorithm based on axiomatic rules.

For this reason it is doubtful whether the human mind will ever be subject to being reduced to an algorithm based on axiomatic rules.
Interesting; however, I don't understand how else a problem can be solved if it is not broken down into an algorithm.

On the other hand, let's take the algorithm:
do while (1==1)
enddo
This algorithm never finishes.
No algorithm can be devised for a Turing machine that tells, for an arbitrary program, whether it finishes in finite time or runs forever (the halting problem).
I find it interesting and intriguing that the human mind will almost instantly recognize that the above code will never finish.
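Turing's argument for why no general halting tester can exist can be sketched in a few lines. The sketch below is hypothetical: `halts` stands in for a claimed halting-decider, which does not actually exist; whatever candidate is supplied, we can construct a program that does the opposite of the candidate's prediction about it, so no candidate can be correct on every program.

```python
def make_paradox(halts):
    """Given a claimed halting-decider halts(program), build a program
    that defeats it by doing the opposite of its prediction."""
    def paradox():
        if halts(paradox):
            while True:  # the decider said "halts" -> loop forever
                pass
        # the decider said "loops forever" -> halt immediately
    return paradox

# Any stub decider is wrong about the program built from it. For example,
# a decider that always answers "never halts" yields a program that halts:
p = make_paradox(lambda prog: False)
p()  # returns immediately, so the stub's answer was wrong
```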

The way human beings think in the real world does not rely on algorithms but on probabilistic, gut-feeling modes. This is because we can never be 100% sure that nature will behave as expected. A computer algorithm is not adaptive enough to cope with a new situation when conditions change; human beings, by contrast, take a flexible approach to problems. Your example highlights the ability of a human mind to recognise something as true, while a computer will blindly follow its code without any real understanding.

Hmm, are you saying that there is no program able (no way) to test whether this statement has a finite result or not?

It is not about having a finite result. The question was about the halting problem, i.e. can a given hardware architecture (such as a Turing machine) tell whether a given piece of code will finish in finite time?
No algorithm can be devised that tells a computer, for every possible piece of code, whether code like
do while (1==1)
enddo
will finish in finite time.
We can, however, recognize in an instant that the above code will not finish in finite time.
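In practice, all a computer can do is run a program for some bounded number of steps and give up if the bound is exceeded: it can confirm halting, but it can never confirm non-halting. A minimal sketch (the step-function framing here is just an illustration, not a standard API):

```python
def run_with_step_limit(step, state, max_steps):
    """step(state) advances the program one step, returning (done, new_state).
    Reports "halted" if the program finishes within max_steps, else "unknown"."""
    for _ in range(max_steps):
        done, state = step(state)
        if done:
            return "halted"
    return "unknown"  # might halt later, might run forever - we cannot tell

# The do while (1==1) loop: no step is ever the last one.
infinite_loop = lambda state: (False, state)

# A loop that counts to 3 and stops.
counter = lambda n: (n >= 3, n + 1)
```

However large the step limit, `run_with_step_limit` answers "unknown" for `infinite_loop`, never "this will run forever" — which is the asymmetry the halting problem describes.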

Most of our computers are based on a sequential computer architecture (like a Turing machine) which does one instruction at a time, in a sequence defined by the algorithm, which is provided by someone else: the programmer. To make things faster, today we often build a computer out of several sequential computers which share the workload (or several thousand, for a supercomputer).
However, biological brains need to be able to react to unforeseen situations, and so are able to "write" and "modify" their own programs (a similar concept, called "self-modifying code", is frowned upon in sequential computers as being dangerously unmaintainable). Nerve cells seem to be able to correlate their inputs and outputs to the overall outcome: positive outcomes are reconfirmed and strengthened, while negative outcomes cause the current decisions to be devalued. Each cell in the brain must be able to do calculations in parallel, taking inputs from other cells and feeding answers to yet other cells.
Researchers are actively investigating how circuits in the brain work, but a computer architecture called "neural networks (http://en.wikipedia.org/wiki/Neural_network)" is modeled on some ideas of how neural circuits might work in the brain. In this environment it is more useful to talk about "training" rather than "programming", since the network learns the "right" answers iteratively and interactively, not based on a programmer who knows beforehand how to find the "right" answer.
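As a toy illustration of "training" rather than "programming" (this is the classic perceptron rule, a drastic simplification of anything the brain does, and the learning rate and epoch count below are arbitrary choices): a single artificial neuron can learn the AND function by adjusting its weights whenever its output disagrees with the desired outcome.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single neuron from (inputs, target) examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # 0 when right; corrective when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND function as training data: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Nobody tells the neuron the rule for AND; it converges on suitable weights from the examples alone, which is the sense in which a network is trained rather than programmed.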
[Of course, if you look inside the nerve cell, at the level of the DNA, you will find another layer of programmed machinery churning out sequences of amino acids that allow the cell to fulfill its function in the brain. This is similar to the architecture of some computers (or the Java language), where there is "microcode" inside the computer: another program telling the computer how to carry out the steps of the computer's instructions.]