only a living being could possibly solve... using imagination
If that computer is programmed with binary logic, then it must use mathematics to solve the problem.
If that computer is programmed with binary logic, then it must use mathematics to solve the problem. Mathematics however cannot solve problems dealing with the metaphysics of reality. Thus only a living being could possibly solve 2+2=5 using imagination, a property of the mind which allows one to bend the physical laws of sentience.
'to bend the physical laws of sentience' sounds like pseudo-profound BS - a Chopra-esque deepity - unless, of course, you can explain what these physical laws are (why not show the maths while you're at it), and how they can be 'bent'
... dead molecules ...
... I meant that imagination allows one to resolve the ubiquity of consciousness using metaphysical freedom.
A computer based on algorithms has no imagination, no emotions, and no consciousness.
The pseudo-profound BS, in my humble opinion, is that artificial intelligence could ever create a conscious being from dead molecules.
... these can be virtualised on a digital algorithmic ...
One will never get wet from a simulation of rain. To get wet you still need real rain.
... if it were possible to hack into someone's nervous system, it would be possible to accurately simulate any experience (including wetness).
A few thoughts about AI:
Could an AI computer feel sorrow or regret when faced with an error of its own making?
Could an AI computer fall in love with another AI computer without being instructed to do so?
Could an AI computer appreciate art to the extent that it could distinguish between beauty and ugliness, also without instruction?
And lastly, could an AI computer enjoy the activity of "playing" even though the "playing" had no specific profit or progress as its goal?
I frankly don't know the answers to these questions myself, and I would hazard a guess that it's highly unlikely they will ever be answered with any degree of certainty.
Could an AI computer appreciate art to the extent that it could distinguish between beauty and ugliness, also without instruction?
Could an AI computer appreciate art?
Could an AI computer feel sorrow or regret when faced with an error of its own making?
could an AI computer enjoy the activity of "playing" even though the "playing" had no specific profit or progress as its goal?
A rich and welcoming play environment produces more innovative and interested adults who are self-motivated to learn new things. We need to approach this with AIs that are continually self-motivated to learn.
Current AI learning algorithms (like backpropagation) do change behavior, but don't rely on emotional states.
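To make that point concrete, here is a minimal sketch in plain Python (a toy single-weight example of my own, so a degenerate case of backpropagation rather than a full network): learning is nothing more than nudging a weight against a loss gradient, and no quantity in the loop could be read as an emotional state.

```python
# Toy illustration: error-driven learning by gradient descent.
# A single weight w is nudged so that w * x approximates the target 2 * x.
# The update is driven purely by a numeric error signal; there is no
# variable anywhere that represents sorrow, regret, or enjoyment.

data = [(x, 2.0 * x) for x in range(1, 6)]   # inputs paired with targets y = 2x

w = 0.0      # initial weight, deliberately wrong
lr = 0.01    # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x              # forward pass
        error = pred - y          # how wrong the prediction is
        grad = 2 * error * x      # gradient of the squared loss w.r.t. w
        w -= lr * grad            # behaviour changes; no emotion involved

print(f"learned weight: {w:.3f}")  # converges towards 2.0
```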
I believe John Searle said something like: One will never get wet from a simulation of rain. To get wet you still need real rain.
my belief is that: 1. A Turing Machine will never generate consciousness, because it is an insufficient physical state/structure for such a task. Similar to the "China Nation" thought experiment.
A Turing Machine will never generate consciousness, because it is an insufficient physical state/structure for such a task.
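For reference, this is roughly what a Turing-machine-style computation amounts to; a minimal sketch in plain Python (a toy machine I made up, not anything from the thread) that flips the bits on its tape. The whole "behaviour" is a lookup over (state, symbol) pairs, which is exactly the kind of purely syntactic shuffling the China Nation argument says cannot by itself produce consciousness.

```python
# Minimal Turing machine sketch: flips every bit on the tape, then halts.
# transition table: (state, symbol) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # blank cell: stop
}

def run(tape_string):
    tape = list(tape_string) + ["_"]    # "_" marks the blank end of the tape
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write              # pure symbol manipulation, nothing more
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))   # prints "01001"
```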
Join together 100 billion neurons—with 100 trillion connections—and you have yourself a human brain