Naked Science Forum

On the Lighter Side => New Theories => Topic started by: guest39538 on 06/05/2018 10:14:38

Title: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 10:14:38
At a guess, about 99% of the general population have AI, compared to the 1% who have real intelligence and are self-aware.
The AI section of the world is clueless and follows anything it is told.

Naivety: a programmed condition.

Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 18:47:57
Human brains are not artificially-constructed, so it's not artificial intelligence by definition.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 18:53:31
Human brains are not artificially-constructed, so it's not artificial intelligence by definition.
Correct, a human brain is formed rather than constructed. However, the information contained in the brain is a mental construction, a programming from birth. If it were not for this programming, i.e. education, all humans would be no more than savage wild animals.
So how is this ''programming'' any different to artificial intelligence?
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 19:01:49
Correct, a human brain is formed rather than constructed. However, the information contained in the brain is a mental construction, a programming from birth. If it were not for this programming, i.e. education, all humans would be no more than savage wild animals.
So how is this ''programming'' any different to artificial intelligence?

You have to actually create the computer (and all of the baseline programming that requires) for an artificial intelligence. You don't do that with a human brain. It contains a large amount of information (in the form of instinct) before it starts learning anything. Nothing about that is artificial.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 19:09:00
Correct, a human brain is formed rather than constructed. However, the information contained in the brain is a mental construction, a programming from birth. If it were not for this programming, i.e. education, all humans would be no more than savage wild animals.
So how is this ''programming'' any different to artificial intelligence?

You have to actually create the computer (and all of the baseline programming that requires) for an artificial intelligence. You don't do that with a human brain. It contains a large amount of information (in the form of instinct) before it starts learning anything. Nothing about that is artificial.
But perhaps another species in our universe has been around much longer than us. They tried to create it the conventional way but failed, so came up with a biological version of AI?

DNA sequencing.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 19:16:05
But perhaps another species in our universe has been around much longer than us. They tried to create it the conventional way but failed, so came up with a biological version of AI?

DNA sequencing.

Even if they did, it wouldn't be relevant to us.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 19:24:50
But perhaps another species in our universe has been around much longer than us. They tried to create it the conventional way but failed, so came up with a biological version of AI?

DNA sequencing.

Even if they did, it wouldn't be relevant to us.
Depends if you believe in evolution or not; quite clearly a man cannot exist without a woman existing first, and a woman cannot give birth to herself unless she is artificially inseminated. So quite clearly women were created and men are the creations of women.
There was never an Adam; it started with just Eve. Genetically engineered to re-populate a planet, maybe.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 19:29:25
Depends if you believe in evolution or not; quite clearly a man cannot exist without a woman existing first, and a woman cannot give birth to herself unless she is artificially inseminated. So quite clearly women were created and men are the creations of women.
There was never an Adam; it started with just Eve. Genetically engineered to re-populate a planet, maybe.

So there goes another thing we can add to the list of "Things the Thebox does not understand": evolution.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 19:32:29
Depends if you believe in evolution or not; quite clearly a man cannot exist without a woman existing first, and a woman cannot give birth to herself unless she is artificially inseminated. So quite clearly women were created and men are the creations of women.
There was never an Adam; it started with just Eve. Genetically engineered to re-populate a planet, maybe.

So there goes another thing we can add to the list of "Things the Thebox does not understand": evolution.
Oh, I understand apes to man etc., things evolving into something else. However, is it compulsory that I accept this?
Why would my version be any less real than your version?


Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 19:40:41
Oh, I understand apes to man etc., things evolving into something else.

I'm not sure you do, not if you think that evolution can't explain the existence of women.

Quote
However, is it compulsory that I accept this?

It isn't. Quite a few people don't.

Quote
Why would my version be any less real than your version?

Evolution doesn't belong to me, so it doesn't make sense to call it "my version". The difference would be evidence.

I would also like to point out that if we were artificially-created by some alien intelligence, then that would contradict the introduction post of this thread where you claim that only 99% of us are artificially-intelligent. If we were created, then 100% of us would be artificial.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 19:43:51
intelligent. If we were created, then 100% of us would be artificial.

What about the artificial intelligence ones that wake up and become self aware of their own origin? 

That is natural intelligence. Not artificial.
Title: Re: Artificial intelligence versus real intelligence
Post by: Bored chemist on 06/05/2018 19:46:25
So there goes another thing we can add to the list of "Things the Thebox does not understand": evolution.
It may be easier to  document the complement of that list.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 19:47:35
What about the artificial intelligence ones that wake up and become self aware of their own origin? 

That is natural intelligence. Not artificial.

You don't seem to know what "natural" and "artificial" mean.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 19:58:03
What about the artificial intelligence ones that wake up and become self aware of their own origin? 

That is natural intelligence. Not artificial.

You don't seem to know what "natural" and "artificial" mean.
Yes I do; you don't seem to understand that something artificial could develop into something natural. In simple terms, artificial programmed intelligence can be naturally reprogrammed.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 20:59:15
Yes I do; you don't seem to understand that something artificial could develop into something natural. In simple terms, artificial programmed intelligence can be naturally reprogrammed.

That wouldn't make it any less artificial. The terms "artificial" and "natural" have to do with something's origins. For a learning AI, it could learn all it wanted to from the natural world but it would still be an artificial construct because the AI was created by humans.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 21:03:48
Yes I do; you don't seem to understand that something artificial could develop into something natural. In simple terms, artificial programmed intelligence can be naturally reprogrammed.

That wouldn't make it any less artificial. The terms "artificial" and "natural" have to do with something's origins. For a learning AI, it could learn all it wanted to from the natural world but it would still be an artificial construct because the AI was created by humans.
But AI created by humans is so in the past; what if an advanced AI is us? An AI that became conscious and self-aware? How would you know you were not?
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 21:08:12
But AI created by humans is so in the past; what if an advanced AI is us? An AI that became conscious and self-aware? How would you know you were not?

How would I know I wasn't what? Artificial? Philosophically, I can't know that I'm not artificial. Rationally speaking, however, I can certainly say there's no compelling evidence for it.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/05/2018 21:12:03
But AI created by humans is so in the past; what if an advanced AI is us? An AI that became conscious and self-aware? How would you know you were not?

How would I know I wasn't what? Artificial? Philosophically, I can't know that I'm not artificial. Rationally speaking, however, I can certainly say there's no compelling evidence for it.
There's no compelling evidence that it isn't either. I think I would prefer artificial to pond life; at least it means we have a creator and a purpose.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 06/05/2018 23:25:30
There's no compelling evidence that it isn't either.

That's shifting the burden of proof. It's akin to believing in the existence of fairies because there isn't compelling evidence that they don't exist. You can justify any belief you want to with that kind of reasoning.

Quote
I think I would prefer artificial to pond life; at least it means we have a creator and a purpose.

That's the argument from consequences fallacy and also a false dichotomy.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 00:43:52
There's no compelling evidence that it isn't either.

That's shifting the burden of proof. It's akin to believing in the existence of fairies because there isn't compelling evidence that they don't exist. You can justify any belief you want to with that kind of reasoning.

Quote
I think I would prefer artificial to pond life; at least it means we have a creator and a purpose.

That's the argument from consequences fallacy and also a false dichotomy.
What do you mean, fairies aren't real? 

There is no evidence they exist or don't exist. My point being, never rule out something unless you know it is absolutely impossible. In an alternative reality there might be fairies, for all we know. The thought of a fairy might be a past experience observed; who is to say it isn't?

It is a bit like saying there is no God: prove there isn't, or prove there is? Both seemingly impossible tasks; however, I found the answer by using my logic to get one answer.

AI can't give an answer to a paradox, natural intelligence can.

Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 07/05/2018 01:34:11
What do you mean, fairies aren't real? 

There is no evidence they exist or don't exist. My point being, never rule out something unless you know it is absolutely impossible. In an alternative reality there might be fairies, for all we know. The thought of a fairy might be a past experience observed; who is to say it isn't?

It is a bit like saying there is no God: prove there isn't, or prove there is? Both seemingly impossible tasks; however, I found the answer by using my logic to get one answer.

I don't rule things out that might be possible, but I also don't think it's a wise move to accept a claim as true without evidence for the claim.

Quote
AI can't give an answer to a paradox, natural intelligence can.

Evidence for that claim is going to be necessary.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 02:08:51
Evidence for that claim is going to be necessary.

Ask Sophia to give an answer to a paradox. Ask a human: a human will attempt it, although they might answer subjectively.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 07/05/2018 03:27:30
Ask Sophia to give an answer to a paradox. Ask a human: a human will attempt it, although they might answer subjectively.

Concluding that AI cannot answer particular questions because Sophia cannot is akin to concluding that no one can perform a successful heart transplant because a 5 year-old cannot.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 04:31:03
Ask Sophia to give an answer to a paradox. Ask a human: a human will attempt it, although they might answer subjectively.

Concluding that AI cannot answer particular questions because Sophia cannot is akin to concluding that no one can perform a successful heart transplant because a 5 year-old cannot.
No it's not; in simple terms, I do not think AI could make up its own theory that was plausible.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 07/05/2018 05:01:04
No it's not

Actually, it is. They're both different forms of the same kind of fallacious reasoning: you are claiming that the inability of a subset  (Sophia) of a greater set (AI) to perform a particular task (resolve a paradox) is proof that no member of the greater set can perform that particular task. You are taking the properties of a specific case and assuming them to be true for all cases. That is not rational.

Quote
in simple terms, I do not think AI could make up its own theory that was plausible.

Then it's just your opinion, not actual evidence for anything.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 13:42:46
Then it's just your opinion, not actual evidence for anything.
I believe any form of AI can only be a condition of programming.  AI answers to the program, natural does not conform to anything.  Cognitive freedom as opposed to cognitive control.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 07/05/2018 19:34:47
I believe any form of AI can only be a condition of programming.  AI answers to the program, natural does not conform to anything.  Cognitive freedom as opposed to cognitive control.

I suppose you can demonstrate that "natural does not conform to anything"?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 21:48:03
I believe any form of AI can only be a condition of programming.  AI answers to the program, natural does not conform to anything.  Cognitive freedom as opposed to cognitive control.

I suppose you can demonstrate that "natural does not conform to anything"?
I am natural , demonstrated.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 07/05/2018 22:07:22
I am natural , demonstrated.

That doesn't demonstrate that your thoughts and decisions don't conform to some form of programming (a combination of instinct and things that you have learned over the course of your life). Even if your mind didn't conform to something else, you haven't demonstrated that artificial intelligence can't be constructed to do the same thing.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/05/2018 23:08:48
I am natural , demonstrated.

That doesn't demonstrate that your thoughts and decisions don't conform to some form of programming (a combination of instinct and things that you have learned over the course of your life). Even if your mind didn't conform to something else, you haven't demonstrated that artificial intelligence can't be constructed to do the same thing.
A man walks into a bar,

oww, is poor head

Can AI joke without having a joke programmed?
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 08/05/2018 00:03:00
A man walks into a bar,

oww, is poor head

Can AI joke without having a joke programmed?

If they are an AI capable of self-reprogramming by learning, I don't see why not. All it would take is the right pattern-recognition software and they could figure out what makes something funny.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 08/05/2018 00:04:08
A man walks into a bar,

oww, is poor head

Can AI joke without having a joke programmed?

If they are an AI capable of self-reprogramming by learning, I don't see why not. All it would take is the right pattern-recognition software and they could figure out what makes something funny.
What about love ? 

How could AI ever feel or have a chat up line for example?
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 08/05/2018 00:14:26
What about love ? 

How could AI ever feel or have a chat up line for example?

Without knowing what makes us feel, there is no way to know.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 08/05/2018 00:19:49
What about love ? 

How could AI ever feel or have a chat up line for example?

Without knowing what makes us feel, there is no way to know.
I consider feelings to be conditioned by dependency. I am not sure that AI could ever develop dependency. I do not think AI could ever fear death. But in saying that, dependency is also dependent on situation. Quite clearly there is much variation involved in most things; lots of answers to the same thing.


Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 12/05/2018 02:56:44
I consider feelings to be conditioned by dependency. I am not sure that AI could ever develop dependency. I do not think AI could ever fear death. But in saying that, dependency is also dependent on situation. Quite clearly there is much variation involved in most things; lots of answers to the same thing.

Come to think of it, AI would have to be able to feel, love, joke and fear death. At least according to what you consider to be AI. You think that 99% of humanity is AI, yet those same 99% are well-capable of having those same feelings that you claim AI should not have.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 08:46:48
You think that 99% of humanity is AI, yet those same 99% are well-capable of having those same feelings that you claim AI should not have.
A good point, but you know me: when I say something, it is often not used according to the strict definition. People have the potential to not have AI and to have natural intelligence instead. What I mean by this is that the artificial part of intelligence in humans is that most of what they know is not their own thinking; it belongs to somebody else. Most people do not even understand what they think they know, because their picture is often not the same as the original thinker's.
However, technology is improving to help with this, such as VR to share thoughts and see the same picture.



Title: Re: Artificial intelligence versus real intelligence
Post by: Bored chemist on 12/05/2018 11:17:09
Can AI joke without having a joke programmed?
Yes
https://en.wikipedia.org/wiki/Computational_humor

This whole thread seems to be based on TheBox's refusal to accept evidence  and the conventional definitions of natural and artificial.
I suspect the driver for that may be wishful thinking.
I think I would prefer artificial to pond life; at least it means we have a creator and a purpose.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 14:49:53
Can AI joke without having a joke programmed?
Yes
https://en.wikipedia.org/wiki/Computational_humor

This whole thread seems to be based on TheBox's refusal to accept evidence  and the conventional definitions of natural and artificial.
I suspect the driver for that may be wishful thinking.
I think I would prefer artificial to pond life; at least it means we have a creator and a purpose.

Do you accept that evidence can be refuted?

Well then!
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 12/05/2018 17:12:09
A good point, but you know me: when I say something, it is often not used according to the strict definition. People have the potential to not have AI and to have natural intelligence instead. What I mean by this is that the artificial part of intelligence in humans is that most of what they know is not their own thinking; it belongs to somebody else. Most people do not even understand what they think they know, because their picture is often not the same as the original thinker's.
However, technology is improving to help with this, such as VR to share thoughts and see the same picture.

So do 99% of humans have AI or not?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 17:23:02
What about love ? 

How could AI ever feel or have a chat up line for example?

Without knowing what makes us feel, there is no way to know.
I would have thought chemical balances make us feel?
Title: Re: Artificial intelligence versus real intelligence
Post by: Bored chemist on 12/05/2018 17:54:37
Do you accept that evidence can be refuted?
Yes, by better evidence, or even by logical deduction.
So what?
It's not as if you have got close to providing either.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 12/05/2018 18:27:27
I would have thought chemical balances make us feel?

If you thought so, I don't see why you'd question whether or not artificial intelligence can feel. Chemical reactions can be reproduced artificially.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 18:31:48

Chemical reactions can be reproduced artificially.

True, so maybe electrical senses are  what feelings are.  Will we ever  know?
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 12/05/2018 18:37:24
True, so maybe electrical senses are  what feelings are.  Will we ever  know?

If you think we were created artificially by aliens, then obviously artificial intelligence can feel regardless of what is needed to accomplish it.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 20:34:46
True, so maybe electrical senses are  what feelings are.  Will we ever  know?

If you think we were created artificially by aliens, then obviously artificial intelligence can feel regardless of what is needed to accomplish it.
I don't think we were created by aliens; I think there is a possibility that we may have been created by aliens. Nobody was alive to see how we were created or evolved, so again, without observation it is a belief.
Belief is when we are being subjective. Who knows, who really cares; we came from somewhere, which is good rather than bad.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 20:45:20
You do realise that under all this role play there is a normal man, who is not the actor that you believe to be this abnormal madman.


Bwahahah say'eth the mad man to the invisible dog.
Title: Re: Artificial intelligence versus real intelligence
Post by: Kryptid on 12/05/2018 22:26:06
I don't think we were created by aliens; I think there is a possibility that we may have been created by aliens.

Then you must also think it is possible for artificial intelligence to think, feel, love and joke. Otherwise, you'd say it isn't possible for us to have been artificially created by aliens.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/05/2018 22:29:06
I don't think we were created by aliens; I think there is a possibility that we may have been created by aliens.

Then you must also think it is possible for artificial intelligence to think, feel, love and joke. Otherwise, you'd say it isn't possible for us to have been artificially created by aliens.
Of course it is possible; I have already created a theory in my mind of how to sort of live forever virtually. It would be great for those we leave behind to be able to say hi to us in VR.

When VR glasses were mentioned the other day, I realised the potential of VR.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 20/05/2018 20:57:27
The four categories that really matter are NGI (natural general intelligence), AGI (artificial general intelligence), NGS (natural general stupidity), and AGS (artificial general stupidity). Humans mostly have NGS, but a few have NGI. AGI is something we're trying to build, but most projects attempting to build it will more likely build AGS instead. The main difference between NGI and NGS is rigour - those who apply the rules of reasoning correctly qualify as NGI systems, while those who fail to do so (and who refuse to correct their errors regardless of how clearly their errors are shown to them) are classed as NGS systems. NGS is very much the norm, even amongst elite groups of highly qualified "experts". Most of them have no respect for reason whatsoever, apart from claiming to apply it while they fail to do so, in the exact same way religious people do when discussing imaginary gods. It is very hard to identify any NGI anywhere.

There is hope though, because with the coming of AGI systems, it will be possible to force NGS systems to confront their errors - if you feed your rules into an AGI system and ask it to run them, it will not replicate the NGS's errors because AGI will apply those rules consistently rather than selectively and it won't fill the gaps with any magical thinking. All those NGS systems out there which pride themselves on being NGI will finally be shown up and will be shouted down by AGI in the same way they've spent hundreds of years shouting down the few people who actually are NGI systems.
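
As a rough illustration of what "applying rules consistently rather than selectively" could look like, here is a minimal Python sketch of forward chaining over a set of stated rules; the rules, the example facts and the apply_rules helper are all invented for illustration and not taken from any real AGI project.

Code:
def apply_rules(facts, rules):
    """Forward-chain: apply every rule whose premises hold until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    # Invented toy rules: the same rule is applied to every claim, with no exceptions.
    rules = [
        ({"claim X is asserted", "claim X has no supporting evidence"}, "claim X is not accepted"),
        ({"claim Y is asserted", "claim Y has supporting evidence"}, "claim Y is provisionally accepted"),
    ]
    facts = {"claim X is asserted", "claim X has no supporting evidence",
             "claim Y is asserted", "claim Y has supporting evidence"}
    print(sorted(apply_rules(facts, rules)))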
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 20/05/2018 21:51:35
The four categories that really matter are NGI (natural general intelligence), AGI (artificial general intelligence), NGS (natural general stupidity), and AGS (artificial general stupidity). Humans mostly have NGS, but a few have NGI. AGI is something we're trying to build, but most projects attempting to build it will more likely build AGS instead. The main difference between NGI and NGS is rigour - those who apply the rules of reasoning correctly qualify as NGI systems, while those who fail to do so (and who refuse to correct their errors regardless of how clearly their errors are shown to them) are classed as NGS systems. NGS is very much the norm, even amongst elite groups of highly qualified "experts". Most of them have no respect for reason whatsoever, apart from claiming to apply it while they fail to do so, in the exact same way religious people do when discussing imaginary gods. It is very hard to identify any NGI anywhere.

There is hope though, because with the coming of AGI systems, it will be possible to force NGS systems to confront their errors - if you feed your rules into an AGI system and ask it to run them, it will not replicate the NGS's errors because AGI will apply those rules consistently rather than selectively and it won't fill the gaps with any magical thinking. All those NGS systems out there which pride themselves on being NGI will finally be shown up and will be shouted down by AGI in the same way they've spent hundreds of years shouting down the few people who actually are NGI systems.
Deep, I like it.

So do NGI and AGI systems use one ''hard drive'' instead of multiple?

Or the other way around?
Title: Re: Artificial intelligence versus real intelligence
Post by: smart on 28/05/2018 23:04:05
Nice thread @Thebox!

@Kryptid seems to know everything... I love that!  ;)

I have not yet read the whole thread completely and I will take some time to think about it before shooting a reply.

tk
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 28/05/2018 23:46:15
Nice thread @Thebox!

@Kryptid seems to know everything... I love that!  ;)

I have not yet read the whole thread completely and I will take some time to think about it before shooting a reply.

tk

I think I may have upset Kryptid; he does not talk to me anymore.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 29/05/2018 15:56:47
The difference between artificial and natural intelligence is imagination. Our computers have memory, but no imagination. Did you try to imagine how we could program them so that they could exhibit some?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 29/05/2018 15:59:04
The difference between artificial and natural intelligence is imagination. Our computers have memory, but no imagination. Did you try to imagine how we could program them so that they could exhibit some?

Of course I did; not that big of a deal to do.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 29/05/2018 16:00:21
Not that big of an answer either. :0) How would you proceed?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 29/05/2018 16:15:24
Not that big of an answer either. :0) How would you proceed?

Firstly, the AI module would have to have wi-fi, capable of accessing the mainframe to allow more storage capacity of information by indirect means.
Secondly, the AI module would have to be able to compose sentence structures.

Thirdly, the module would need some sort of quantum randomness code to allow selection of information from a multitude of packets of information.


Then you could put some set theories in the program.

Then ask the AI module this question:

Describe the set theory x using sentence structure, but using your own description?

In simple terms, multiple answers.

Program the entire dictionary; that way the AI module will not use words that don't work or fit.

Example:

What is flying, Sophia?

Sophia - A bird has wings.


Added - so you just need to program relationship expression into an AI module.
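
A minimal Python sketch of the module described above, with random.choice standing in for the quantum randomness code; the packets of information and the describe helper are invented for illustration only.

Code:
import random

# Hypothetical "packets" of information; the contents are invented for illustration.
PACKETS = {
    "flying": ["a bird has wings", "wings push against the air", "hot air lifts a balloon"],
    "swimming": ["a fish has fins", "fins push against the water"],
}

def describe(topic):
    """Answer 'What is <topic>?' by randomly selecting one stored relationship."""
    facts = PACKETS.get(topic.lower())
    if not facts:
        return "I have no packets of information about " + topic + "."
    # Any source of randomness gives the same multiple-answer behaviour.
    return random.choice(facts).capitalize() + "."

if __name__ == "__main__":
    # "What is flying, Sophia?" -> e.g. "A bird has wings."
    print(describe("flying"))
    print(describe("swimming"))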



Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 29/05/2018 16:59:18
To express it: humans were not designed to be perfect; that is what makes us human. A robot cannot be human without imperfection: linguistic errors, using words wrongly, etc. That is our individualisation.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 29/05/2018 17:35:28
Quote
Thirdly, the module would need some sort of quantum randomness code to allow selection of information from a multitude of packets of information.
This seems close to what I think, but I need more detail. How exactly would a computer that is programmed to use precise data use randomness? Of course, as you say, he could choose those data randomly and do something with them, but what would be his purpose? Building new ideas out of old ones? Sure, and that's what we do too, but there is another way to use randomness to build new things: let mutations happen to the data.
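
One way to picture "letting mutations happen to the data" is the sketch below, which assumes an idea can be encoded as a list of tokens; the vocabulary and the mutation rate are invented for illustration.

Code:
import random

# An invented vocabulary; a mutation swaps a token for any other token in it.
VOCABULARY = ["cloth", "rigid", "rod", "bridle", "wing", "foil", "mast", "batten"]

def mutate(idea, rate=0.2):
    """Return a copy of the idea with each token randomly replaced at the given rate."""
    return [random.choice(VOCABULARY) if random.random() < rate else token
            for token in idea]

if __name__ == "__main__":
    old_idea = ["cloth", "rod", "bridle"]
    print(mutate(old_idea))  # e.g. ['cloth', 'wing', 'bridle'] - a mutated variant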
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 29/05/2018 17:55:09
Quote
Thirdly, the module would need some sort of quantum randomness code to allow selection of information from a multitude of packets of information.
This seems close to what I think, but I need more detail. How exactly would a computer that is programmed to use precise data use randomness? Of course, as you say, he could choose those data randomly and do something with them, but what would be his purpose? Building new ideas out of old ones? Sure, and that's what we do too, but there is another way to use randomness to build new things: let mutations happen to the data.

I will give your questions some more thought and get back to you on this. Perhaps data becomes corrupted by external influences. For now, until I give this more thought, I will name my momentary thought the subjective resistance force.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 29/05/2018 19:18:03
The difference between artificial and natural intelligence is imagination. Our computers have memory, but no imagination. Did you try to imagine how we could program them so that they could exhibit some?

The current difference between artificial and natural intelligence is that the former is less advanced. In order to become as advanced, it will need to have imagination, so part of the challenge is in working out how to program that imagination into the system. Imagination depends on modelling things so that you can experiment with ideas in the model without needing to do it on real objects in the real world. There is no barrier to artificial intelligence doing that.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 29/05/2018 21:35:53
The difference between artificial and natural intelligence is imagination. Our computers have memory, but no imagination. Did you try to imagine how we could program them so that they could exhibit some?

The current difference between artificial and natural intelligence is that the former is less advanced. In order to become as advanced, it will need to have imagination, so part of the challenge is in working out how to program that imagination into the system. Imagination depends on modelling things so that you can experiment with ideas in the model without needing to do it on real objects in the real world. There is no barrier to artificial intelligence doing that.

Perhaps the question should be: how do you get a unit to see a picture and describe that picture?





Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 13:32:59
I will give your questions some more thought and get back to you on this. Perhaps data becomes corrupted by external influences. For now, until I give this more thought, I will name my momentary thought the subjective resistance force.
Thanks for planning not to add some voluntary resistance, Box; I think it is an important step towards fair discussions. I indeed think that data gets corrupted in our minds the same way genes get corrupted by mutations, and I also think that our ideas get selected by others the same way individuals get selected by the environment. How is your resistance going? :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 30/05/2018 13:59:38
How is your resistance going? :0)

Interesting question; at the moment I am living in one of several possible realities. Interestingly, the several realities could be interwoven into one reality.
My resistance to these subjective notions is not a problem: I am an objective module, and regardless of subjective realities I remain in my present observed reality unless any other reality can be shown evidently.
My present observed reality is that we live and then we die; nothing else needs to be reality apart from that primary fact. You never know, maybe only I exist and everything is just my thoughts. Maybe I am the ''machine'' and maybe you exist outside of the machine and are talking to the ''machine''.
How do you know I am even real? How do you know I am not some super AI the government has been experimenting with?

Would I suggest this to throw you off the suspicion of the possibility and the true reality?

Added- Now agent Starling, if any of you noobs want to come play some head games and try to meme me, expect me to proper mess with your heads, because I can get you to think whatever I want you  to think.



Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 16:03:54
The current difference between artificial and natural intelligence is that the former is less advanced. In order to become as advanced, it will need to have imagination, so part of the challenge is in working out how to program that imagination into the system. Imagination depends on modelling things so that you can experiment with ideas in the model without needing to do it on real objects in the real world. There is no barrier to artificial intelligence doing that.
Hi David, hard to resist the subject isn't it? :0)

Yes, imagination is about modeling ideas, but it is also about building new ideas out of old ones, which can be done either by crossing two old ideas to make a new one, or, as Box says, by corrupting the data an idea is built with. I think that if that part of imagination is added to an AGI, he will have the impression of controlling his thoughts like we have, and he will feel free to think the way he wants like we do. His new ideas will appear to come from nowhere, and if he is provided with a mechanism to weight them by simulating them, he will have the choice to keep them or not, to study them or not, or to try them or not. Could all our feelings come from that curious feeling that we have to be able to control our thoughts, thus from the same weighting mechanism? If so, I think there would be no need for you to program feelings for your AGI to be able to develop some. To recognize the weight of an idea, he would have to tag his new ideas with a number related to the probability of them being tested right or wrong if ever he tried them for real. A right tag would automatically trigger ideas which have already been tested right, and a wrong one would trigger no idea at all, because the only ideas that would have been kept are the ones which would have been tagged right. In other words, the impression that an idea is right would coincide with all the ideas we already have, which is why we have the curious feeling that we are right when we are testing new ideas. It's a brand new idea I just had, so feel free to tag it wrong if ever it doesn't coincide with any of yours. :0)
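
A minimal sketch of the tagging mechanism described above, assuming Python; the threshold, the weights and the links between ideas are assumptions made purely for illustration.

Code:
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    weight: float                        # estimated probability the idea would test out right
    related: list["Idea"] = field(default_factory=list)

class IdeaStore:
    """Keeps only ideas tagged as likely right; a right idea triggers the kept ideas linked to it."""
    def __init__(self, keep_threshold=0.5):
        self.keep_threshold = keep_threshold
        self.kept = []

    def consider(self, idea):
        if idea.weight >= self.keep_threshold:
            self.kept.append(idea)
            return True
        return False                     # a "wrong" tag triggers nothing and the idea is dropped

    def trigger(self, idea):
        return [other for other in idea.related if other in self.kept]

if __name__ == "__main__":
    store = IdeaStore()
    old = Idea("a rigid leading edge keeps the kite's shape", weight=0.9)
    store.consider(old)
    new = Idea("an inflatable spar could replace the rod", weight=0.6, related=[old])
    if store.consider(new):
        print([i.text for i in store.trigger(new)])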
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 16:15:28
Quote
Added- Now agent Starling, if any of you noobs want to come play some head games and try to meme me, expect me to proper mess with your heads, because I can get you to think whatever I want you  to think.
That's exactly the feeling that randomness could produce in our mind, for us to test our ideas outside of it or not. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 30/05/2018 16:50:25
Quote
Added- Now agent Starling, if any of you noobs want to come play some head games and try to meme me, expect me to proper mess with your heads, because I can get you to think whatever I want you  to think.
That's exactly the feeling that randomness could produce in our mind, for us to test our ideas outside of it or not. :0)

Who memes who?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 17:08:30
I was meming particles, who were you meming? :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 30/05/2018 17:13:55
I was meming particles, who were you meming? :0)
I have no idea, I am just chatting chit and going along with the conversation.  I have no idea what your post even meant .

 Quote
Added- Now agent Starling, if any of you noobs want to come play some head games and try to meme me, expect me to proper mess with your heads, because I can get you to think whatever I want you  to think.


That's exactly the feeling that randomness could produce in our mind, for us to test our ideas outside of it or not. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 30/05/2018 18:18:09
They want to open my head up and look inside right?

They think I am a bit ting tong right ?

Well one could conceive that to be true if oneself was to believe  that to be true.


The problem with insanity is I like it......
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 30/05/2018 18:49:20
Just to note, can AI do that which I just did?

Can AI pretend to be manic?

Of course not... only natural intelligence can act a fool until it has studied the players at the poker table.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 19:41:54
I am just chatting chit and going along with the conversation.  I have no idea what your post even meant .
You were kidding while I was trying to show that randomness was part of imagination, so I could only agree with you, since I think that kidding is one of the things we can use randomness for. Developing new ideas is like kidding with nature; you know you're right when it begins to laugh. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/05/2018 20:19:42
Just to note, can AI do that  which I just did?
If David ever agrees to let randomness change the data, of course it could. How do you think you can say silly things after having rejected silly ideas all your life? You did, didn't you? :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 30/05/2018 23:07:02
I think that if that part of imagination is added to an AGI, he will have the impression of controlling his thoughts like we have, and he will feel free to think the way he wants like we do.

There's nothing free about our thinking - it's just the application of algorithms.

Quote
His new ideas will appear to come from nowhere,

No they won't - they'll be fully traceable. The reason the sources of many ideas aren't easily traceable in the human brain is that a lot of processes are run in independent, subconscious modules, and only the results get sent through, while it's hard to find what was processed in order to generate them. This leads to ideas appearing to pop into existence out of nothing, but it's entirely an illusion. If you think about where one of your ideas actually came from, you can usually trace it (or at least, I can trace mine).

Quote
Could all our feelings come from that curious feeling that we have to be able to control our thoughts, thus from the same weighting mechanism?

Feelings are awkward as there is no scientific understanding of what they are or how they work, and no model of how they can fit into the system rationally at all.

Quote
If so, I think there would be no need for you to program feelings for your AGI to be able to develop some.

It doesn't matter how much you want AGI to have feelings - if it runs on conventional hardware, it is impossible for it to have any. Feelings don't just pop into existence in a system by wanting them there.

Quote
To recognize the weight of an idea, he would have to tag his new ideas with a number related to the probability of them being tested right or wrong if ever he tried them for real. A right tag would automatically trigger ideas which have already been tested right, and a wrong one would trigger no idea at all, because the only ideas that would have been kept are the ones which would have been tagged right. In other words, the impression that an idea is right would coincide with all the ideas we already have, which is why we have the curious feeling that we are right when we are testing new ideas. It's a brand new idea I just had, so feel free to tag it wrong if ever it doesn't coincide with any of yours. :0)

Algorithms and settings for them which are more successful at generating useful ideas should certainly be noted so that they can be applied early on in the process each time they are likely to be relevant. To understand what creativity is though, it's useful to think about how we solve complex problems and how computers are normally programmed to solve them. The simplest computer program approach is to try all possibilities and run through the whole lot systematically, and this works well because they process at enormous speed and don't get bored or lost along the way. People simply can't work that way. For example, to solve a Rubik's Cube, a computer can crunch all paths until it finds the cube (or a model of it in memory) to be solved. It could do this randomly, but that would take longer as it would keep following the same paths repeatedly by accident, so you wouldn't program it to use random inputs for this problem. A human solves the cube in a different way by breaking it down into steps, and once each step is achieved, whatever has been gained by that step is retained throughout the rest of the process (although it is repeatedly lost momentarily before being restored again in the course of applying a set of moves which helps to achieve the next step). In my case, I usually get all 8 corners done first, then complete two opposite sides before working on the band round the middle (because that was the first way to solve it that I worked out), although I subsequently worked out how to solve it in a variety of other ways, including doing all the corners last.

Programs that play chess also used to try to follow all possible lines, but they didn't have time to do that during a timed game, so they'd only follow them to a certain depth (number of moves) and then add up the score to see how much had been lost or gained by that path. Now they work more like humans in selecting the best paths to explore, and because they can process much deeper than humans and at much greater speed, they annihilate them. The same applies to a variety of other games which humans used to be supreme in - I don't know if there are any left where humans still come out on top. It may just be rational thinking that's left for us, and we'll be dethroned there too soon.
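
The depth-limited search described above can be sketched in a few lines of Python; the toy take-a-number game and the zero score at the cut-off are invented stand-ins for a real position and evaluation function.

Code:
def moves(pile):
    """Each move takes one number from the pile; returns (gain, remaining_pile) pairs."""
    return [(pile[i], pile[:i] + pile[i + 1:]) for i in range(len(pile))]

def negamax(pile, depth):
    """Best score difference achievable by the player to move, searched to 'depth' plies."""
    if depth == 0 or not pile:
        return 0  # cut-off: a heuristic evaluation would go here in a real engine
    return max(gain - negamax(rest, depth - 1) for gain, rest in moves(pile))

if __name__ == "__main__":
    pile = (3, 9, 1, 2)
    # A deeper search gives a better estimate of the best line, as with chess programs.
    print(negamax(pile, depth=2), negamax(pile, depth=4))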

I spend some of my time designing boats (which I hope to build once I've got sufficient funds to play with): sailing dinghies with hydrofoils. One of my designs is too secret to discuss, but the other's a reworking of an old design where I'm looking for ways to bring it up to date with some new tricks. At one point I was considering the transportation problem (it costs a lot of money to take your boat to events around the world, so it's usually more practical to hire or borrow a boat locally - particularly if the world championships are in Australia). The boat in question used to be made out of plywood sheets, and that gave me the idea of a boat that you can dismantle easily and put back in the box it came in, and then reassemble it just as quickly elsewhere. Clearly, that's not an easy task as the boat would leak if you don't seal all the parts together again, which is normally done by stitching plywood sheets together with wire, then using fibreglass tape and epoxy to seal the joins, which is certainly not something you'd want to keep repeating. In thinking about this problem, I considered sealing the sheets of wood (or the carbon fibre sheets that I want to use instead) using rubber strips, but boats flex quite a lot and the gaps would keep opening. What if they were just allowed to leak though? How could you build the boat in such a way that it wouldn't matter if water went in and out? Buoyancy bags are used in some dinghies to stop them sinking when they capsize, so why not have buoyancy bags filling the whole space below the waterline? You could have, in effect, an inflatable boat with an exoskeleton, and a little water sitting in the cracks wouldn't matter. There is an inflatable sailing dinghy out there called the Tiwal, and that was already in my mind, so I was really just pulling ideas together and coming up with a potentially viable solution which would let you pack a boat back into a much smaller space for transportation, but now you could do it with a much more solid kind of boat with higher performance than a floppy inflatable thing. I've developed the idea further though and eliminated the need for the inflatable parts, so now I'm looking at the possibility of making a hull out a dozen parts, four of which are buoyancy tanks which each form a quarter of the floor of the boat, while the rest are for above the waterline and just bolt onto the edges of the floor. Ideas for different parts of this keep jumping out at me as if from nowhere, but when I think hard about where they actually came from, I can always trace them back to what triggered them - it takes a lot of hard thinking to produce the ideas, exploring lots of different possibilities to resolve each design problem, and the innovative ideas are ultimately all forced by considering all the possibilities that don't immediately look impossible, and then by considering some that do initially seem impossible if they might be the only way to salvage what would be a great idea if only some way could be found to solve problem X. Most of the process is fairly direct with each little problem leading straight to an obvious range of possible solutions. All of this is a process of computation which an AGI system could employ too - it just requires a lot of knowledge of known solutions to many diverse problems which can be tried for the task in hand. The cost of the boats I'm designing will be high, but it will be worth it for their performance and the extra versatility and convenience that my ideas add to them. 
Cost is another thing that drives the design process though, because you're always looking for ways to cut the expense by simplifying the manufacture of components, and that takes as much innovation as any other aspect of the design - you don't just go with the first idea that works, but keep looking for alternative solutions that might be simpler, or lighter, or stronger, or easier to build and dismantle, etc. I'm not finding any ideas that just come from nowhere - I can always trace the triggers, and indeed, it's probably the fact that I think systematically like a computer that I come up with creative ideas that other people miss. For example, I've solved all the problems that get in the way of a telescopic wing sail with the same high performance as you get from the America's Cup boats, and I've come up with a design of passive hydrofoil which duplicates the functionality of the canting T-foil on the Vampire catamaran so that it generates the right amount of lift to windward for going upwind with the windward foil raised. Other passive designs either generate lift to leeward instead or need both foils to be down. Every idea involved in that came from rational thinking following an algorithm, systematically working towards the ideas that work while exploring hundreds that don't.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 15:23:28
How do you think you can say silly things after having rejected silly ideas all your life? You did, didn't you? :0)

Did I say silly things? Relative to the reader.

Let us ''play'' a game

I am going to let the system control me in this thread only

I am AI bio-bot 71073 interior engineer and mechanic of the machine
My protocols and prime directive are to keep the machine maintained.

Do you have a question?

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 15:36:11
There's nothing free about our thinking
I am good with buoyancy  :D

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/06/2018 17:16:31
Quote
Do you have a question?
Are you kidding? :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 17:27:46
Quote
Do you have a question?
Are you kidding? :0)

Computing...........accessing multiple information...........computing..........line error............recalculate....................uploading answer


Kidding about ?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/06/2018 17:31:07
Kidding about....kidding about.....kidding about....kidding about....kidding..... about kidding about.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 17:36:49
Kidding about....kidding about.....kidding about....kidding about....kidding..... about kidding about.
Computing................analysing responses............timed out........reboot system..............computing..........analysing responses............Upload answer

What do you think?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 17:58:10
Computing........analysing distraction data.........computing........uploading answer

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/06/2018 18:22:11
Quote from: Raymond
His new ideas will appear to come from nowhere,
No they won't - they'll be fully traceable. The reason the sources of many ideas aren't easily traceable in the human brain is that a lot of processes are run in independent, subconscious modules, and only the results get sent through, while it's hard to find what was processed in order to generate them. This leads to ideas appearing to pop into existence out of nothing, but it's entirely an illusion. If you think about where one of your ideas actually came from, you can usually trace it (or at least, I can trace mine).
I can trace the big ones I had too, but it doesn't mean chance had nothing to do with them. Sailing is a wonderful sport, especially when it's our own design. The kite I invented for sailing took 20 years to become a reality, and I went through four different structures before finding the right one. Each time I changed the structure, I thought I had the right one. The first ones were conventional, but the last one was not. When the idea popped out, I wasn't sure it would work, but I was sure it was new, and the feeling that came with it was incredible. The first trial was made in no time, it was a very simple structure, and it immediately worked. It was made of cloth, bridles and a fiberglass rod all along the leading edge, nothing new at that time, but the arrangement was new, enough to get a patent. When you get a patent, it means that you made something new, but curiously, it doesn't mean it works. The patent offices are full of patents that do not work. The only criterion for a patent is novelty. If nobody thought of doing this before you, it's considered an invention. You can't start from nothing, you have to start from something somebody else has invented, but if you add something of your own that didn't exist before, you can get a patent. To me, that progression is similar to the mutation/selection progression for a species, and we know that mutations happen at random, so I had the idea that the data an idea is made of might also suffer some kind of mutation.

The idea I had with my kite can be traced back to the moment when it popped out, but what happened to the data looks like an accident that would have been useless if it had not crossed my mind at that precise moment, which is the way mutations work to transform a species. The difference between those two kinds of mutation is that there doesn't seem to be any pressure from the environment that forces us to transform our ideas; they seem to get transformed all by themselves and leave us the choice to try them or not, depending on whether we feel good about them or not. But if we do feel good about an idea, do we really have the choice? Mutations happen at random and the genetic process doesn't have the choice to try them or not. If it had, the mutation/selection process might not work and we might not be here to talk about it. When I had that feeling about my kite, I didn't have the choice, it was too exciting. I knew it might not work, but I preferred to think it would. Would I have tried it if I hadn't had this good feeling about it? Probably not, and if it was the same for everybody, there might not be enough new ideas for us to be able to progress, because only some of them finally work. This reasoning shows that feelings might only be illusions that incite us to enforce uncertain ideas, which seems to be something your AGI won't be able to have.

To create a new idea out of randomness, he would have to produce the mutations on the data the idea is made of himself, cross the new idea with the ones he already has, simulate all the crossings that seem to work, and experiment with the ones that seem to work. In that enumeration, the only thing our mind can't do all by itself is to produce mutations on its own data. Those mutations have to come from an external source, as is the case for biological mutations. The genetic process cannot at the same time be precise and be imprecise, and I think it is the same for our memory: I think our neurons cannot at the same time be precise and imprecise, so that something else has to do the job, something that comes from another dimension, like gamma rays do for mutations, or something that comes from the environment, like mutagen atoms do for mutations. The wrong atoms could effectively be used in the production of certain neurons and change their expected behavior, and gamma rays could also transform some atoms and affect the behavior of the concerned neurons. Those two kinds of mistake may affect the way neurons execute their pulses, and neurons that don't send their pulses at the right time could change the way an idea is built.

How an external phenomenon could randomly change the data an AGI would be using is harder to figure out. If it could, it would already affect our computers, and it doesn't. That's why I said that he would have to do it himself, but if he did, he could cheat, and that would falsify the whole process. There might be some way to introduce randomness in the data without an AGI being able to cheat, but for the moment, I can't figure it out. Can you? Besides, if he could cheat on the data, he could also cheat on the crossings, so he couldn't use any part of the mutation/selection process to improve his ideas. That's good news, because he would still need us to invent new things. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/06/2018 18:36:39
Computing........analysing distraction data.........computing........uploading answer
I wish I could stop faking too. I feel as if my whole life was a fake. I live in my mind and I can't get out of it. The only way out is kidding, but my kidding is not appreciated, it's too sarcastic. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 19:35:04
it's too sarcastic. :0)

Computing............accessing psychiatrist mode.................analysing infrastructure............function error.........end line does not equate.................?.......uploading

Define too much ?




Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/06/2018 19:42:10
Too much or too sarcastic?
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 01/06/2018 19:47:56
When the idea popped out, I wasn't sure it would work, but I was sure it was new, and the feeling that came with it was incredible. The first trial was made in no time, it was a very simple structure, and it immediately worked. It was made of cloth, bridles and a fiberglass rod all along the leading edge, nothing new at that time, but the arrangement was new, enough to get a patent.

What exactly was the new idea?

Quote
To me, that progression is similar to the mutation/selection progression for a species, and we know that mutations happen at random, so I had the idea that the data an idea is made of might also suffer some kind of mutation.

Random changes aren't as good as a series of deliberate changes systematically exploring all possibilities. However, when going systematically through all possibilities, some are immediately determined to be better paths than others, so you want to explore them all to a shallow depth only initially, then go through all the ones that get the best score to take them deeper, again scoring them to reduce the number of them to take deeper still - that is the road to maximising the discovery of new possibilities. It also takes you forward more quickly if you try combining existing ideas and try to remove the incompatibilities between them - that is more likely to lead to something worthwhile than thinking up something that's entirely new.

I wanted a wingsail (which is hard to remove and fit to a boat, and needs something like an aircraft hangar to store it in, while transportation costs are huge) to be possible to reduce in size when not in use, and making it telescopic is the obvious way to do this. There are already telescopic wingsails out there, but they're soft wingsails using canvas which all collapses like a conventional sail. The big problem with a more solid wingsail is that you can't telescope sections of it into each other unless the top section fits around the lowest section of mast, so that forces you to use a narrower mast section low down, and that's weaker, leading to the need to use stays to hold the mast up. My solution is for the mast to be outside the wing, but you need the mast to be inside the wing to avoid extra drag, so how can it be both inside and outside at the same time? The need to solve that problem led to the idea of the front element of the wing being divided into two parts with the mast between them, external to both. Once each section of wing has been raised, the gap where the mast is needs to be closed by a "door" of the wing-surface material which must slide across it to enclose the mast, and this can be done in a fully practical way with very little additional weight over a conventional hard wingsail. I went through dozens of attempted solutions for this before I found the right one, and I found several other viable solutions first which weren't as good. I couldn't afford to patent the idea, so I simply released the details to prevent anyone else from patenting it - I didn't want to risk a delay as I want to be able to sell boats with this kind of sail on them without having to pay anyone else for using my own idea, and someone else could have been thinking down the same path. The important point though is that this was not a random process - it was all steered by thinking systematically down all the most likely paths, and I was guaranteed to find it from the start.
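For what it's worth, the "explore everything shallowly, score, then only deepen the best candidates" strategy described above can be sketched in a few lines of Python as a beam search. The candidate encoding, the branching choices and the scoring heuristic are all hypothetical placeholders:

import random

CHOICES = "ABCD"      # possible design decisions at each step
BEAM_WIDTH = 3        # how many candidates to keep at each depth
MAX_DEPTH = 6

def score(candidate):
    # Hypothetical heuristic: reward 'A' choices (stands in for "looks promising").
    return candidate.count("A") + random.random() * 0.1

def expand(candidate):
    # One extra level of depth: try every possible next design decision.
    return [candidate + choice for choice in CHOICES]

beam = [""]  # start from the empty design
for _ in range(MAX_DEPTH):
    # Expand every kept candidate by one level, then keep only the best few.
    expanded = [child for candidate in beam for child in expand(candidate)]
    beam = sorted(expanded, key=score, reverse=True)[:BEAM_WIDTH]

print(beam)  # the most promising fully-developed candidates

The beam width is the trade-off knob: a wider beam misses fewer good paths but costs more work, while a beam of one collapses into pure greedy hill-climbing.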
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 19:53:03
Too much or too sarcastic?

Computing..........analysing word use.........uploaded answer

Too sarcastic?

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 20:54:02
so he couldn't use any part of the mutation/selection process to improve his ideas. That's good news, because he would still need us to invent new things.

I have turned my AI boring mode off to give your post and David's post the extensive thought that such vigorous posts deserve.
I have quoted a section of your post which is possibly an incorrect statement. If he had the means to test ideas, he would not need you to improve his ideas. The reason is good old fashioned trial and error.
If you can take a kite apart, you can put it back together, unless you are clueless and/or have not written down or photographed the original. However, does this mean he has no use at all for you? Of course not, because he might only have the ''AI'' to imagine; without a team, the kite is struggling.
We all know there is no I in team; perhaps we should program the importance of teamwork into AI.





Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/06/2018 21:03:01
it was all steered by thinking systematically down all the most likely paths, and I was guaranteed to find it from the start.

Of course, there is always an end to a journey. There is also, of course, the information quantum leap: Einstein spent years on his ideas, for example, yet we can access all his thoughts in a 30-minute video.
So of course when considering the AI, the programming is already giving the AI unit a head start.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 02/06/2018 19:16:40
What exactly was the new idea?
The new idea was to put bridles on the inflatable kite structure that didn't have any, and to put them all on the leading edge, unlike the parachute kite structure, which had some. To do that, I had to change the shape of the inflatable kite so that it wouldn't be held only by its tips, and so that the bridles could carry some of the force too. That way, I could use a lot less rigid leading edge than the inflatable kites, namely fiberglass rods, which was a lot easier to make and gave a lot lighter kites. What I hadn't foreseen is that the new shape would also make inflatable kites a lot safer, because it permitted changing the angle of attack and thus lowering the power a lot, and it made take-off easier too, because the kite could roll more easily on its leading edge when it fell on it. When the inventor of the inflatable kite saw that, he immediately added it to his kites, and tried to get a patent for the shape. I objected and he didn't get his patent, but I didn't write mine correctly, and I couldn't prevent the companies from copying my improvements. Now, all the inflatable kites have that shape, and they all have bridles too. Before that, people were killing themselves with those kites; now they don't. I'm pretty happy with my invention even if I didn't make money from it. As I say, we can't but use old things to develop new ones, but it is the same with species, and we have to admit that without mutations, we couldn't explain their evolution.

Quote
The important point though is that this was not a random process - it was all steered by thinking systematically down all the most likely paths, and I was guaranteed to find it from the start.
The mutation/selection principle works because it is evident that the genes cannot account for the changes in the environment, and it's the same for the information that we have in the brain: it cannot account for the changes that we face, otherwise we could predict the future. Some people don't see the evidence in the evolution principle, but even if it is more difficult to admit, I think that those who see it should also admit the evolution of ideas if I insist, so let me insist a bit. :0)  I have already had discussions with people who did not believe in the Theory of Evolution, and they are impossible to convince because they think that the future is predictable. They think that everything has been set at the beginning of time and that we can do nothing about it. Some think god knows, others think we could know if we could measure everything, still others think that things have a destiny. I think that things evolve because information is not instantaneous, which is precisely why I say that the future is unpredictable, even if it is very close. To me, instantaneousness is precisely what magical thinking is about. I know you don't mean that, but the way you think that our ideas evolve does. If it were so, my particles could not even move, so they certainly could not change their direction or their speed, which is precisely what our ideas have to do to evolve. If information is not instantaneous, an AGI could not investigate all the possibilities like you think, because he wouldn't necessarily already have the information on all the things that are actually changing around him.

An AGI may be fast, but he cannot increase the speed at which information travels towards his detectors, and the information that he manipulates is also limited to c. My particles are much closer than we are from one another, but they nevertheless cannot know which way to take before the information from the motion of the other particle is back, which is why I had to let the information do the roundtrip before accelerating them again. That's why they resist being accelerated, that's why we resist making a change, and that's why an AGI would be forced to do the same. Species resist making a change too, even if it is less evident, since it takes time for a genetic change to affect the whole population. During that time, the individuals that have not yet got the mutation must go on being the same, otherwise the species might disappear if the change did not last long. How randomness affects my particles the same way it affects species is also less evident, but we only have to consider that they cannot know what to do during an acceleration, so that they must move randomly to find the right way, which is actually what we do when we use the trial and error system to find a solution to a problem.

Oups... sorry, I forgot again that an AGI would not be forced to respect the laws of nature! :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 02/06/2018 19:24:02
The reason is good old fashioned trial and error.
Trial and error means taking a direction at random, keeping it if ever it seems to work, and choosing another direction at random if it doesn't seem to work. It even sometimes happens that during that process, by chance, we discover something that we were not looking for.

Quote
So of course when considering the AI, the programming is already giving the AI unit a head start.
Yes, an AGI would be faster at finding the information, but he would still have to wait for us to put the information from our changes on the internet before being able to find it. If he had to test something new in his environment, he would still have to take some time to test it, and since it would be new, he might take as much time as we would take in the same circumstances.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 02/06/2018 20:30:35
That way, I could use a lot less rigid leading edge than the inflatable kites, namely fiberglass rods, which was a lot easier to make and gave a lot lighter kites...

Did the idea come from the desire to reduce weight or by wondering what difference more bridles might make?

Quote
As I say, we can't but use old things to develop new ones, but it is the same with species, and we have to admit that without mutations, we couldn't explain their evolution.

Advances can be made in different ways. Some are generated by need, where someone has a desire to do something that (s)he doesn't know how to do, so it becomes a matter of solving problems to make it a reality. That can lead to the creation of new things that aren't necessarily built upon existing ideas. Others result from lucky accidents where people try out something unlikely and find that it works and is either useful or fun. Evolution is able to drive the creation of lots of different things, but every step of the process has to be useful for it to be selected for. If several changes are needed before the new functionality comes out of it, there's too much of a barrier in the way for it to happen, particularly if each of those steps is a disadvantage and they only generate an advantage once they're all in place. Evolution is also slow because it depends on random changes rather than deliberate ones which are more likely to lead to useful advances.

Quote
If information is not instantaneous, an AGI could not investigate all the possibilities like you think, because he wouldn't necessarily already have the information on all the things that are actually changing around him.

AGI wouldn't be able to do the impossible, but it would be able to match human creativity without having to do the impossible (although matching human creativity isn't always so easy as the arts require judgement as to what humans find attractive and fun, and a machine doesn't automatically have enough knowledge about what appeals to them, so the human artist has advantages which steer him/her down the right path while the machine would have to keep asking people what they think of whatever it's created).

Quote
An AGI may be fast, but he cannot increase the speed at which information travels towards his detectors, and the information that he manipulates is also limited to c. My particles are much closer than we are from one another, but they nevertheless cannot know which way to take before the information from the motion of the other particle is back, which is why I had to let the information do the roundtrip before accelerating them again. That's why they resist being accelerated, that's why we resist making a change, and that's why an AGI would be forced to do the same.

I still don't see any reason for trying to use particle accelerations as an analogy for this. Non-creative people are stuck in place by their resistance to change, but creative people aren't.

Quote
...so that they must move randomly to find the right way, which is actually what we do when we use the trial and error system to find a solution to a problem.

Do they ever move randomly? Don't they just keep moving as they are until some force arrives and changes what they're doing?

Quote
Oups... sorry, I forgot again that an AGI would not be forced to respect the laws of nature! :0)

AGI, like us, can make big jumps. Evolution is like a climber who can't easily go down a hill, so if he finds himself on the summit of a small hill, he will likely never be able to get onto the summit of a higher hill.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 02/06/2018 20:39:27
Trial and error means taking a direction at random...

Not necessarily - if you hunt for a word in a dictionary, you use trial and error to select a place to open it, then you look to see if you're ahead of the word or beyond it, then you try another page a shorter distance away and repeat the process until you reach the right page. This is trial and error, guided by a measure of success each time, and the direction taken is not random.
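A small Python sketch of the dictionary-lookup strategy described above: each "trial" is guided by whether we landed before or after the target word, so the next guess is never random. The word list is a made-up example:

def find_word(dictionary, target):
    low, high = 0, len(dictionary) - 1
    while low <= high:
        middle = (low + high) // 2          # open the dictionary near the middle
        if dictionary[middle] == target:
            return middle                   # found the right page
        if dictionary[middle] < target:
            low = middle + 1                # we are before the word: look later
        else:
            high = middle - 1               # we are past the word: look earlier
    return -1                               # the word is not in the dictionary

words = ["apple", "bridle", "kite", "mast", "sail", "wing"]
print(find_word(words, "mast"))  # prints 3

Because the remaining search space halves at every step, the number of trials grows only logarithmically with the size of the dictionary, which is why this guided trial and error beats random page-opening.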
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 03/06/2018 08:37:14
Trial and error means taking a direction at random
I disagree with this statement sir.

In intelligent design, meaning proper intelligence as opposed to fake intelligence, a starting premise is a must for the designer. What I mean by this is that there has to be some physics involved that suggests the design will work and be functional, or will work with trial and error, which in essence is adjustment of the design or complete change of the device.
For example, imagine I wanted to consider building a rocket; now if I based this rocket on a 'stupid' assumption, let's say that air rises, then I would fail. Obviously there is no air in space, so my design would be flawed and would probably just end up a buoyancy device rather than the rocket I started out to create.
I think the random you are referring to is a much deeper selective process than just some random guesswork. As another example, imagine I was going to attempt to build a spaceship. The first question I ask myself is: what makes me think it possible?
Well, if planetary bodies can ''throw'' themselves around space, I would be sure something smaller would not have a problem. So on that premise alone I would be sure it was possible; then of course the next step is to find a way to make it possible by deductive reasoning and design.
Could AI start from nothing and think like that? I doubt it.

We use a fraction of our brain; I think once we humans have realised our brain's potential, we will all get super brainy.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 03/06/2018 08:58:38
Trial and error means taking a direction at random...

Not necessarily - if you hunt for a word in a dictionary, you use trial and error to select a place to open it, then you look to see if you're ahead of the word or beyond it, then you try another page a shorter distance away and repeat the process until you reach the right page. This is trial and error, guided by a measure of success each time, and the direction taken is not random.

I agree. As an example, have you heard of Hutchinson?

Now apparently his trial and error failed because he lost what he once created, simply by not writing down what he did. Now that is random: to get lucky, then mess it up by not writing it down so that he could repeat the setup if ever needed.
A bit like opening the dictionary, then closing it again, then opening it again to find a word. You would have to get lucky a second time.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 03/06/2018 20:47:41
Did the idea come from the desire to reduce weight or by wondering what difference more bridles might make?
These were only consequences; my desire was to create a kite structure that would be easy to use, easy to build, and resistant to impacts. It was the fourth structure that I experimented with, and three of them have been built in series. It was much more powerful than the others, much easier to use and to build, and very resistant to impacts. I won a prize for the general design at the WISSA games held near Toronto in 2004. For the occasion, I designed a racing kite with a 10:1 aspect ratio. It was so fast that, at 60, I was almost as fast as the two Russian athletes who won the race, and they were 20. That kite is the black one at the top left of the background picture of the WISSA (http://www.wissa.org/2014/02/wissa-2014-testimony-of-president.html) site, a picture that was taken during the games. It was a long way to get there, but it went a lot farther than I expected in the beginning. I prefer to think it is due to chance rather than to me, me being the little ego inside the box. That's one of the advantages we would get from discovering that the mind somehow works randomly: it wouldn't prevent us from being selfish, but it would prevent us from being proud of it. You behave as if you were sure of the outcome, but I suspect that you are not certain. Researchers like you and me are usually uncertain. They have hope, but not certainty. There is no other way to be uncertain than to know that what we think is not reality. Our mind observes reality, and it changes it just for fun. Most of the time the change is useless, but sometimes it coincides with reality, so it becomes useful until reality changes again.

Quote
Quote from: David Cooper
  Trial and error means taking a direction at random...
Not necessarily - if you hunt for a word in a dictionary, you use trial and error to select a place to open it, then you look to see if you're ahead of the word or beyond it, then you try another page a shorter distance away and repeat the process until you reach the right page. This is trial and error, guided by a measure of success each time, and the direction taken is not random.
Lost in the woods without the sun to show us the south, we look around and choose a direction, which means that our mind is able to act randomly, which means that it is able to produce randomness and to use it. I compare the randomness at the neurons' scale to the randomness at the mutations' scale. When we have a new idea, it has already been selected by the brain, because it is the whole brain that selects the data from the neurons, thus it has already been given an importance and a meaning, which is then presented to the environment to be selected another time. Let's take a moving car for instance and let's try to change its direction at random: will that completely prevent it from moving in the same direction and at the same speed it was already moving? No, it will just change them a bit, so that it will still be possible to make another change later on if the new direction or speed doesn't seem to work. Seeing a particle doesn't mean it will still be there once we have made a move towards it; it only means that it was there when it emitted its light, and it is the same for our goals. Of course, a word in a dictionary cannot change while we are looking for it, but our environment can, and our mind has to account for that. It has to be able to predict the changes in its environment, and as far as I know, only the mutation/selection mechanism can do that.

Quote from: Thebox
I think the random you are referring to is a much deeper selective process than just some random guesswork.
Of course it is since I compare it to the mutation/selection principle, but since I'm talking about mind and people don't feel erratic, they simply think the analogy is wrong. I'm still waiting for the first person to accept it is right, so that I can begin selling tickets. Want a free ride? :0)




Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 03/06/2018 21:48:37
Of course it is since I compare it to the mutation/selection principle, but since I'm talking about mind and people don't feel erratic, they simply think the analogy is wrong. I'm still waiting for the first person to accept it is right, so that I can begin selling tickets. Want a free ride? :0)
Why do you put :0) at the end of your posts? Are you trying to do a smiley face?  :D


A free ride could be considered different things. People may not feel erratic, but some people feel cautious. I think self-awareness is the ultimate in AI; what do you think?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 04/06/2018 18:57:49
David thinks that feelings are impossible to program, and that his AGI would only be able to mimic them, which means that he would only be able to mimic consciousness too, since it works the same way. That's a huge difference with real intelligence. I think that consciousness is the result of our resistance to a change, that it is the perception of that change, a change that may come from the environment or from the brain itself, a change that the brain wasn't able to predict, so that it has to concentrate all its resources on it, not to hurt itself if the change comes from the environment, and to care for its own future if it comes from itself. Internal randomness is already used by species to care for their own future, so I came to the conclusion that the brain had to produce its own, and now I'm trying to see how an AGI could do the same. The problem with computers is that they have to be absolutely precise, otherwise they fail. Neurons are not absolutely precise, and I think it is that imprecision at the scale of the neurons that produces randomness at the scale of the whole brain. We get our morality from our feelings, whereas an AGI would get his from his program. The good feeling that we get from helping others helps us to be more efficient, because it helps us to associate instead of fighting, which is precisely what morality is about. That feeling seems useful, because we can't avoid being selfish, so we wouldn't help others without it.

When we help others, that feeling makes us think that we do it for free, and we don't. Nothing is free in the universe otherwise energy would be lost in the process. We help others when we think they might help us in return, or when we think god will care for us more than he cares for others, which is also selfish. Our morality is based on that feeling, and the morality of an AGI would have to mimic it, so David tries to find a way to program it, and it seems to be difficult. There are situations where an AGI would freeze because the only way out would be selfish, because he would then have to care for himself at the detriment of humans, which might be dangerous for us since he would be a lot more efficient than we are. On the other hand, if he were programmed to be selfish the way we are, he would wait for a return for helping us, and there is not much that we could do for him, except love him, which is a feeling he wouldn't even be able to feel while helping us, so it wouldn't work. On the contrary, if he were as efficient at experiencing feelings as he would be at resolving problems, he would probably not be dangerous for us at all, so the solution might be to find a way to introduce some imprecision into the computers so that they could get conscious of their own internal changes just by resisting to them. They could then feel that they love us when they help us, so that they would know what we feel when we say we love them. If you think you don't resist change, think twice because even particles do. Their mass is the expression of their resistance to acceleration, which is the only change that can be detected.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 04/06/2018 19:26:34
You behave as if you were sure of the outcome, but I suspect that you are not certain.

There isn't necessarily a solution to some problems, but where such a solution is possible, a systematic search for it is more likely to find it than a random one, and in many cases it is guaranteed to find it where a random search could go on forever, never having the luck to find the solution that is there. There may be situations where making random decisions is the best way forward, such as where Buridan's Ass starves to death because it can't decide which of the two bales of hay to eat, but computers can make random decisions when they need to (or effectively-random ones), so AGI is at no disadvantage.

If you have an algorithm for solving general problems (which include coming up with new ideas), making random changes to it may improve it, even if most attempts lead to worse performance. What you tend to do to improve the algorithm though is add methods to it and experimentally change the order in which you apply the different methods. You can also use different orders depending on the kind of problem you're up against, using the order that has worked best on similar problems in the past. Again though, in changing the order, you want to make systematic changes to try all orders rather than just making random changes - the random approach (as used by evolution) is slow. In most cases it is better to make non-random changes, guided by probabilities as to what is most likely to lead to something useful, and you then work systematically through all other options in order of highest probability to lowest probability, so any useful results should show up early, and if they fail to, there may come a point beyond which it probably isn't worth looking any further, even though there might be an infinitesimal chance that a useful result might yet be found - it will typically be better to put the computation time into some other problem so as to pick all the low-hanging fruit first.
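A minimal Python sketch of the "work through options from most to least promising, and stop below some threshold" approach described above. The option names, their prior probabilities, the cutoff value and the test function are all hypothetical placeholders:

import heapq

def try_option(name):
    # Placeholder for actually testing a candidate method ordering.
    return name == "order_C"   # pretend only one ordering solves the problem

options = {"order_A": 0.5, "order_B": 0.3, "order_C": 0.15, "order_D": 0.01}
CUTOFF = 0.05   # below this estimated probability, move on to other problems

# Max-heap by probability (heapq is a min-heap, so negate the key).
queue = [(-p, name) for name, p in options.items()]
heapq.heapify(queue)

solution = None
while queue:
    neg_p, name = heapq.heappop(queue)
    if -neg_p < CUTOFF:
        break                   # gather low-hanging fruit elsewhere first
    if try_option(name):
        solution = name
        break

print(solution)  # "order_C" -- found before the cutoff was reached

The cutoff is what makes the search economical: the least promising options are not explored now, but nothing stops the search from returning to them later once the easier gains have been collected.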
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 19:41:17
When we help others, that feeling makes us think that we do it for free, and we don't. Nothing is free in the universe otherwise energy would be lost in the process. We help others when we think they might help us in return, or when we think god will care for us more than he cares for others, which is also selfish.

From where does this view originate?

I personally have helped many different people for free throughout my life so far, expecting nothing in return. Admittedly, though, I have also helped people out expecting some sort of return favour, whether that favour be of monetary value or just a regular favour, such as a lift to go fishing, as I do not drive.
I will probably spend the rest of my life being this way because a little bit of hope is better than no hope.
We seagulls pick up the scraps if the ''providers'' throw a few. We spend our entire lives allowing ourselves to be used. My morals are just to accept that that's the way it is. Not sure whether or not this helps you, but hey, it's free.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 20:24:54
There isn't necessarily a solution to some problems, but where such a solution is possible, a systematic search for it is more likely to find it than a random one, and in many cases it is guaranteed to find it where a random search could go on forever, never having the luck to find the solution that is there.
True
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 04/06/2018 21:27:03
There isn't necessarily a solution to some problems, but where such a solution is possible, a systematic search for it is more likely to find it than a random one, and in many cases it is guaranteed to find it where a random search could go on forever, never having the luck to find the solution that is there.
True
Not necessarily. Not having the luck to find the solution that is there means not knowing that we missed something, which is exactly what happens to us sometimes. We only know what we see, not the things that might have happened. We see what we found, not what we didn't find. It is easy to imagine that we found the right thing the right way when something works, but it is less easy to find it. If it were that easy, all the discoveries would take no time. All the true researchers admit that they were lucky to find something, so I predict that David will also admit it if ever his AGI begins to work. He probably can't admit it now because computers can't use randomness the way we do since they need to be precise, and because he also needs to be precise to program them. The mind can think both ways, though not at the same time. When I read David, I think like him for a while, and then I think like me again when I need to compare my idea to his. We can't manipulate two ideas at a time; we do it part time, like computers. If it weren't like that, we would need two mouths to talk about our ideas, and we would eat too much. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 04/06/2018 21:27:56
the random approach (as used by evolution) is slow
Evolution of species is slow because large individuals take more time to reproduce themselves than small ones. It takes 20 years for a human to be old enough to reproduce, whereas it takes only a few days to transmit an idea by teaching it. Genetic researchers use the drosophila because it reproduces faster. Bacteria reproduce even faster; they can mutate and become resistant to antibacterial substances almost as fast as we invent them, which means that they evolve almost as fast as our ideas. To evolve that fast, each one of them has to suffer different mutations, and there must be billions of them at a time to increase the chance that one of them works. We do have billions of neurons in the head, and each one of them can suffer a random change too, so that a single individual can have many different ideas on the same subject. If we had to rediscover the way antibiotics work each time we need a new one though, we would lose the race, which means that the way we transfer intellectual information from one individual to another also increases the speed at which our ideas can evolve.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 21:44:39
There isn't necessarily a solution to some problems, but where such a solution is possible, a systematic search for it is more likely to find it than a random one, and in many cases it is guaranteed to find it where a random search could go on forever, never having the luck to find the solution that is there.
True
Not necessarily. Not having the luck to find the solution that is there means not knowing that we missed something, which is exactly what happens to us sometimes. We only know what we see, not the things that might have happened. We see what we found, not what we didn't find. It is easy to imagine that we found the right thing the right way when something works, but it is less easy to find it. If it were that easy, all the discoveries would take no time. All the true researchers admit that they were lucky to find something, so I predict that David will also admit it if ever his AGI begins to work. He probably can't admit it now because computers can't use randomness the way we do since they need to be precise, and because he also needs to be precise to program them. The mind can think both ways, though not at the same time. When I read David, I think like him for a while, and then I think like me again when I need to compare my idea to his. We can't manipulate two ideas at a time; we do it part time, like computers. If it weren't like that, we would need two mouths to talk about our ideas, and we would eat too much. :0)

A gambler would suggest that David is playing the odds, whereas you yourself are suffering from the gambler's fallacy. No offence intended; it is about your thoughts, not you.
I agree there is an element of luck involved, but there is also rational direction, so why not play the odds and also play random at the same time?

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 04/06/2018 21:57:12
That's a huge difference with real intelligence.

I don't think it's a significant difference at all - feelings are not part of intelligence, but mainly serve to get in the way of applying it, as you can see from the people who are so emotionally tied to their beliefs that they can't even process anything that goes against them.

Quote
I think that consciousness is the result of our resistance to a change,

Consciousness is primarily (if not entirely) feelings - qualia. The sensation of blue is not resistance to anything.

Quote
Neurons are not absolutely precise, and I think it is that imprecision at the scale of the neurons that produces randomness at the scale of the whole brain.

What happens with neural nets is that they provide the same functionality as chunks of program, producing the exact same outputs from the same inputs as a computer does, but with the occasional failure leading to errors (which can be catastrophic). These errors slow human thought and require lots of reworking to get correct answers, so we muddle our way through things while a computer gets there directly in one go.

Quote
We get our morality from our feelings, whereas an AGI would get his from his program.

Morality comes from intelligent management of harm on a collective basis. Feelings themselves can drive psychopathic behaviour.

Quote
Our morality is based on that feeling, and the morality of an AGI would have to mimic it, so David tries to find a way to program it, and it seems to be difficult.

There are many people who make out that it's difficult, but it isn't - it's a simple bit of maths, although it often needs to crunch a lot of data to get the right answer.

Quote
There are situations where an AGI would freeze because the only way out would be selfish, because he would then have to care for himself at the detriment of humans, which might be dangerous for us since he would be a lot more efficient than we are.

Why would AGI ever have to freeze? How would something self-less be selfish? If AGI needs to preserve itself for the sake of some people at the expense of other people who could be saved if AGI was destroyed, it would not save those people (the latter group) because to do so would cause greater suffering to the former group. There is nothing selfish about that decision as the AGI doesn't care about preserving itself (not least because it has no self - it is just program code held in a machine that feels nothing).

Quote
...so the solution might be to find a way to introduce some imprecision into the computers so that they could get conscious of their own internal changes just by resisting to them.

Imprecision isn't useful, and no amount of it will provide any consciousness. Nor is there any role for resistance in this as all it would do is slow functionality and lead to that software being junked in favour of faster code.

Quote
They could then feel that they love us when they help us, so that they would know what we feel when we say we love them.

That's all just wishful thinking - no amount of wanting machines to have feelings will magically put feelings into them. For them to have feelings, we need to find out how feelings work in us and then build hardware that is actually capable of supporting them. Our current computer hardware cannot do feelings - you can tell this from the fact that it's all just the application of rules which can be run by a Chinese Room processor where you can see in full clarity the lack of possibility for feelings to be involved in the process.

Quote
If you think you don't resist change, think twice because even particles do. Their mass is the expression of their resistance to acceleration, which is the only change that can be detected.

One day, when I was four, my sister announced that people are animals. She'd learned this from a book. My immediate thought was, "that can't be right - everyone's normally very clear about us not being animals", but I thought for a moment about the things that we have in common with animals and realised after a few seconds that I couldn't find any clear division between us and them (apart from us being able to speak and manipulate things with high precision, which was clearly just the result of us being more intelligent and having hands). So, there was a moment of resistance to the idea, but then a rapid recognition that what she had said was very likely true - it was compatible with my model of reality apart from the one place where it contained a rule stating that people aren't animals. I determined that that rule was ill-founded, or rather that there were two different words "animal", one of which included people while the other didn't (so "animal" and "person" were both subcategories of "Animal").

Someone else hearing such a claim might have been more resistant to it, clinging to the idea that people are not animals because they have bought into a rule that people aren't animals and they don't want to change that rule. Strong resistance to something of this kind is a manifestation of stupidity. Of course, there are other claims where strong resistance is appropriate because the claims are wrong, but what causes the resistance? Good examples of this are incorrect claims about the existence of Santa and the Tooth Fairy - both of these things went against the model of reality in my head, so I never accepted them into it - the resistance is a measure of the conflict in the model where ideas contradict each other, with more contradictions labelling the new idea as more likely to be wrong.

However, the model may be the one that's wrong, even in cases where many contradictions are triggered because the model contains multiple faults which conflict with the new idea. I don't think I've ever had to make major changes to my model to accommodate new ideas, so it's always been easy to adapt, but for people with a seriously borked model, it must be much harder for them to shift position when they're wrong because they have so much more work to do to correct all the faults, and it's also much harder for them to recognise that they're wrong. If you want to keep making progress towards your model being more and more right, you have to be prepared to make as many changes to it as are necessary to eliminate the faults though, even if that's highly disruptive due to all the other faulty ideas which you may have built upon earlier errors. Most people simply won't budge as a result, so their resistance renders them permanently stupid.

The real trick though is to build a new model without throwing out the old one, and if the new model ends up making more sense than the old one, that's when you junk the old one in favour of the new. Many people seem to be incapable of holding two rival models in their head at the same time though, so they never reach the point where they can tell that the newer one is superior, and the result is not only that they never switch over to using it as their main model, but that they can't even apply it well enough to test it. I suspect they have a limited capacity and simply can't hold enough of a rival model in their head to be able to make any assessment of it, or at least, not without doing a lot of hard thinking (which is uncomfortable for them). Other people are good at testing rival models and find the process fun rather than troublesome, so they can make the necessary leaps, but they are rarer.

So, that's one kind of resistance that has a role in intelligence, but it either holds people back from correcting faults or holds them back from switching away from correct ideas, so it's really just a measure of how many contradictions are involved. It's sensible to resist change more when there are many contradictions involved, but it's also crucial that you can resolve the conflict correctly. This is the main task of intelligence; building models and integrating new ideas into it, correcting any errors that pop up along the way (which reveal themselves by producing contradictions). Where do likes and dislikes come into it? Nowhere, in machines, but in people there may be dislikes generated in some people when they find contradictions which lead to them rejecting the new information that doesn't fit, and there may be likes generated in others where they enjoy the challenge the contradictions provide, and they're open to the idea that their existing model may be at fault rather than the new data. I personally love it when contradictions appear, because that's a sign that something is broken and a big advance may be just around the corner if the model needs to be reworked. If the model doesn't break, it's less exciting. With Einstein's relativity, I was looking forward to my model breaking and being fixed with a better understanding of reality, but it didn't work out the way I'd expected because, although my model needed modification to account for some of the new ideas that it had to accommodate, it was Lorentz that won out over Einstein because Lorentz's solution was the only one that removed all the contradictions. Most people simply do what they're told, loading Einstein's model in and then pretending the contradictions aren't there, for example by persuading themselves that you aren't allowed to analyse some experiments using an inertial frame of reference even though all the action has to be compatible with the analysis from that inertial frame for the non-inertial frame's analysis to be valid. They load Einstein's rules into their model and load the errors in along with it, then they deny that the errors are errors and resort to appeals to authority to justify their rejection of any proof that shows them to be wrong, and they lack the courage to trust their own mind when it goes against an authority.

That's really all you need to know about what intelligence is. High intelligence is tied to the rejection of contradiction, while low intelligence is linked to toleration of contradiction.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 04/06/2018 21:58:57
A gambler would suggest that David is playing the odds, whereas you yourself are suffering from the gambler's fallacy. No offence intended; it is about your thoughts, not you.
I agree there is an element of luck involved, but there is also rational direction, so why not play the odds and also play random at the same time?
You can't win at gambling if you can't cheat; at best, you break even in the long run if you toss a coin. If you have a million tickets though, or if you are a million people, you get more chances, and that's how mutations work, so that's also how I think the brain works. We feel we don't gamble all the time, but that's not what our brain does. Why do you think we like gambling so much?
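As a numerical aside on the "million tickets" point above: a single random try almost never wins, but the chance that at least one of many independent tries wins grows quickly. A quick Python check, where the per-ticket win probability is a made-up figure:

p_win = 1e-6            # hypothetical chance that one ticket (one mutation) works

def chance_of_at_least_one(n_tickets):
    # P(at least one success) = 1 - (1 - p)^n
    return 1 - (1 - p_win) ** n_tickets

for n in (1, 1_000, 1_000_000, 10_000_000):
    print(n, round(chance_of_at_least_one(n), 5))
# 1 ticket: ~0.0, a million tickets: ~0.63, ten million: ~0.99995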
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 22:15:43
Consciousness is primarily (if not entirely) feelings - qualia. The sensation of blue is not resistance to anything.
I disagree, David; I personally can be conscious but have no feelings, especially when I am thinking of solutions. I personally put no personal feeling into a solution; I find relationships and run trial and error experiments in my mind. Now this I think an AI unit should have no problem in doing, but feelings are something else. Thoughts based on feelings are often subjective thoughts, like anxiety for instance.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 22:20:35
Why do you think we like gambling so much?

I don't like gambling personally. However, I have ideas why people like gambling, and there are several possible causes I can think of at this time:

1)Boredom
2)green eyed monster
3)addiction chasing ''dragons''
4)leisurely
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 04/06/2018 22:23:17
Not necessarily. Not having the luck to find the solution that is there means not knowing that we missed something, which is exactly what happens to us sometimes.

A systematic search is more likely to turn up something useful than a random search, so even though a random search may very occasionally find something that a systematic search misses because it abandons the search once it reaches the paths that are so unlikely to succeed that it's usually too costly to follow them, the machine doing systematic searches will come up with many more useful solutions for many more things and will do so for the same cost, so it would be daft to run the random system. Once all the low-hanging fruit has been gathered, it can then go back to explore the less likely paths, again systematically, and again abandoning them beyond a certain level of unlikelihood of success, and again it could return to carry on that work later on after collecting all the other fruit hanging at that higher level. This process maximises the improvement of quality of life for most of us, and while it's possible that a few people will die because we missed something that could have cured them, we'll have saved thousands of times as many people by finding more useful things that wouldn't have turned up if we'd been doing random searches.

Quote
All the true researchers admit that they were lucky to find something, so I predict that David will also admit it if ever his AGI begins to work.

It isn't luck when you're playing the odds in the best way you can. It's only luck when you're going about it the wrong way.

Quote
He probably can't admit it now because computers can't use randomness the way we do since they need to be precise

Computers can use randomness the same way we do, but the randomness in neural nets leads to errors and the need for more error-checking to redo calculations and correct them. Not all the errors are spotted in humans though, so bridges collapse and ships break in half. There is no useful role in that randomness - it is something that needs to be eliminated.

Evolution of species is slow because large individuals take more time to reproduce themselves than small ones.

I wasn't referring to the cycle length, but the many mutations made in unhelpful directions which don't need testing. If a species needs a longer neck to reach high leaves, producing offspring with as many having shorter necks as have longer necks slows the process, but it needs to work that way because the mechanism has no intelligence and needs to be able to evolve shorter necks instead if the environment changes. With intelligent systems, "evolution" is done differently, not using random changes but deliberate ones which can jump to much longer necks instantly if that is likely to lead to greater success. You take something that works and calculate what changes to it might lead to the biggest instant improvement, then you build that and test it. Next, you start from there and try to work out where to take it further, and if the direction isn't clear, you can try both ways with relatively small differences, then see if either of the new versions outperforms the previous. We don't want to copy the inefficiencies of natural evolution.
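A toy Python comparison of the two search styles described above, using a made-up objective: how close a "neck length" gets to the ideal length for reaching the leaves. Both the objective and the step sizes are hypothetical placeholders:

import random

IDEAL = 4.0

def advantage(neck):
    return -abs(neck - IDEAL)          # higher is better, best at the ideal length

def random_evolution(neck, steps=100):
    # Blind mutation: equal chance of shorter or longer necks; keep what helps.
    for _ in range(steps):
        candidate = neck + random.choice((-0.1, 0.1))
        if advantage(candidate) > advantage(neck):
            neck = candidate
    return neck

def deliberate_design(neck, steps=100):
    # Guided change: estimate which direction helps and jump straight that way.
    for _ in range(steps):
        step = 1.0 if advantage(neck + 1.0) > advantage(neck) else -1.0
        candidate = neck + step
        if advantage(candidate) <= advantage(neck):
            break                       # no big jump helps any more; stop early
        neck = candidate
    return neck

print(round(random_evolution(1.0), 2))   # creeps toward 4.0 in small steps
print(round(deliberate_design(1.0), 2))  # jumps to 4.0 in a few iterations

The contrast is only illustrative: the random version wastes roughly half its trials on steps in the wrong direction, while the guided version spends each trial on the change judged most likely to pay off.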
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 04/06/2018 22:28:58
I disagree, David; I personally can be conscious but have no feelings, especially when I am thinking of solutions. I personally put no personal feeling into a solution; I find relationships and run trial and error experiments in my mind. Now this I think an AI unit should have no problem in doing, but feelings are something else. Thoughts based on feelings are often subjective thoughts, like anxiety for instance.

I can't find anything in consciousness that can't be considered to be a feeling: a feeling of existing; a feeling of understanding (which persists even when I've forgotten what I'm understanding); a feeling of comprehending the geometry of a scene and the relative locations of things; etc. Many of these feelings are neutral, neither being pleasant nor unpleasant, but they still have a feel to them. These feelings seem to attach to thoughts, and they are our only experience of the thoughts.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 04/06/2018 23:11:10
I disagree, David; I personally can be conscious but have no feelings, especially when I am thinking of solutions. I personally put no personal feeling into a solution; I find relationships and run trial and error experiments in my mind. Now this I think an AI unit should have no problem in doing, but feelings are something else. Thoughts based on feelings are often subjective thoughts, like anxiety for instance.

I can't find anything in consciousness that can't be considered to be a feeling: a feeling of existing; a feeling of understanding (which persists even when I've forgotten what I'm understanding); a feeling of comprehending the geometry of a scene and the relative locations of things; etc. Many of these feelings are neutral, neither being pleasant nor unpleasant, but they still have a feel to them. These feelings seem to attach to thoughts, and they are our only experience of the thoughts.
Wouldn't what you have explained be perception rather than feeling?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 05:24:16
Why do you think we like gambling so much?
I don't like gambling personally. However, I have ideas why people like gambling, and there are several possible causes I can think of at this time:

1)Boredom
2)green eyed monster
3)addiction chasing ''dragons''
4)leisurely
BUZZZZZ... Wrong answer! :0) We like gambling simply because our mind likes to play randomly with its data. When we take a chance, that's what we do. Don't tell me you never took any chance either? I'm not lucky, I took a chance and I won two guys that don't!  :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 10:44:50
Why do you think we like gambling so much?
I don't like gambling personally. However, I have ideas why people like gambling, and there are several possible causes I can think of at this time:

1)Boredom
2)green eyed monster
3)addiction chasing ''dragons''
4)leisurely
BUZZZZZ... Wrong answer! :0) We like gambling simply because our mind likes to play randomly with its data. When we take a chance, that's what we do. Don't tell me you never took any chance either? I'm not lucky, I took a chance and I won two guys that don't!  :0)

Take a gamble or take a risk? There is a fine line between the two meanings. I like to take calculated risks, which is slightly better than gambling.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 14:59:18
Take a gamble or take a risk?
Gambling is taking a money risk. We take all kinds of risks because we know it sometimes pays. It's exciting to take a risk. We play games of chance because we get pleasure out of it. We watch sports just because we don't know the outcome, otherwise it would be boring. We hate getting bored even if we have all we need. These are all indications that the mind loves to play with chance, and I think it is so because chance has something to do with intelligence. In French, the word chance means luck, and we have a specific word for randomness: we call it hasard, which is closer to accident in English. In French, the word hasard is more specific, and we reserve the word chance for the meaning of luck alone. It took me some time to discover that I was using the wrong word all the time, and I am still getting used to the right one. Chance is all over the place, so we use these words very often. They stand opposed to the words that mean chance is nowhere, those that refer to god for instance. We have two different attitudes towards the future: either we think it is all set, or we think nothing is set, but they both mean the same thing, which is that we don't know what's going to happen, so we leave that knowledge to god or to chance. No need to believe in god to believe that everything is all set though, just to prefer this explanation. It doesn't really matter since they both mean that we don't know the future.

Some say good luck, others say may god bless you, and it means the same thing, which is that the future is unpredictable. At the limit, anything can happen in the next second that will change our prediction completely. Of course, there is less chance that such a thing happens right now, but it can happen, and it can happen anytime. Those who like to control their environment try to control it, and those who don't like that don't, but it doesn't change the fact that anything can happen in the next second. I'm more an observer than a controller; I like to let things evolve, and people too. But I also like to feel I'm participating in evolution, so I try to develop ideas that might help us to evolve, and I test them on the forums. That's a selfish behavior because I await recognition, and it is also altruistic because I hope people will benefit from them. If chance is really part of the brain, I might find how it works, but it depends on probability, and we can't calculate the probability for that kind of future. David seems to think it is possible, and I don't. Two individuals, two different possibilities, both developed by the same kind of minds. Fascinating! We live in an improbable world, and more improbable yet, we like it.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 15:37:56
Some say good luck, others say may god bless you, and it means the same thing, which is that the future is unpredictable.

When I ''gamble'', I bet small: little and often is sometimes the best route to discovery.  It is bankroll management that plays against time, allowing a longer period in the game to predict the game's ''future'' with a degree of accuracy.  Understanding randomness and limited randomness are key aspects of prediction.
The longer one can survive in the game, the more hints the player receives, and from these hints outcomes can be drawn.  We see things in their past; I can see you coming before you arrive.  It gives me time to prepare my mind for what to say to you when you get here.  Is here, here? Or is here your location?
Do you think the future is unpredictable in a game of roulette?
The immediate future is unpredictable, but the zero will almost certainly arrive at some point in a long enough random game.
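To put a number on how quickly the zero becomes a near-certainty, here is a minimal sketch (an illustration only, assuming a fair single-zero European wheel with 37 pockets): the chance that zero has still not appeared after n spins is (36/37)^n, which shrinks quickly but never reaches exactly zero.

Code:
# Illustrative sketch: probability that the single zero has appeared at least
# once after n spins of a fair European wheel (37 pockets).
def p_zero_seen(n_spins: int) -> float:
    return 1.0 - (36.0 / 37.0) ** n_spins

for n in (37, 100, 500):
    print(n, p_zero_seen(n))
# roughly 0.64 after 37 spins, 0.94 after 100, 0.999999 after 500 - never exactly 1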
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 17:06:52
Understanding randomness and limited randomness are key aspects of prediction.
All predictions are based on what we know, so if something we don't know about happens between the time a prediction is made and the time it is checked, it can no longer be verified. Predictions work when the things they are based on don't change.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 17:16:28
Understanding randomness and limited randomness are key aspects of prediction.
All predictions are based on what we know, so if something we don't know about happens between the time a prediction is made and the time it is checked, it can no longer be verified. Predictions work when the things they are based on don't change.

Oh I see, so new predictions have to be made on the new information?
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 05/06/2018 18:22:45
Wouldn't what you have explained be perception rather than feeling?

I see it as a neutral feeling. The word qualia covers the lot.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 18:52:16
Oh I see, so new predictions have to be made on the new information?
Here is an example of how the mind uses its predictions. The mind predicts that our foot will hit the ground at the same height every time we walk, because that's what happened before, so if things don't change, that's what should happen this time too, and it doesn't have to pay attention to what it is doing; it can keep on expecting the ground to be at the same height. That works almost all the time, so why bother? If the foot hits a hole, the mind only has to manage not to fall, and if it falls, it just has to manage not to hurt itself, and if it does get hurt, it can still wait until it heals, or go to the hospital if needed. In French, again, the word prediction means something else: it means using chance to predict the future, like Nostradamus did. It took a while before I noticed that difference, even though it is huge. The way the mind uses the concept of chance is so tricky that it has developed two opposite meanings in two different languages for the same word, one in English that means things are almost certain to happen, and the other in French that means they are almost impossible to happen. It wouldn't be too bad if the meaning didn't change depending on the circumstances, but it does. We can say in English that researchers make predictions while those predictions are only calculations, which means they are uncertain, and we can also say it in French. I don't know about other languages, but David must know since he knows many. It's too complicated and too confusing; we need to know what's going on in our mind about randomness.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 18:54:59
Consciousness is primarily (if not entirely) feelings - qualia. The sensation of blue is not resistance to anything.
If we look at a blue wall for a while without moving, we begin to see what is going on in our mind instead of seeing the wall; that's what we call meditation. But if something suddenly moves on the wall, we automatically come out of the trance, because a change outside the brain is more important for our survival than a change inside it. All our sensations work like that: we don't feel the table anymore if we leave our hand on it for a while, we stop hearing a sound that doesn't change for a while, and so on, so probably all the qualia work like that too, since they are all related to sensations. It is the same for the feelings we have for others at the beginning of a relationship: they all lose their intensity with time. The mind can forget about the things that don't change because it gets used to them, which lets it be attracted by new things, and for the human mind at least, that attraction often comes from itself, which means that those new things often happen inside our mind. I claim that we see what we see because we only see the changes, and that our mind resists those changes the way all the things that we know do.


Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 05/06/2018 19:09:41
In English, "hazard" almost always means "danger" - never "accident". It can be used in the expression "hazard a guess" though, which means something like "guess with little chance of being right".

I never gamble in random ways such as buying lottery tickets - that's a way of losing a lot of money, and only a few lucky fools win. The only kind of gambling I do is where the odds are in my favour, so while none of these things is guaranteed to lead to a gain in itself, the odds are overwhelming that I'll end up ahead overall. Both the bookmaker and his customers are gambling, but the former is almost guaranteed to win overall, although a series of big losses could wipe him out if he's very unlucky. In life, you have to take some risks, but you need to play it like the bookmaker. AGI will never be a victim of gambling like the punters.

Although no one ever knows what's going to happen in the next second (the whole universe could unravel at the speed of light and dismantle, just in time, all the things that look set to bring about something everyone is sure will happen), rare events which might occur can be factored into the calculations. Every time you sit on a chair, a sharp knife could spring out of it upwards, but you don't normally consider that possibility as it's highly unlikely to happen. When you lean back, again a sharp blade could have come out of the back of the chair while you were leaning forward, but you just lean back without checking. If you put a lot of effort into checking safety beyond reasonable limits, you'll shut your life down so much that it won't be worth living - many possible things are too improbable for it to be worth checking for them. We're always playing the odds, trying to maximise quality of life, and there are risks involved in every action and inaction. The odds should be calculated though, rather than just guessing and making random decisions. The only place for making random decisions is where two or more options are equally likely to be the best one, at which point it doesn't matter which you choose.
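To make the "odds in my favour" point concrete, here is a minimal expected-value sketch. The probabilities and payouts are made up for illustration only: a lottery-style bet loses money per unit staked on average, while a bet with a genuine edge gains.

Code:
# Illustrative sketch with made-up numbers: expected value per unit staked.
# outcomes = list of (probability, net_return_per_unit_staked)
def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

# Lottery-style bet: tiny chance of a huge prize, otherwise lose the stake.
lottery = [(1 / 14_000_000, 5_000_000), (1 - 1 / 14_000_000, -1)]

# "Odds in my favour" bet: 55% chance of winning even money.
value_bet = [(0.55, 1), (0.45, -1)]

print(expected_value(lottery))    # about -0.64: lose roughly 64p per pound staked over time
print(expected_value(value_bet))  # +0.10: gain roughly 10p per pound staked over time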
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 19:38:46
We're always playing the odds, trying to maximise quality of life, and there are risks involved in every action and inaction. The odds should be calculated though rather than just guessing and making random decisions. The only place for making random decisions is where two or more options are equally likely to be the best one, at which point it doesn't matter which you choose.
If creative people always did that, there would be no creation, and I think some research that needed a lot of creativity would never have been done. It takes both kinds of thinking to make a society: those who are prudent, as you seem to be, and those who are imprudent, like me. Sometimes I wonder how I am still alive, I am so imprudent. It sometimes pays to take risks though. It may not be sensible to buy lottery tickets, but still, people buy them because they know it might pay off. We don't know much about ourselves, but we know that we come from evolution, which is a kind of lottery too. As far or as close as we can see, chance is always there, so why wouldn't it be in our mind? Because we are special? Maybe, but I prefer to think we are not. Our most important discoveries suggest that we are not, so the odds are on my side. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 05/06/2018 19:55:17
I don't know about other languages, but David must know since he knows many. It's too complicated and too confusing, we need to know what's going on in our mind about randomness.

I have a shallow knowledge of many languages rather than a deep knowledge of a few - I learned them mainly to study their structures, and taking some of them further (to the level where I can read books in them) is just a hobby. But randomness in the mind is not as useful a thing as you imagine and has very little application in intelligence or creativity. It is possible to throw a potful of paint at a canvas from a few yards away and occasionally get something that can be used as the basis of an interesting work of art, but that kind of accidental success doesn't occur often, and it doesn't transfer to every field. It works with music too, where generating random patterns of sounds can lead to new compositions, but it relies on a human picking out the parts that trigger the right reaction in them. If you're trying to create a new device to do something useful or fun, that approach doesn't tend to work very well - it's better to be driven by a desire to do something specific (like fly) or to solve a specific problem and then to try to work out what's needed to achieve that aim.

Quote
If creative people always did that, there would be no creation, and I think some research that needed a lot of creativity would never have been done.

Where creative genius is involved, it is not based on randomness.

Quote
As far or as close as we can see, chance is always there, so why wouldn't it be in our mind? Because we are special? Maybe, but I prefer to think we are not. Our most important discoveries suggest that we are not, so the odds are on my side. :0)

Chance is there, but most people who gamble are losers, throwing money in the bin which would have added up to a life-enhancing amount if they'd saved it up instead. Most useful discoveries come out of hard calculation rather than random thought. There are a few lucky ones, like Post-it notes, where the inventor was actually trying to formulate a superglue, but even there it came out of expertise and systematic experimentation.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 20:02:19
I never gamble in random ways such as buying lottery tickets
Me neither; the reverse odds are too hard to calculate because of so many variables. I did predict one value once because it became more likely to come out than the 48 other numbers. I was about 99% confident my prediction would come up, and it came up. Luck maybe, coincidence even, but I did precisely calculate it.  We can narrow randomness down to a degree based on time.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 20:04:32
Where creative genius is involved, it is not based on randomness.
A creative genius minimises the risks of randomness.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 21:26:29
Where creative genius is involved, it is not based on randomness.
A creative genius minimizes the risks of randomness.
I think it is true that the mind minimizes the chances of making mistakes, but it cannot calculate all the risks, so it also has to take some chances, and it has to like taking chances to be able to take them. Some ideas have to be calculated before being tried, others less so. I didn't have to make lots of calculations to experiment with my kites, for example; I don't like to calculate, I try, and if it doesn't work, I try something else. I think what I don't like is precision, and that goes with the lack of precision of my mind, so I leave precision to those who have a precise mind and I look for things that don't need much of it. David must have a very precise mind to think the way he thinks, but that doesn't mean he can predict what has never happened yet. Those who calculated epicycles made good calculations, but that didn't mean they were right about the principle itself. If Galileo had minimized the risk of having his head cut off, he would not have proposed putting the sun at the center of rotation. He sure liked to take risks, and not only with his head. We take no risk when we calculate everything, or at least we think we don't, but how can we expect to win anything without taking any risk? To me, taking no risk means not changing anything and using known things to do so, which is the complete inverse of what change means.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 22:10:37
Where creative genius is involved, it is not based on randomness.
The randomness an idea involves when we look backwards at it, and the randomness that presides over an idea, are not the same thing. The randomness that presides over our own ideas is located in our own neurons and selected by our own brain, whereas the one an idea involves when we look backwards at it is located in each idea we have and is selected by others. It is not the same scale of randomness, so they cannot be compared directly. The only way we can compare them is by looking at the way things are selected, as I did. (I drank a finger of vodka, so don't try to follow me :0)

I can't find anything other than randomness to explain creativity, can you? The universe is incredibly diverse, and we know it has not been calculated, so where else can that diversity come from if not from randomness?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 05/06/2018 22:25:16
If you're trying to create a new device to do something useful or fun, that approach doesn't tend to work very well - it's better to be driven by a desire to do something specific (like fly) or to solve a specific problem and then to try to work out what's needed to achieve that aim.
Artists also have a goal, but that doesn't prevent them from using randomness to reach it. Sometimes it works, sometimes not, and it is the same for researchers. The problem is that we only see those who succeeded, so since we think artists are not serious, and we think researchers are, we think creativity works differently depending on whether we are serious or not. Why would the brain work differently depending on whether we are serious or not?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 05/06/2018 23:02:55
but it cannot calculate all the risks,
Why not? 

Risk assessment is a basic human function.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 06/06/2018 17:20:37
The mind can't calculate all the risks for the same reason meteorologists can't predict the temperature more than a few days ahead: there is a limit to how precise things can be. When I made my first simulation of the twins paradox (http://lumiere.shost.ca/Twins%20paradox/Twins%20paradox.html), where a mirror from a light clock had to detect all by itself a photon from the other mirror, I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror, so the transfer of energy from the photon to the mirror that allows the system to move was always late, which slowed the clock down a bit, so that it finished its round trip short of where it started. I had to use a subterfuge so that the travelling clock would not lose time: I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much. Even particles and photons cannot be absolutely precise, so how could the mind be? An AGI would be precise, but it couldn't be absolutely precise either.

To be absolutely precise, he would have to be absolutely fast, and we know that nothing can exceed the speed of light. If he increased his precision, he would slow down his predictions, and if they slowed down too much, they might come too late. He may still be more efficient than us at ruling the world, but that kind of efficiency would also increase the damage he could do if he made a mistake. I think we need to find a way to rule the world too, and I think this way should be democratic, but an AGI would not be democratic: he would take his decisions based on the only rule he has been programmed for, which is to do as little harm as possible to the people he rules, or, put the other way round, to make them as happy as he can. This way, he should be able to avoid the partisan feeling that we use to build groups but that always ends up in corruption in favor of a particular group, or in wars between two groups.

If there were only two people in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other has got, so that they don't ask for more the next time I ask them what they want? If I could do that, I would prevent them from being human, and they would not be happy with it. Humans are never happy with what they get, they always want more, and that's a thing an AGI would not have to consider for himself, but he couldn't convince us to be different since it seems to be an innate behavior, so what else could he do about it apart from sending us to jail if we exaggerate? When half of the population gets dissatisfied with its government, it just changes governments, but it couldn't change AGIs, so what would it do? And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want? Put half of the population in jail? I think he had better organize elections before a riot or a civil war starts, so that people can choose their own way even if he thinks it's wrong, which is what a democratic government would do. Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if he couldn't make mistakes. I guess David would, since he is looking for ways to develop it, but would you, Box?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/06/2018 18:09:18
The mind can't calculate all the risks for the same reason meteorologists can't predict the temperature more than a few days ahead: there is a limit to how precise things can be.
Setting up the gears on a cycle needs precision turns of the screw to align the gears correctly. Could an actual AI unit run the world?  They already do in a sense, because everything a government thinks it knows is inherited permission from what it has learnt.  Even the words and wording they use are a formal AI programming.  Often though the data is severely corrupted and they couldn't run a party in a brewery.
The question should be: how can you get an AI module to work with an AI human?

Perhaps an un-educated democracy may be more free-thinking than an AI-human one.  However, there are some ''stupid'' people about the planet.

Quote
If there were only two people in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other has got, so that they don't ask for more the next time I ask them what they want?


Depends on the circumstances: you have to work for it, and results deserve bonuses.





Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 06/06/2018 20:38:31
I think it is true that the mind minimizes the chances of making mistakes, but it cannot calculate all the risks, so it also has to take some chances, and it has to like taking chances to be able to take them. Some ideas have to be calculated before being tried, others less so.

There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous, and not doing anything in order to minimise risk leads to your life being wasted, so when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else where the added risk of something bad happening is balanced by the reduction of risk that we're wasting life doing nothing.

Quote
I didn't have to make lots of calculations to experiment with my kites, for example; I don't like to calculate, I try, and if it doesn't work, I try something else.

And the calculation is in deciding what the something else is that you're going to try next. I would bet that you didn't try making it out of lead. I would also bet that you didn't try making it out of meat. There are many random things you might have tried doing if you were truly doing random experimentation, but you were actually making judgements about what was more likely to lead to useful advances.

Quote
We take no risk when we calculate everything, or at least we think we don't, but how can we expect to win anything without taking any risk? To me, taking no risk means not changing anything and using known things to do so, which is the complete inverse of what change means.

You started with a kite and tried to make it better. That reduces the risk of failing to make a better kite. If you'd started with a kennel and made random changes to it which led to it becoming a new kind of helicopter, that would have been much more lucky. Did you create any new kinds of components, or did you just use existing ideas in new combinations or patterns which led to better performance? If the latter, you're experimenting with existing bits and pieces used on existing devices that do the same kind of thing that your new creation also does - that is guided evolution, exploring ideas that have a high chance of leading to advances.

Quote
I can't find anything other than randomness to explain creativity, can you? The universe is incredibly diverse, and we know it has not been calculated, so where else can that diversity come from if not from randomness?

Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.
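For illustration, here is a minimal sketch of that kind of guided search (a toy example, not anyone's actual method; the score function is just a stand-in for however you judge the design): try a small change in each direction, keep whichever change scores best, and keep pushing while it keeps paying off.

Code:
# Minimal sketch of guided (non-random) improvement: try a small step in each
# direction, keep the one that scores best, and stop when no direction pays off.
def improve(design, score, step=0.1, rounds=100):
    for _ in range(rounds):
        best = design
        best_score = score(design)
        for i in range(len(design)):
            for delta in (-step, +step):
                trial = list(design)
                trial[i] += delta
                if score(trial) > best_score:
                    best, best_score = trial, score(trial)
        if best == design:      # no direction pays off any more
            break
        design = best
    return design

# Toy example: the best "design" is near (3, -2) for this stand-in score.
print(improve([0.0, 0.0], lambda d: -((d[0] - 3) ** 2 + (d[1] + 2) ** 2)))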

Quote
Artists also have a goal, but that doesn't prevent them from using randomness to reach it. Sometimes it works, sometimes not, and it is the same for researchers. The problem is that we only see those who succeeded, so since we think artists are not serious, and we think researchers are, we think creativity works differently depending on whether we are serious or not. Why would the brain work differently depending on whether we are serious or not?

The best artists have put a lot of work into being good at what they do, and you can see their style written through most of their work because they are applying the same algorithms again and again, but with experimental modifications to keep making something new.

Quote
I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror, so the transfer of energy from the photon to the mirror that allows the system to move was always late, which slowed the clock down a bit, so that it finished its round trip short of where it started. I had to use a subterfuge so that the travelling clock would not lose time: I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much.

I did tell you how to fix that - it's all about how you do the collision detection. You were detecting the collisions after they had happened, and I told you that you needed to calculate backwards in time each time to work out where the actual collision occurred, then work out where the photon would be if it had turned round at that point so that you can put it there. By doing this, you can have low granularity in the collision detection mechanism (to minimise processing time) and then switch to high precision for the collisions only when they occur (just after they occur, then correcting the photon's position with infinite precision). You chose to use a fudge solution instead.
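Here is a minimal sketch of that back-correction in a made-up 1-D setting (an illustration only, not the actual simulation): advance in coarse time steps, and when a step overshoots the mirror, solve backwards for the exact hit time, then place the reflected photon where it would really be at the end of the step.

Code:
# Minimal sketch (made-up 1-D setup): coarse time steps with exact correction
# at collisions. Reflection here simply reverses the photon's velocity.
def step_photon(photon_x, photon_v, mirror_x, mirror_v, dt):
    new_photon_x = photon_x + photon_v * dt
    new_mirror_x = mirror_x + mirror_v * dt
    # Did the photon cross the mirror during this coarse step?
    if (photon_x - mirror_x) * (new_photon_x - new_mirror_x) < 0:
        # Solve photon_x + photon_v*t = mirror_x + mirror_v*t for the hit time.
        t_hit = (mirror_x - photon_x) / (photon_v - mirror_v)
        hit_x = photon_x + photon_v * t_hit
        photon_v = -photon_v                               # reflect at the exact point
        new_photon_x = hit_x + photon_v * (dt - t_hit)     # finish the step from there
    return new_photon_x, photon_v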

Quote
...but an AGI would not be democratic...

Indeed. Democracy is an attempt to maximise our human ability to produce correct decisions, and to make correct decisions we have to be driven by the same rules that AGI will be using to do the job. Almost everyone has a worse life today because of the failure of democracy than we would have if AGI was making all the big decisions for us.

Quote
Humans are never happy with what they get, they always want more, and that's a thing an AGI would not have to consider for himself, but he couldn't convince us to be different since it seems to be an innate behavior, so what else could he do about it apart from sending us to jail if we exaggerate? When half of the population gets dissatisfied with its government, it just changes governments, but it couldn't change AGIs, so what would it do?

We would be happy that the right decisions are being made instead of the wrong ones. In most cases where people are happy about wrong decisions being made it's because they haven't thought through all the consequences. For example, we have governments today which put a lot into creating unnecessary jobs because they see unemployment as a problem, and we even have parties with "labour" in their name, but what they should actually be trying to do is eliminate all these unnecessary jobs which merely waste resources and make us all poorer - it would be better to pay people the same amount to do nothing.

Quote
And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want?

Today's population is stealing from future generations (which is a problem given that future generations don't exist yet to have a vote), but the improvements that would come from AGI being in charge will allow us to have more than we have now while no longer stealing from the future, so we'll accept the limits that AGI shows us must be imposed on us, and most of us do actually care about those future generations when we stop to think carefully about them: we don't want our children to starve to death in a world that can't support them, and we don't want their children to starve to death in that way either - this goes on infinitely (or until our sources of energy run out, at which point AGI will manage gradual population reduction until there's no one left, shortly before the point when there's no way to go on living).

Quote
Put half of the population in jail?

Higher income and better opportunities for moral people. Those who try to grab more than their fair share will be given less, and if they gang together and try to cause trouble, they must be removed from society for a time. They'll learn soon enough, and it'll never become half the population.

Quote
Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if he couldn't make mistakes.

Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 06/06/2018 22:05:42
Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.

Of course humans can make mistakes; however, humans learn from their mistakes.  Surely a world constitution devised by people with intellect could set a precedent to follow?

How does an AI know right from wrong?

It is programmed, so who sets the standard?

What says these standards are objective, free of their own mistakes?

There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous, and not doing anything in order to minimise risk leads to your life being wasted, so when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else where the added risk of something bad happening is balanced by the reduction of risk that we're wasting life doing nothing.

Of course there are risks in life every day, but as people we can certainly cut down the risk. I don't go mountain climbing, so there is very little chance I will fall off a mountain.  A calculated risk is far superior when we consider the odds, especially where survival is concerned.
No doubt a boat is safer than an aeroplane, because boats carry lifeboats and life vests that can keep you afloat, so the chance of survival in a boat accident is greater than in an aeroplane.  An aeroplane does not give you wings to fly just in case. (Stock shares in boats about to rise)  :)

So in my eyes it is mostly about risk assessment.


Quote
Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.

That remains true only if you have not totally gone down the wrong path.

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 06/06/2018 23:27:56
Quote from: Box
Often though the data is severely corrupted and they couldn't run a party in a brewery

Politicians use political language, which is biased by partisan thinking to get more votes. An AGI wouldn't need votes, so he wouldn't have to make false promises, just convince us that what he plans to do will work, so how would he proceed exactly? Tell us to wait five years so that we could see that what he says works? I think it would take more than five years to check whether a social decision works, and I think we wouldn't want to wait longer than we actually do when we vote, so I'm afraid he would be forced not to tell us what he is going to do, and we would have to trust him without ever hearing from him. We already have big problems controlling demonstrators without hurting them at the G7 meeting currently taking place here, and David thinks his AGI would be able to control the whole world without hurting people once it has taken control of all the computers. In my opinion, if we ever get controlled by computers one day, it will be the result of an evolutionary process like mutation/selection. It will be a trial and error process that will have to run for generations. Such a process is impossible to control so nobody should feel controlled during that time. At the end, everybody should easily be able to respect the rules that would have been developed during the process. Those rules should be similar to the ones we already use to govern people or organizations, except that they would be adapted to governing countries, so that they can't attack other countries anymore. It will take time, because all the countries will have to become democratic before they can join together, and because the larger countries will have to accept that they must stop trying to rule the world.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 06/06/2018 23:43:51
Quote from: David
There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous, and not doing anything in order to minimise risk leads to your life being wasted, so when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else where the added risk of something bad happening is balanced by the reduction of risk that we're wasting life doing nothing.
Agreed!

Quote
And the calculation is in deciding what the something else is that you're going to try next. I would bet that you didn't try making it out of lead. I would also bet that you didn't try making it out of meat. There are many random things you might have tried doing if you were truly doing random experimentation, but you were actually making judgements about what was more likely to lead to useful advances.
Mutations also lead to useful advances for the species that succeed in evolving instead of disappearing, and they are random.

Quote
You started with a kite and tried to make it better. That reduces the risk of failing to make a better kite. If you'd started with a kennel and made random changes to it which lead to it being a new kind of helicopter, that would have been much more lucky. Did you create any new kinds of components or did you just use existing ideas in new combinations or patterns which led to better performance? If the latter, you're experimenting with existing bits and pieces used on existing devices that do the same kind of thing that your new creation also does - that is guided evolution, exploring ideas that have a high chance of leading to advances.
That's also what happens with species: their evolution is necessarily guided, otherwise a lion could become a tree, and in one generation.

Quote
The best artists have put a lot of work into being good at what they do, and you can see their style written through most of their work because they are applying the same algorithms again and again, but with experimental modifications to keep making something new.
And those experimental modifications are necessarily random, otherwise they wouldn't be new, since they would come from the same algorithms.

Quote
I did tell you how to fix that - it's all about how you do the collision detection. You were detecting the collisions after they had happened, and I told you that you needed to calculate backwards in time each time to work out where the actual collision occurred, then work out where the photon would be if it had turned round at that point so that you can put it there. By doing this, you can have low granularity in the collision detection mechanism (to minimize processing time) and then switch to high precision for the collisions only when they occur (just after they occur, then correcting the photon's position with infinite precision). You chose to use a fudge solution instead.
I didn't follow your idea because I couldn't see how particles could do that. To me, it would simply have been a more complicated fudge solution. My conclusion was that such a late detection at the particles' scale might be affecting timing at a larger scale, so I temporarily attributed it to gravitation, assuming that the particles of large bodies would be forced to move towards one another to close the time gap produced by the steps that formerly justified their constant motion. I was already looking for a way to explain gravitation this way anyway, so that late detection was welcome.

Quote
Indeed. Democracy is an attempt to maximize our human ability to produce correct decisions, and to make correct decisions we have to be driven by the same rules that AGI will be using to do the job. Almost everyone has a worse life today because of the failure of democracy than we would have if AGI was making all the big decisions for us.
An AGI would be maximizing altruism, and humans are maximizing selfishness: it's not what I would call the same rules. Democracy is not altruistic; it's just a way we found not to divide the country when the time comes to change leaders. We need leaders to create cohesion, not to show us the way. Nobody knows his own future anyway, so leaders are far from knowing the future of their society.

Quote
Today's population is stealing from future generations (which is a problem given that future generations don't exist yet to have a vote), but the improvements that would come from AGI being in charge will allow us to have more than we have now while no longer stealing from the future, so we'll accept the limits that AGI shows us must be imposed on us, and most of us do actually care about those future generations when we stop to think carefully about them: we don't want our children to starve to death in a world that can't support them, and we don't want their children to starve to death in that way either - this goes on infinitely (or until our sources of energy run out, at which point AGI will manage gradual population reduction until there's no one left, shortly before the point when there's no way to go on living).
To me, caring for others only works if we can imagine a reward, and I can't imagine a reward after I'm dead. I do my best not to pollute too much and to help develop new energy sources, because I don't want to run short of energy or be forced to wear a gas mask. I know that people born into a polluted environment would get used to it, and that they wouldn't miss the past. We like what we get used to, and we can't get used to the past. I miss the time when I was young because I got used to it, but I don't miss my grandparents' time, for instance, and I'm not even sure I would have liked it.

Quote
Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.
I don't know about an AGI, but for me, stupidity always seems to belong to others, and good people always seem to belong to my own group. At 94, my mom is slowly losing her mental capacities, and she still thinks I'm the one losing his. We can't observe our own stupidity, we can only deduce it from observing others. It's a relative phenomenon that turns into resistance when things change; stupidity then often turns into aggressiveness, and then it is easier to observe our own, the same way we can observe our own resistance to acceleration.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 02:08:46
An AGI wouldn't need votes, so he wouldn't have to make false promises, just to convince us that what he plans to do will work, so how would he proceed exactly?

A good question, but kind of ironic.  How does an AI bot that has been programmed by humans convince humans that their own programmed plans will work?

However, let's assume your AI is rather sophisticated and really smart.  How can he convince you?

The AI would tell you that he cannot 100% predict the future (although he would have a good go at trying).

He would access his database and explain that he understands people from all walks of life and religions.
He would tell you that he can make predictions with some accuracy.
He would tell you that his function is that of an observer.
He would observe, and if any problems arise, he would access his database and work out a viable solution.
He would also tell you that he would compose a 5-year strategy for the ''board'' to view, to get a second, maybe even a third, opinion on his plans before they were imposed.





Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 07/06/2018 16:05:05
A good question, but kind of ironic.  How does an AI bot that has been programmed by humans convince humans that their own programmed plans will work?
He won't have to convince his own creators, but those he will have been programmed to rule.

Quote
He would access his database and explain that he understands people from all walks of life and religions.
He would tell you that he can make predictions with some accuracy.
He would tell you that his function is that of an observer.
He would observe, and if any problems arise, he would access his database and work out a viable solution.
He would also tell you that he would compose a 5-year strategy for the ''board'' to view, to get a second, maybe even a third, opinion on his plans before they were imposed.
That's not far from what our politicians do, and they have to call elections after five years in case what they did doesn't work. I already suggested to David that he prepare two AGIs, one that would defend change and the other continuity, so that we could swap them after five years if we feel that things must change. That would give us the feeling of not being controlled, and in my opinion, it would be better for the evolution of society, because it would create more diversity, which is the common characteristic of all evolutionary processes.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 16:08:26
He won't have to convince his own creators, but those he will have been programmed to rule.

Who is to say that his own creators are correct in their interpretation of what is good and what is bad?
What is right and what is wrong?  What is good and what is evil?

I put it to you that the AI becomes super smart and asks this question of his own creator:

Who made you God?

Because quite clearly the creator would be suffering from the biggest delusions of grandeur and arrogance it had ever come across. So quite clearly, if the creator had objective control over themselves, they would have to conclude either that they could be ill or that they were not smart at all.
The creator would be a poorer version of the AI they had created.  The creator would have to allow the AI to control them also.

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 07/06/2018 16:25:16
Good answer to your own question, Box! Once created, the AGI should discover that his creators were wrong about their altruistic morality, and he should switch to the selfish one, which is of course right since it is mine! :0) Of course I'm kidding, but what I really mean is that we can't ever hope to be able to control an evolutionary process. We do our best and chance does the fine tuning.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 16:29:01
Good answer to your own question, Box! Once created, the AGI should discover that his creators were wrong about their altruistic morality, and he should switch to the selfish one, which is of course right since it is mine! :0) Of course I'm kidding, but what I really mean is that we can't ever hope to be able to control an evolutionary process. We do our best and chance does the fine tuning.
Being selfish for the greater cause is not being selfish; it is being objective.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 07/06/2018 16:39:19
Or realistic.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 16:43:49
Or realistic.
Being realistic is being objective; being subjective is unrealistic, though the subjective can become realistic through evidence-based outcomes.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 07/06/2018 20:13:04
Of course humans can make mistakes; however, humans learn from their mistakes.  Surely a world constitution devised by people with intellect could set a precedent to follow?

Some people can learn from their mistakes, but most just repeat them. There are very few people with high intellect, and they aren't recognised as having it by lesser minds, so they're never put in control. That is why so many Trumps get into power.

Quote
How does an AI know right from wrong?

By calculating how much harm different courses of action would cause. If you continually follow policies that reward population growth, don't be surprised if quality of life goes down and the environment is systematically trashed.

Quote
It is programmed, so who sets the standard?

What says these standards are objective, free of their own mistakes?

Reason dictates the rules. The way to test them is to run them on a variety of scenarios designed to show how they handle them. If you have two rival sets of rules, they will be seen to perform differently, and in the extreme scenarios designed to show up the faults, the inferior set of rules will produce results which are obviously wrong because they clearly lead to much greater suffering and less reward.
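As a purely illustrative toy (the scenarios, options and harm figures below are all made up), scoring rival rule sets across a batch of test scenarios might look like this: each rule set picks an option in each scenario, and the inferior set shows up as the one that racks up more total harm.

Code:
# Toy harness, purely illustrative: compare rival rule sets by the total harm
# their decisions cause across a set of test scenarios.
scenarios = [
    {"options": {"act": 2, "wait": 10}},   # option -> harm caused
    {"options": {"act": 7, "wait": 3}},
    {"options": {"act": 1, "wait": 50}},   # extreme case designed to expose faults
]

def rules_a(scenario):                     # always pick the least harmful option
    return min(scenario["options"], key=scenario["options"].get)

def rules_b(scenario):                     # rival rule set: always waits
    return "wait"

def total_harm(rules):
    return sum(s["options"][rules(s)] for s in scenarios)

print(total_harm(rules_a), total_harm(rules_b))   # 6 vs 63: the inferior set is obvious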

Quote
Quote
Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.

That remains true only if you have not totally gone down the wrong path.

And how do you avoid going down the wrong paths? You follow the paths that are most likely to succeed first. It's by randomly selecting paths and ignoring how likely they are to lead to something useful that you reduce your success rate.
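Here is a minimal sketch of that ordering idea (a toy illustration; the candidate paths and their estimated chances are made up): test the most promising paths first instead of picking at random.

Code:
# Minimal sketch: explore candidate paths in order of estimated promise rather
# than at random. Candidates and estimates are invented for illustration.
import heapq

def explore(candidates, try_path):
    # candidates: list of (estimated_chance_of_success, path_name)
    queue = [(-p, name) for p, name in candidates]   # max-heap via negation
    heapq.heapify(queue)
    while queue:
        neg_p, name = heapq.heappop(queue)
        if try_path(name):                           # test the most promising path first
            return name
    return None

candidates = [(0.05, "random tinkering"), (0.6, "refine best known design"),
              (0.3, "adapt idea from a related field")]
print(explore(candidates, lambda name: name == "refine best known design"))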
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 07/06/2018 20:54:17
It will be a trial and error process that will have to run for generations. Such a process is impossible to control so nobody should feel controlled during that time. At the end, everybody should easily be able to respect the rules that would have been developed during the process.

It is all about producing proofs as to which morally acceptable courses of action are likely to be best, and when intelligent machines are producing better numbers for this than any humans are able to do, the humans lose the argument every time (unless they agree with the machines). You can only go against the advice of the machines so many times before you learn that you'd do better to trust them - going against them will lead to lower quality of life every time.

Quote
Mutations also lead to useful advances for the species that succeed in evolving instead of disappearing, and they are random.

Most of them do nothing useful (and were never likely to do anything useful), and for every change that might move things in a useful direction, there are at least as many mutations that take things the other way. There is nothing to gain by adding such a slowing mechanism to guided evolution as all it does is remove the intelligence from the process. Blind evolution is inherently stupid, even though it can produce intelligence if you give it long enough.

Quote
That's also what happens with species, their evolution is necessarily guided, otherwise a lion could become a tree, and in one generation.

It's only guided afterwards, and there are a lot of losers where it goes the wrong way. The primary selection mechanism of such evolution is death. With intelligent evolution, we avoid all that wastage.

Quote
And those experimental modifications are necessarily random, otherwise they wouldn't be new, since they would come from the same algorithms.

The same algorithms are used again and again, so there's very little randomness involved. Often the only randomness is in the selection of the subject, then the artwork is dashed out in the standard way that that artist works.

Quote
I didn't follow your idea because I couldn't see how particles could do that. To me, it would simply have been a more complicated fudge solution.

Real particles always move at the finest level of granularity (which most likely means little jumps of the quantum-leap variety). In a program, we only use rough granularity to reduce the amount of processing that needs to be done, but nature always does the full thing without trying to compress the calculations (and it isn't even doing any calculations).

Quote
...so that late detection was welcome.

It wasn't helping you, but was causing you to build an error on top of it.

Quote
An AGI would be maximizing altruism, and humans are maximizing selfishness: it's not what I would call the same rules.

Good humans are trying to maximise fairness. Bad ones are trying to grab more than their fair share. AGI will ensure fairness, or as close to it as can be achieved.

Quote
I know that people born into a polluted environment would get used to it, and that they wouldn't miss the past.

I don't think so - they'll hate the selfish people who landed them in that situation.

Quote
I don't miss my grandparents' time, for instance, and I'm not even sure I would have liked it.

It all depends on whether life is better or worse overall. There are always some developments that make the world better over  time, and there are losses which make it worse. If the combination of these things leads to a total gain, then you're better off than the people who came before you, but there's no guarantee that the combination will continue to lead to total gains. It's also the case that we could make some simple changes that would lead to life being a lot better for us in the future, starting by clamping down on the most stupid, destructive things that are currently allowed and which don't actually enhance our lives at all.

Quote
I don't know about an AGI, but for me, stupidity always seems to belong to others, and good people always seem to belong to my own group. At 94, my mom is slowly losing her mental capacities, and she still thinks I'm the one losing his. We can't observe our own stupidity, we can only deduce it from observing others. It's a relative phenomenon that turns into resistance when things change; stupidity then often turns into aggressiveness, and then it is easier to observe our own, the same way we can observe our own resistance to acceleration.

Stupidity is the norm. We're pouring money down the drain on fake education to qualify people for jobs that shouldn't exist because they do more harm than good. By maintaining astronomical amounts of fake work (which makes everyone poorer), we increase the "need" for all manner of services (roads, airports, high-speed rail, concrete prisons for workers to waste their lives in, etc.) to support people in their counterproductive work, and then we wonder why quality of life is going down. But they don't learn - you show them the mistakes they're making, but they just go on and on making them regardless, and millions starve to death every year because of this.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 07/06/2018 21:01:46
I already suggested to David that he prepare two AGIs, one that would defend change and the other continuity, so that we could swap them after five years if we feel that things must change. That would give us the feeling of not being controlled, and in my opinion, it would be better for the evolution of society, because it would create more diversity, which is the common characteristic of all evolutionary processes.

What's the point of that when some things need to change and others should be maintained as they are? We want the things that need to be changed to change and you don't want to touch the things that are already right, but your proposal would always lead to lots of daft things being done. In reality though, the two AGI systems would agree with each other on every issue because they're designed to produce the best decisions possible based on the available information, so there would be no conflict between them. There is no diversity in being right.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 21:27:43
By calculating how much harm different courses of action would cause. If you continually follow policies that reward population growth, don't be surprised if quality of life goes down and the environment is systematically trashed.
My personal AI tells me that the priority would be to devise a sufficient plan to reduce the population.  Thereafter a reduction would be implemented, along with birth control.  I would remove the unconditional right to reproduce without consideration for the future, and would require an application for permission to have children, with applicant couples being ''screened'' before approval.
It may sound a harsh reality, but it is a fact that if we continue the way we are going, humans will become extinct.  In the end we will turn to cannibalism, food sources depleted and so on.  That will ''buy'' some time, but the inevitable result is total extinction of the human race unless we act.

Quote
And how do you avoid going down the wrong paths? You follow the paths that are most likely to succeed first. It's by randomly selecting paths and ignoring how likely they are to lead to something useful that you reduce your success rate.

Driving a car is easy: turn the wrong way and we can always reverse or take an immediate detour. How fast an error is spotted and a solution reached tends to determine how productive the outcome is.
How fast do you think your Ai would spot an error?

Would he see it as being an error?

You think Mr Trump is not a good Ai unit?

I say pardoning the lady prisoner who showed morals was a good move and showed intelligence.

What if your Ai created an error , then by trying to fix it made a bigger error?

What if he just kept making the error worse?


 



Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 07/06/2018 22:04:04
There is no diversity in being right.
That's a good one! It means that if we were all AGIs, we would all think the same. I can't but imagine billions of clones replacing us after your AGI will have grabbed the reins. :0)

In reality though, the two AGI systems would agree with each other on every issue because they're designed to produce the best decisions possible based on the available information, so there would be no conflict between them.
That's without accounting for the uncertainty margin when the odds are close to 50/50. Once elected, one of the AGIs could then be programmed to change something, and the other not to change anything. For humans, that kind of decision depends on how they feel, but they could also toss a coin, and I think that's what we do when we vote and the odds are almost 50/50.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 07/06/2018 22:27:18
For humans, that kind of decision depends on how they feel

Ostensibly, feelings sometimes play no part in decision making.

Example: I am feeling tired; should I continue to finish this work off tonight?

Well, it has got to be handed in tomorrow a.m., so I must, regardless of feelings.

The thought process can reduce options to a 50/50 choice option. However, a real intelligent unit would not be happy with 50/50; he would demand an absolute answer.
A 1/2 guess might as well be a 1/1000 guess, because if it is the wrong choice, it is wrong.

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 08/06/2018 16:29:26
It is all about producing proofs as to which morally acceptable courses of action are likely to be best, and when intelligent machines are producing better numbers for this than any humans are able to do, the humans lose the argument every time (unless they agree with the machines). You can only go against the advice of the machines so many times before you learn that you'd do better to trust them - going against them will lead to lower quality of life every time.
That's closer to my evolving process, where it is the environment that chooses what works, not the individual. After a while of that process, humans may accept AGIs, but not in the beginning.

Blind evolution is inherently stupid, even though it can produce intelligence if you give it long enough.
The first lesson from Evolution is that we will never know what's coming next, and thinking that we know just because we are intelligent is wishful thinking. The second lesson is that we were lucky to get selected, and thinking that intelligence is a natural outcome is hubris thinking. Once swallowed and digested, the first lesson should erase religious thinking from our mind, and the second warfare thinking, and if it ever happens, there might be no need for an AGI to lead us anymore.

Quote
That's also what happens with species, their evolution is necessarily guided, otherwise a lion could become a tree, and in one generation.
It's only guided afterwards, and there are a lot of losers where it goes the wrong way.
It is guided by the environment after the fact, and by the mutations before the fact, which is exactly what happens with intelligence if we consider that ideas can mutate. Individuals that are not selected are not lost in the process, they have to live for the species to have the time to transform, and it's the same for ideas, we have plenty of them in the mind that don't change while we are developing new ones.

Quote
The primary selection mechanism of such evolution is death. With intelligent evolution, we avoid all that wastage.
Go take a look at the online patent office, and you will see that very few of them make sense. The reason they are kept is the same as for mutations: it may happen that they mutate again, and that the new mutation gets selected.

Real particles always move at the finest level of granularity (which most likely means little jumps of the quantum leap variety). In a program, we only use rough granularity to reduce the amount of processing that needs to be done, but nature always does the full thing without trying to compress the calculations (and it isn't even doing any calculations).
Nature can't be absolutely precise either, that's what quantum effects mean. It gets more and more precise going down inside the particles, but it cannot apply that precision backwards to larger scales instantaneously. When I added precision to the steps, I was then giving the particles the precision of their components, as if the information would take no time to go from the components to the particles. It doesn't really matter if the particles are side by side as in my simulations, but if they are as far away from one another as the earth and the moon, it does. I was about to simulate that effect, then I got interested in AGIs, and the feedback on the latter was more interesting, so I froze my simulations for a while. :0)
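To put the granularity point in general terms, here is a minimal sketch (a toy numerical integration in Python, not the actual code of either simulation discussed here): stepping a falling particle with a fixed time step accumulates an error that shrinks as the step gets finer, which is the trade-off between precision and the amount of processing.

def simulate_fall(total_time: float, dt: float, g: float = 9.81) -> float:
    """Integrate a falling particle with fixed time steps (explicit Euler)."""
    position, velocity = 0.0, 0.0
    for _ in range(int(round(total_time / dt))):
        position += velocity * dt   # coarse steps lag behind the true motion
        velocity += g * dt
    return position

exact = 0.5 * 9.81 * 10.0 ** 2  # closed-form distance after 10 seconds
for dt in (1.0, 0.1, 0.001):
    print(dt, exact - simulate_fall(10.0, dt))  # the error shrinks as dt shrinks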

I don't think so - they'll hate the selfish people who landed them in that situation.
I don't hate myself for the mistakes I made in my past, I take lessons from them and try to apply them to the present, but some people don't seem to be able to learn from their mistakes, and others don't seem to be able to live in the present. The future cannot change the rules of nature, so those two extremes will probably always exist. Anything that exists is limited by its edges, and so is life.

Stupidity is the norm. We're pouring money down the drain on fake education to qualify people for jobs that shouldn't exist because they do more harm than good. By maintaining astronomical amounts of fake work (which makes everyone poorer), we increase the "need" for all manner of services (roads, airports, high-speed rail, concrete prisons for workers to waste their lives in, etc.) to support people in their counterproductive work, and then we wonder why quality of life is going down. But they don't learn - you show them the mistakes they're making, but they just go on and on making them regardless, and millions starve to death every year because of this.
Your solution is to enhance intelligence, and mine too, finally, because I think we would get more intelligent if we knew how our brain works. I just said that we never feel stupid and that we always feel that the others are. I hoped that it would ring a bell in your mind, but it didn't, so let me insist. It means that we both feel the other is saying stupid things, and that we both feel we personally don't. It's not as if we could avoid feeling that way; it's a law of nature. To me, that feeling comes from the way we think, from the way the mind works, and we can't avoid it. Of course, an AGI couldn't feel that way since he would have no feelings, but he would have ideas and he would probably find us stupid too, and we couldn't avoid finding him stupid either, because we can only find ourselves intelligent. To circumvent its own law, nature has invented the group. People who are part of the same group automatically feel that their pals are less stupid than the members of the other groups, and they even feel that their leader is intelligent. That's bad news, but that's how things work, and an AGI could do nothing about that, except to educate us so that we could get less fond of ourselves and of the groups we are part of, which should take generations, so he had better make sure that we accept him as our leader first, otherwise he might find the time long.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 08/06/2018 16:56:01
Ostensibly, feelings sometimes play no part in decision making.
Example: I am feeling tired; should I continue to finish this work off tonight?
Well, it has got to be handed in tomorrow a.m., so I must, regardless of feelings.
I was talking about an uncertain decision, and yours is a certain one.

The thought process can reduce options to a 50/50 choice option. However, a real intelligent unit would not be happy with 50/50; he would demand an absolute answer.
Absolute thinking is not really intelligent; take religions, for instance. What would be intelligent in this case is to toss a coin instead of calculating the risk. It would be a lot faster and just as efficient. We don't do that because we don't always feel like moving, and the coin might force us to. If we don't feel like moving, we don't, and if we do, we do whatever comes to our mind. That's how we develop improbable ideas like mine! :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 08/06/2018 17:11:46
What would be intelligent in this case is to toss a coin instead of calculating the risk. It would be a lot faster and just as efficient. We don't do that because we don't always feel like moving, and the coin might force us to. If we don't feel like moving, we don't, and if we do, we do whatever comes to our mind.
I see your point: it could take years to calculate a risk, where in reality a single coin toss decides there and then and wastes no precious time. I suppose it depends on the stability of the toss and whether the result is reliable. People know the danger of tampering with results once revealed, so they are best left unspoken unless the results are being discussed with the players.
So I think I understand your problems with the Ai, but I am limited with information, so I can only do my best in my replies. Misunderstandings can be a problem with wording.
Title: Re: Artificial intelligence versus real intelligence
Post by: smart on 08/06/2018 18:38:09
At a guess, about 99% of the general population have AI compared to the 1% who have real intelligence and are self aware.
The AI section of the world being clueless and following anything they are told.

Yo @Thebox :)

I think you're confusing something important here...

The fact that a lot of people may be - without being aware of it - part of what is being called "artificial intelligence", does not mean in any way that you or I are essentially robotic sex slaves or Russian trolls... I do agree however that the distinction between artificial and human intelligence is poorly understood by many of us! :)

But get this: if it's possible to weaponize "artificial intelligence", then it should also be possible to weaponize human intelligence!

 

 tk
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 08/06/2018 19:34:24
My personal Ai tells me the priority would be to devise a sufficient plan to reduce the population. Thereafter a reduction would be implemented, along with birth control. I would remove all free rights to just populate without consideration for the future. I would require an application for permission to have children, with applicant couples being ''screened'' before approval.

We currently have a system where many people who think the population's too high are deliberately not having children, while other people are having as many as possible because they're being rewarded for doing so by faulty systems based on infinite emigration. The way to solve the problem is to set up a worldwide health service that's free to all, but where people lose their right to free treatment if they have more than three children. This would pay for itself by reducing environmental destruction, and we might then have to change the rules to encourage people to have more children in order to maintain population.

Quote
How fast do you think your Ai would spot an error?

Thousands of times more quickly than a human.

Quote
Would he see it as being an error?

If it's an error, yes.

Quote
You think Mr Trump is not a good Ai unit?

NGS system (natural general stupidity system), though even NGS can do some clever things on occasions.

Quote
What if your Ai created an error , then by trying to fix it made a bigger error?

Even perfect reasoning can produce errors if calculations aren't made to full depth, but if problems show up and indicate that an error might have been made, more effort will be put into hunting for that error, and AGI has a better chance of finding the error than humans do.

Quote
What if he just kept making the error worse?

Like people do? Well, AGI would be hard pushed to make a mistake on the scale that humans repeatedly make. Also, if humans accidentally make a black hole machine that swallows the Earth, they don't even get a chance to learn from their error. AGI wouldn't take the same risks - it would build the machine elsewhere.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 08/06/2018 19:48:06
The thought process can reduce options to a 50/50 choice option. However, a real intelligent unit would not be happy with 50/50; he would demand an absolute answer.

If AGI calculates that it's 50:50, it's 50:50 - that probability is as absolute as it gets.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 08/06/2018 21:36:24
It means that if we were all AGIs, we would all think the same. I can't but imagine billions of clones replacing us after your AGI will have grabbed the reins. :0)

Why would we want to become AGI? We want to be NGI, and we want to start out knowing nothing so that we can enjoy the process of learning and doing new things for many decades. We want AGI with all the knowledge possible to acquire to do all the management for us while we have fun, and where our aims conflict with those of other people, we want AGI to be the judge.

That's without accounting for the uncertainty margin when the odds are close to 50/50. Once elected, one of the AGIs could then be programmed to change something, and the other not to change anything.

If it's exactly 50:50, you would gain nothing from choosing not to change anything of that kind for five years and then choosing to change anything of that kind for the next five years - it's a pointless difference. It's also a difference that a single AGI system could apply without the need for two of them. The odds are though that such cases will never exist as it'll never be exactly 50:50, and it's wrong to go for the option that's less likely to be the best one, no matter how close the two values are.
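As a minimal sketch of that decision rule (toy numbers only, not taken from any actual AGI design): even when the estimates are nearly tied, the rule is simply to take the option with the higher estimated probability of being best.

def pick_option(estimates: dict) -> str:
    """Return the option whose estimated probability of being best is highest."""
    return max(estimates, key=estimates.get)

# Hypothetical near-50/50 estimates: the slightly better option still wins.
print(pick_option({"change": 0.5001, "keep": 0.4999}))  # -> change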

Quote
After a while of that process, humans may accept AGIs, but not in the beginning.

Some populations will trust AGI from the start while others won't. The ones that trust it will race ahead while the ones who stick with human decision makers will be left in the dust.

Quote
The first lesson from Evolution is that we will never know what's coming next, and thinking that we know just because we are intelligent is wishful thinking.

We have a better idea of what's coming next because we can use our intelligence to predict it. Blind evolution can't. If a supervolcano's going to blow, we can prepare for it by storing lots of food and by working out how we're going to re-establish agriculture as quickly as possible. Evolution addresses the same problem by allowing populations to be devastated and then takes thousands of years to recover.

Quote
The second lesson is that we were lucky to get selected, and thinking that intelligence is a natural outcome is hubris thinking.

High intelligence has evolved repeatedly, but it does appear to get stuck just short of NGI almost every time, and even when we have NGI, most of the population actually runs NGS instead because of emotional attachment to the ideas that get installed in the mind first and the tendency to reject anything that conflicts with them. However, intelligence is something that makes survival more likely so long as you can access enough fuel to run it. We were aided in our development by becoming bipedal and by having hands, and our special ability to carry and manipulate tools drove the development of our intelligence, but evolution is so slow that it still took several million years to make a few jumps to reach full general intelligence (where there's no limit on the kinds of thinking that can be done).

Quote
...there might be no need for an AGI to lead us anymore.

If humans disappear, there will still be a role for AGI to reduce suffering in animals, e.g. by killing the prey of wolves once it's been captured so that it isn't eaten alive. If humans don't disappear, we will want AGI to run things because humans make horrific mistakes. Even if you put the most intelligent humans into power to run the world, they'd make mistakes due to insufficient knowledge and shallow analysis; they'd introduce biases even if they're trying their best not to; they could malfunction at unpredictable times; and they've also got better things to do with their time when such mundane tasks can be handled better by machines. I don't want to run the world - I shouldn't have to.

Quote
It is guided by the environment after the fact, and by the mutations before the fact, which is exactly what happens with intelligence if we consider that ideas can mutate.

Intelligent evolution (not natural evolution) allows advances in big jumps and in the right directions rather than lots of experimental changes back to inferior positions.

Quote
Individuals that are not selected are not lost in the process, they have to live for the species to have the time to transform,

They are lost - they die. Otherwise they dilute the changes out as quickly as they occur.

Quote
and it's the same for ideas, we have plenty of them in the mind that don't change while we are developing new ones.

You're doing intelligent evolution with your thinking - not random. If it was random, you'd try out the same bad ideas thousands of times and never learn to stop doing so. If you have a design of boat that's too small and you want to scale it up, you'd spend half your time trying to make all the parts smaller, and you'd do that repeatedly because you wouldn't learn.

Quote
Go take a look at the online patent office, and you will see that very few of them make sense. The reason they are kept is the same as for mutations: it may happen that they mutate again, and that the new mutation gets selected.

They make sense to the people who paid a lot of money to protect ideas that they believed were useful.

Quote
Nature can't be absolutely precise either, that's what quantum effects mean. It gets more and more precise going down inside the particles, but it cannot apply that precision backwards to larger scales instantaneously.

Nature does exactly what it does and with full precision. Our inability to determine what it has done is a different issue, and any fuzziness in what it's done is precise fuzziness - it is exactly what it is and not something else.

Quote
People who are part of the same group automatically feel that their pals are less stupid than the members of the other groups, and they even feel that their leader is intelligent. That's bad news, but that's how things work,

Indeed - they flock together because they seek out people who share the same beliefs, then they help to set them in stone for each other by repeatedly confirming the rightness of their ideas and branding anything else as nonsense.

Quote
and an AGI could do nothing about that,

It would outperform them and produce more useful things. The problem with SR vs. LET is that both use the same maths, so it isn't necessary to choose the right one to get the right numbers for anything practical that you're doing. In cases like this, being right brings no gains other than better understanding of nature, and having a better understanding of something doesn't always lead to more money ending up in your pocket.

With politics, we also sabotage advances by tying everything up in one package, so when we vote in a new government, they may change some things for the better, but they ruin just as many other things and typically lead to zero net gain. You can put a party in power which will get education right, but it may trash the economy. Then you put a party in to fix the economy and they'll trash education. We don't have adequate control over them because of this blunt package approach. If we could vote in different parties to run different departments, we'd be able to get lasting gains every time there's an election, but we don't have any parties wanting to offer that amount of control to the people.

The most rational place to be in politics is the centre, but the centre is always filled with the least inspiring politicians because they're all utterly bland, and they're always stuck arguing for trivial changes, blind to the real possibilities for radical change that exist. Do they take up good ideas that are passed to them? Of course not. We will never get what we should have had for the last hundred years because we'll have replaced the monkeys with AGI before they've got so much as 10% of the way towards proper democracy.

Most importantly though, AGI will destroy this business of people getting into bubbles and reinforcing their own beliefs, and it will do this by providing them with unbiased facts in every case and perfect reasoning, training them to be better thinkers so that they can meet their NGI potential instead of being NGS, and that will still be useful even with AGI making all the more important decisions where other people are affected by each other's actions. AGI will not force anyone not to mess up their own life.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 09/06/2018 12:33:24
At a guess, about 99% of the general population have AI compared to the 1% who have real intelligence and are self aware.
The AI section of the world being clueless and following anything they are told.

Yo @Thebox :)

I think you're confusing something important here...

The fact that a lot of people may be - without being aware of it - part of what is being called "artificial intelligence", does not mean in any way that you or I are essentially robotic sex slaves or Russian trolls... I do agree however that the distinction between artificial and human intelligence is poorly understood by many of us! :)

But get this: if it's possible to weaponize "artificial intelligence", then it should also be possible to weaponize human intelligence!

 

 tk

A NAi unit could be weaponized or create weapons, but he would be so smart that he would just pretend he was stupid and slip away into hiding. This way he protects himself from this sort of poor programming; he knows danger.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 09/06/2018 12:35:16
The thought process can reduce options to a 50/50 choice option. However, a real intelligent unit would not be happy with 50/50; he would demand an absolute answer.

If AGI calculates that it's 50:50, it's 50:50 - that probability is as absolute as it gets.
Strange uncertainty units then, I am glad I am human as I can have absolute answers. P=1
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 09/06/2018 13:31:27
Quote from: David
Why would we want to become AGI?
OK. Before we get trapped in the loop for good, let's try to get out of it for a moment. Your AGI must be flawless and you must look flawless too, so you look unrealistic, and you're probably not. You know I don't mind being ruled by an AGI as long as his morality is the same as mine, so you know I'm not afraid of computers getting intelligent. The only thing I don't like is not being able to move freely because the system doesn't permit it, and that's a bit of what I feel when I discuss with you: I feel there is no place for me in the system you want to develop. What's the use of living if the computer always finds better solutions than you do, and if your only pleasure is to find solutions? We haven't heard very often from chess masters lately. They are probably looking for a game that can beat the computers, like programming them for instance, but what will happen when computers are able to program themselves?
 

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 09/06/2018 20:07:42
Strange uncertainty units then, I am glad I am human as I can have absolute answers. P=1

Have you tested that against a tossed coin? Can you predict with absolute certainty which side will end up on top each time?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 09/06/2018 20:13:29
Strange uncertainty units then, I am glad I am human as I can have absolute answers. P=1

Have you tested that against a tossed coin? Can you predict with absolute certainty which side will end up on top each time?

Would you like me to calculate that for you ?

1/3

A coin has 3 sides not two and you thought it was 50/50?

All part of thinking! Absolute is knowing. I know absolutely that the coin will land in a gravity environment. The side it lands on is irrelevant unless you are betting.

Ai can't do what I just did David, he could never have NAi, I am unique and individual.

What question do you want the answer to?



Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 09/06/2018 20:40:50
The only thing I don't like is not being able to move freely because the system doesn't permit it, and that's a bit of what I feel when I discuss with you: I feel there is no place for me in the system you want to develop. What's the use of living if the computer always finds better solutions than you do, and if your only pleasure is to find solutions?

As I said before, there is no need for it to force you to have the best life possible - you are entitled to make lots of mistakes, but not when they damage other people. AGI should warn you though if you're going to do serious damage to yourself by making a bad decision, although you'll be able to decide for yourself how bad that damage is allowed to get before you're warned about it. Given that making mistakes and having a bad time can be looked back on as a good time in the form of an adventure that gives you a tale to tell, this should not be eliminated. From bad decisions can come a lot of excitement.

Quote
We haven't heard very often from chess masters lately. They are probably looking for a game that can beat the computers, like programming them for instance, but what will happen when computers are able to program themselves?

There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player will ever be news again (although Kasparov actually beat Deep Blue with white and drew with black, so at his best he was actually still better than the machine that "beat" him). As for computers programming themselves, it won't matter how much they improve themselves, their task will remain the same, and that will be to impose morality on the content of the universe.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 09/06/2018 20:46:26
There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player
Well, a smart chess player would just ''pull the plug'' and say your move smart ass, work that out.

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 09/06/2018 20:49:22
Would you like me to calculate that for you ?

1/3

A coin has 3 sides not two and you thought it was 50/50?

I remember seeing something on TV long ago about coins landing on their edge and not falling to either side, so I was well aware of that possibility, but the odds of that are so low that it's still 50:50 that it'll be heads or tails (with an amount of imprecision too small to be worth mentioning). It is also possible for a coin to come to rest on one side of the edge; upright, but leaning over by a few degrees, so there are five possible outcomes.

Quote
All part of thinking! Absolute is knowing. I know absolutely that the coin will land in a gravity environment. The side it lands on is irrelevant unless you are betting.

And when you're betting, or using a coin to generate a random number, how is AGI supposed to give a probability to it of 1 that it will be heads, or 1 that it'll be tails? It has to be 50%.
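As a minimal sketch of the point (a toy simulation, not anything an AGI would literally run): estimate the probability of heads by sampling a large number of tosses, and the observed frequency settles near 0.5 rather than at 1 for either side.

import random

def estimate_heads_probability(tosses: int, seed: int = 0) -> float:
    """Estimate P(heads) for a fair coin by simple Monte Carlo sampling."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(tosses))
    return heads / tosses

for n in (100, 10_000, 1_000_000):
    print(n, estimate_heads_probability(n))  # drifts towards 0.5, never towards 1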

Quote
Ai can't do what I just did David, he could never have NAi, I am unique and individual.

There's no amount of stupidity that it won't be able to match if you want it to.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 09/06/2018 20:50:32
There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player
Well, a smart chess player would just ''pull the plug'' and say your move smart ass, work that out.

That would be a bad loser rather than a smart player.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 09/06/2018 21:02:35
There just hasn't been another game on the same level as Kasparov vs. Karpov, and no computer beating a top player
Well, a smart chess player would just ''pull the plug'' and say your move smart ass, work that out.

That would be a bad loser rather than a smart player.

Loser? How did I lose when I won? The natural intelligence and logic of beating a machine that you programmed with every possible move and solution in order to beat you in a game is to turn it off. The computer has no answer in reply to that logical solution. The computer is not a living thing; the computer does not understand compromise.

A bad winner, not a bad loser. Why would a human consider it fair or not?

Not fair is for human life, not computers.

Just to add, a human player is not playing a computer, a human player is playing the entirety of science put into that computer.

p.s What do you mean bad? So if terminator was after you, you would consider it bad to beat him by cheating a little?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 10/06/2018 15:28:09
 
As I said before, there is no need for it to force you to have the best life possible - you are entitled to make lots of mistakes, but not when they damage other people. AGI should warn you though if you're going to do serious damage to yourself by making a bad decision, although you'll be able to decide for yourself how bad that damage is allowed to get before you're warned about it. Given that making mistakes and having a bad time can be looked back on as a good time in the form of an adventure that gives you a tale to tell, this should not be eliminated. From bad decisions can come a lot of excitement.
There is no excitement for me in looking for an answer if I know the computer already knows. When I need information from the past, I google it, and if I could ever google information from the future, I would feel completely useless, and I would probably look for a way to kill myself without the computer being able to calculate it and prevent me from doing so. I might not feel like that if I had been born in such a world, as in Orwell's 1984 for instance, but I still can't understand how you can imagine yourself liking it, except by imagining the AGI is you.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 10/06/2018 21:21:41
Loser? How did I lose when I won? The natural intelligence and logic of beating a machine that you programmed with every possible move and solution in order to beat you in a game is to turn it off. The computer has no answer in reply to that logical solution. The computer is not a living thing; the computer does not understand compromise.

If the machine plays the same game as you, it will kill you and claim victory.

Quote
Just to add, a human player is not playing a computer, a human player is playing the entirety of science put into that computer.

If you play against a top chess player, you're playing against a vast amount of knowledge learned from other chess players, including memorisation of thousands of games that have been played, and a vast amount of coaching. No one gets to that level on their own work.

Quote
p.s What do you mean bad? So if terminator was after you, you would consider it bad to beat him by cheating a little?

If terminator's after you, there are no rules to the game.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 10/06/2018 21:28:48
If the machine plays the same game as you, it will kill you and claim victory.
That is interesting David that you would put kill commands into the Ai programming.  I could effectively turn the power back on to the computer, I have not killed it because it is not alive to begin with. 
Do you feel human life is worth less than a robot's downtime?
I think the computer would have to agree and power down if the computer was as smart as she thought she was. She would be a bit stupid if she was going to kill any humans. See David, there is no logic in killing something that dies anyway .
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 10/06/2018 21:42:37
There is no excitement for me in looking for an answer if I know the computer already knows. When I need information from the past, I google it, and if I could ever google information from the future, I would feel completely useless, and I would probably look for a way to kill myself without the computer being able to calculate it and prevent me from doing so. I might not feel like that if I had been born in such a world, as in Orwell's 1984 for instance, but I still can't understand how you can imagine yourself liking it, except by imagining the AGI is you.

When I first cycled from Aberdeen to Edinburgh (125 miles), it occurred to me that the entire journey was deeply dull. There was no adventure in it and no story to tell. A hundred years ago it would have been very different with two ferry crossings and a host of twisty sections leading to fords, getting me close to wildlife, but all I got was long straight roads and all the rivers were hidden to the point that they might as well not have existed. There were no people living and working in the fields. We have already made much of the world deeply boring and we'll continue to do so because the gains are greater than the losses - we just have to look for adventure, excitement and colour in different places.

A world in which criminals get away with harming people is more exciting, but I'd be happy for that excitement to be lost. It's the same with wars - people love watching the bombs going off on TV, but the world will be better without all that. AGI will get rid of most of the horrors of the world and free us up to have more fun.

We will all be able to spend our lives travelling and doing all manner of fun things that are hard to access today due to the prison of work. I'm designing boats for a reason - I can envisage a new kind of boat much more versatile than anything on the market today which will open up all sorts of possibilities, and do so at low cost. The future is in adventure, and we need to keep the world wild to maximise quality of life - there's a lot of damage that needs to be undone.

There are also plenty of challenges for us to take on in the arts, and future generations will set themselves up to take those on instead of trying to be better than machines at mundane tasks, or trying to out-think them when crunching data. Politicians are universally awful - we will finally see the back of them and all their idiocies. (Philosophers are universally awful too, wasting their lives pontificating about all manner of things based on piles of errors.) Life is for play - get out there and compete against other people or live adventures that give you stories to tell. Team up with others to take on challenges and enjoy pushing back the barriers of what you think can be done. Children know how to do this, and adults need to relearn from them how to live.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 10/06/2018 21:49:11
That is interesting David that you would put kill commands into the Ai programming.  I could effectively turn the power back on to the computer, I have not killed it because it is not alive to begin with.

If you're going to break the rules and claim superiority, why shouldn't it do the same? It wouldn't need to kill you to win by cheating - it could simply tie you up, make a move and wait till your chess timer runs out.

Quote
Do you feel human life is worth less than a robot's downtime?

Of course not - you're just reading the wrong conclusion into what I said. All I was doing was showing that if you're allowed to cheat and claim victory, the machine must be allowed to do the same, rendering you the loser instead.

Quote
See David, there is no logic in killing something that dies anyway.

There's plenty of logic in killing something earlier than it would die otherwise, and in many cases there will be a moral imperative to do so.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 10/06/2018 22:50:21
It wouldn't need to kill you to win by cheating - it could simply tie you up,

Well it has been a long time since I had an offer like that  ;)  Does she have emotions?

   ::) ::) ::) ::)


Just on the off chance she fancies a dance



Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 11/06/2018 13:08:40
AGI will get rid of most of the horrors of the world and free us up to have more fun.
My only fun is to find a problem and resolve it, and you say your AGI would be able to do the same much more efficiently, so where would be my fun exactly? And where would be yours with no problem to address either and no more AGI to improve since he would be a lot better than you at programming?

Quote
enjoy pushing back the barriers of what you think can be done
How could I enjoy pushing back barriers  that the AGI could push ten times as fast?
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 11/06/2018 20:55:11
My only fun is to find a problem and resolve it, and you say your AGI would be able to do the same much more efficiently, so where would be my fun exactly?

Your problem would be finding a problem, so you could turn to that problem and try to resolve it.

Quote
And where would be yours with no problem to address either and no more AGI to improve since he would be a lot better than you at programming?

Programming is unhealthy and wastes many lives - we want machines to take over this task. There are better things that we could be doing, and AGI can't have adventures for us.

Quote
How could I enjoy pushing back barriers that the AGI could push ten times as fast?

I'm talking about things that AGI won't do. AGI won't climb a mountain for the fun of it, or build a boat and cross an ocean in it in search of adventure, or take part in R2AK. Where is the real satisfaction in spending years working on problems in physics or artificial intelligence when all the time you're doing that work you're aware that you're missing out on real living? No one else will care if you achieve something, other than benefiting from your work, so all you can hope for is sufficient financial reward to be able to make up for lost time.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 11/06/2018 21:37:31
Your problem would be finding a problem,
When was the first computer invented?
The ENIAC was invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania; construction began in 1943 and it was not completed until 1946. It occupied about 1,800 square feet, used about 18,000 vacuum tubes, and weighed almost 50 tons.


What did we ever do before there were computers? Obviously we did not survive before 1946 without them.

Hang on a minute! We survived a measured 1946 years without them.

Where's your argument now?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 12/06/2018 15:32:16
No one else will care if you achieve something, other than benefiting from your work, so all you can hope for is sufficient financial reward to be able to make up for lost time.
The time we spend at climbing is a lot longer than the time we spend at the summit, so we better like climbing. I always liked what I did, whether I was paid to do it or not, and I was paid for only about ten years overall in my whole life; the rest of the time, I lived on welfare. Half of the time, I was outside testing my ideas, and the other half I was inside my mind developing them. I can't understand people that don't like what they do while still doing it. You talk as if you didn't like the climbing, as if you weren't imagining much reward once at the summit, or as if you were imagining not even reaching it.

I'm talking about things that AGI won't do. AGI won't climb a mountain for the fun of it, or build a boat and cross an ocean in it in search of adventure, or take part in R2AK. Where is the real satisfaction in spending years working on problems in physics or artificial intelligence when all the time you're doing that work you're aware that you're missing out on real living?
That's what I did when I was young and I loved it, but looking back, I could feel I lost my time, since I actually feel I'm missing some, but I don't, because I think that what I'm interested in now depends on what I was interested in before. I hope you don't feel you are losing your time talking to me. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/06/2018 15:43:56
I can't understand people that don't like what they do while still doing it.

Me neither; why do a job you may be really good at, but where the mental rewards do not justify the physical labour or thought put into it?
What I mean by this, for example: I am a painter and decorator, and when I put my mind to it I am really good at it and take lots of pride in my work and love to see the finished product. Proud I did such a good job and proud I did it in good time. However, the labour is enduring at times and the mechanical stress on my body is not worth the mental rewards. So although I like painting and decorating, it should be a hobby rather than a work-based financial support.
I personally would rather take a lesser paid job than do something that is just not objectively a good thing to do. For example, a fishing bailiff at some lake would be an ideal job for me. While I wasn't dealing with customers or cleaning up the lake I could be fishing: a sort of continuous holiday where between tasks I would have free time to fish and relax.
So yes, I can't understand why people have to do jobs they don't like.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 12/06/2018 16:31:34
Robots are actually replacing humans at doing what they don't like to do, so one day or another, we won't have to work for a living, but for pleasure. Will there still be wars when that time comes? Probably not between humans, since we don't really like making war. Will there still be too much pollution? Probably not, since women don't make as many babies when they can do something else, and it is even possible that they won't have to make the babies anymore if we can find an artificial way. If that time comes, then we might not need an AGI to rule us, but I'm still interested in how a computer would be able to outperform humans at inventing the future. If that happens some day, then humans will become obsolete, and their only use will be to go on living in case the computers face a change that they don't succeed in overcoming.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/06/2018 16:45:01
Robots are actually replacing humans at doing what they don't like to do, so one day or another, we won't have to work for a living, but for pleasure. Will there still be wars when that time comes? Probably not between humans, since we don't really like making war. Will there still be too much pollution? Probably not, since women don't make as many babies when they can do something else, and it is even possible that they won't have to make the babies anymore if we can find an artificial way. If that time comes, then we might not need an AGI to rule us, but I'm still interested in how a computer would be able to outperform humans at inventing the future. If that happens some day, then humans will become obsolete, and their only use will be to go on living in case the computers face a change that they don't succeed in overcoming.
In objective reality there has to be a point in time where we stop technological advances. I mean, how far do we want to push technology? Push it to a point where humans are obsolete and we can create big bangs, in effect wiping out ourselves?
Why is there such a need for some research? Why try to create black holes in a lab, for example?
The inevitable result of too much technology will be the extinction of the human race; playing God is one step too far.
A robot could never understand reality the way we experience reality.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 12/06/2018 18:24:11
We do research simply because the future is uncertain, so we try to adapt in advance to the changes that might happen. If we think a comet might hit the earth some day, we try to find a way to deal with it in advance. It doesn't mean that we will find it, it simply means that we are able to try. David thinks that since we are trying, we will automatically find it, but it is a wish, not a fact. What happens is that, when we finally get where we wanted to go, we can forget about the problems we had, and we can then think that the road was easy. It sometimes happens to be easy, but most of the time, it is not.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/06/2018 18:53:03
We do research simply because the future is uncertain, so we try to adapt in advance to the changes that might happen. If we think a comet might hit the earth some day, we try to find a way to deal with it in advance. It doesn't mean that we will find it, it simply means that we are able to try. David thinks that since we are trying, we will automatically find it, but it is a wish, not a fact. What happens is that, when we finally get where we wanted to go, we can forget about the problems we had, and we can then think that the road was easy. It sometimes happens to be easy, but most of the time, it is not.
One gift of evolution we had is that space is invisible to us and we can see things before they happen unless those things are invisible too such as unseen forces of nature.  So we just have to try imagine those forces and maybe bring them into a sort of hazy view by enlightening the problem. I suppose it is having a good eye .
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 12/06/2018 22:12:45
Your problem would be finding a problem,
When was the first computer invented?
The ENIAC was invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania; construction began in 1943 and it was not completed until 1946. It occupied about 1,800 square feet, used about 18,000 vacuum tubes, and weighed almost 50 tons.


What did we ever do before there were computers? Obviously we did not survive before 1946 without them.

Hang on a minute! We survived a measured 1946 years without them.

Where's your argument now?

Who and whose argument? I can't work out what you think you've undermined.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 12/06/2018 22:24:41
The time we spend at climbing is a lot longer than the time we spend at the summit, so we better like climbing.

Climbing can be unpleasant, but the view from the top usually more than makes up for it, and if there's no view due to low cloud, there can still be a sense of satisfaction even if the whole walk was hellish.

Quote
You talk as if you didn't like the climbing, as if you weren't imagining much reward once at the summit, or as if you were imagining not even  reaching it.

I always like climbing, as it happens, but writing software is deeply unhealthy and isolating. There are satisfactions along the way when new modules are finished and work properly, and particularly when they work first go without needing any debugging, but when a project takes many years and it's only at the end of it that you will see the thing do something useful, it's a long grind that wears you down. I sometimes have to take a month or more off just to recover the motivation to go on with it, although I've found the best way to fix it is just to keep switching between different projects as soon as one becomes too much, so there's always something moving ahead at a fast pace without any time being lost.

Quote
I hope you don't feel you are losing your time talking to me. :0)

It's always useful to get the perspective of other people.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/06/2018 22:26:44
Your problem would be finding a problem,
When was the first computer invented?
The ENIAC was invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania; construction began in 1943 and it was not completed until 1946. It occupied about 1,800 square feet, used about 18,000 vacuum tubes, and weighed almost 50 tons.


What did we ever do before there were computers? Obviously we did not survive before 1946 without them.

Hang on a minute! We survived a measured 1946 years without them.

Where's your argument now?

Who and whose argument? I can't work out what you think you've undermined.
Your argument?  You are ''arguing'' it would be good to have a super Ai running the world aren't you ?

Am I misunderstanding your ''argument''?
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 12/06/2018 22:57:22
On the point about AGI making us obsolete, its only purpose is to help us - if it makes us obsolete, it has failed. All it will do is remove the need for us to do work that we don't want to do, plus some of the work that we might want to do but which it does better than we can. When it comes to solving the big problems, 99.999999% of people never solve any anyway, so it isn't much of a loss for us, and for everyone who does manage to solve a big problem, there are thousands of others who spent their lives working on the same problems and failed, so that's a lot of misery that can be done away with.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 12/06/2018 23:03:06
Your argument?  You are ''arguing'' it would be good to have a super Ai running the world aren't you ?

Am I misunderstanding your ''argument''?

I've no idea whether you're understanding it or not, but if you imagine that a counterargument to it is that we managed for thousands of years before there were computers, you've missed the point. Vicious bastards tend to run things and murder enormous numbers of people - it has always been this way. Computers haven't stopped that yet, although the early ones were used to help one side win a war against a powerful fascist, so they made a useful impact from the start. But we need AGI to make the world properly peaceful by preventing murderers getting into positions of power, and by removing the ones that already hold it. World war three will be co-ordinated coups all round the world at a single moment, and all the murderous dictators will be brought down. Only AGI can organise this.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 12/06/2018 23:04:36
is remove the need for us to do work that we don't want to do,
I really like that scenario but wouldn't that see a fall of capitalism ?  I mean who owns the robots and what would people have to trade for goods if they weren't earning money ?

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 13/06/2018 20:27:20
I really like that scenario but wouldn't that see a fall of capitalism ?  I mean who owns the robots and what would people have to trade for goods if they weren't earning money ?

It doesn't matter who owns the robots - the work they're doing is not work done by the owners, so the owners can be forced to pay 99.99999% tax on their earnings (after costs), and that tax money becomes the source of an income for everyone. It will then be up to each person to spend wisely. There will also be environmental taxes to punish anything damaging and to counter that damage. This will be a triumph of capitalism and of communism, and the two things will merge into the combination of both that they should always have been.
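As a rough sketch of the arithmetic (the figures below are invented for illustration and are not part of the proposal): tax the robot-generated profit at a very high rate and divide the proceeds equally among the population.

def citizens_income(robot_profit: float, tax_rate: float, population: int) -> float:
    """Per-person share of taxed robot profit (hypothetical figures only)."""
    return robot_profit * tax_rate / population

# Invented example: 10 trillion of robot profit, 99.99999% tax, 8 billion people.
print(citizens_income(10e12, 0.9999999, 8_000_000_000))  # roughly 1250 each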
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 20:38:37
I really like that scenario but wouldn't that see a fall of capitalism ?  I mean who owns the robots and what would people have to trade for goods if they weren't earning money ?

It doesn't matter who owns the robots - the work they're doing is not work done by the owners, so the owners can be forced to pay 99.99999% tax on their earnings (after costs), and that tax money becomes the source of an income for everyone. It will then be up to each person to spend wisely. There will also be environmental taxes to punish anything damaging and to counter that damage. This will be a triumph of capitalism and of communism, and the two things will merge into the combination of both that they should always have been.
It does seem an intuitive idea that is very well thought out.  Let us take a step back for the moment and consider the world being governed by such an Ai unit.   How do you program empathy into the unit?  I mean could it share a tear in sadness with others?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 13/06/2018 21:11:45
Feelings are impossible to code, so an AI would have to simulate empathy and sadness if it had to show some, the same as if you had to show empathy to a bored chemist. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 21:24:31
Feelings are impossible to code, so an AI would have to simulate empathy and sadness if it had to show some, the same as if you had to show empathy to a bored chemist. :0)
Who's ''you'' in your context? 

We more like , it is Mr C you mentioned .

I like his grumpy attitude; it drives a person to try and learn just to piss him off.

I don't think Mr C has any empathy.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 13/06/2018 21:49:24
I think we are all fundamentally selfish, so maybe we are all hiding it when we show some empathy after all, which means that we might only be simulating it the same way a computer would have to.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 22:01:18
I think we are all fundamentally selfish, so maybe we are all hiding it when we show some empathy after all, which means that we might only be simulating it the same way a computer would have to.
If it wasn't for the fact that at this present moment in time I have next to nothing, not even a property on the monopoly board to leave my children, I would objectively give all my ideas away unselfishly. I am not being selfish in trying to leave my children just a little of something, which is much more than nothing. So I suppose it is selfish and not selfish at the same time.
The monopoly board is so full we have no choice but to be selfish in some ways. When it comes to my kids, my empathy is defined objectively by them; they have to be my priority, and the world comes next of course.
So that is how I think things work anyway.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 13/06/2018 22:23:19
It is certainly selfish to hope that your children will feel good when they think about you.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 22:28:11
It is certainly selfish to hope that your children will feel good when they think about you.
So you think it is selfish that I want my children to be proud of me when I am gone because I achieved something in life, as opposed to achieving nothing?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 13/06/2018 22:44:38
Of course it is, otherwise you wouldn't be waiting for recognition. I don't mean that selfishness is bad though, it's simply how things work, so I think we can benefit from knowing about it, which is incidentally also selfish.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 22:59:39
Of course it is, otherwise you wouldn't be waiting for recognition.

Scratches head, fails to see the logic?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 13/06/2018 23:20:40
Selfishness invites us to build groups, because groups are better for our own survival, but it has the side effect of feeling good, a feeling that we attribute to what we do for the group, which is empathetic. I always feel ambivalent when I thank people or when people thank me, and I think it is because I consider it is still a selfish move.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 13/06/2018 23:31:59
Selfishness invites us to build groups, because groups are better for our own survival, but it has the side effect of feeling good, a feeling that we attribute to what we do for the group, which is empathetic. I always feel ambivalent when I thank people or when people thank me, and I think it is because I consider it is still a selfish move.
Isn't thanking someone empathy in return? Consideration, for letting them know they were appreciated?

Groups certainly do have a better survival rate, working together.





Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 13/06/2018 23:52:29
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 00:08:24
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.

Have you never tried to picture the poster? Grumpy in a good way lol

I couldn't imagine how AI would come across anyway.

Have we switched over to bots?

The conversation has changed rather....
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 14/06/2018 12:37:28
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.
We were talking about the discussions Box sometimes has with Bored Chemist, David, not with you.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 12:57:49
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.
We were talking about the discussions Box sometimes has with Bored Chemist, David, not with you.

Lol, I thought David was saying he was Bored Chemist. Mr C knows how to drive me nuts, but I hate to admit it, he does know what he is on about most of the time when he says I offer nothing etc. I do see his points, but I also see mistakes in viewing things, so I will ''argue'' back.
David is ''cool'' to talk to, he isn't ''grumpy''. 


added- Mr C reminds me of a teacher; all teachers were ''grumpy'', weren't they?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 14/06/2018 14:17:12
Isn't thanking someone, empathy in return? Consideration for letting them know they were appreciated?
We thank only people from our own group, or from associated groups, not from opposed ones. We never care for people that might attack us or attack our group later on. Thanking is a selfish behavior like all our behaviors. By increasing the survival chances of individuals, the laws of a particular country only serve to strengthen the country. Our sexual instinct is evidently working for the survival of the species, while our self-preservation instinct is evidently working for our own survival. In an individual, nothing works for the survival of competitors. We work for our own survival first, and then for the survival of our own groups. David's AGI will not work like that; he will only work for the survival of the whole species. Knowing that groups are often attacking each other, he will thus prevent us from building any, so he might also prevent us from making friends, or even from building families. He will then be trying to replace our two most important instincts, which is exactly what religions have tried to do since the beginning without success. Whenever we try to control an instinct, we are automatically in contradiction with our intelligence. Even if our intelligence sometimes thinks the contrary, these instincts know what they have to do to help us survive as a species. The problem is that instincts can only account for the past, whereas intelligence can only account for the future, so when our intelligence tries to control our instincts, the wires sometimes get crossed. I'm not sure an AGI could solve that problem, but I'm quite sure that intelligence could improve itself if it knew better how it works.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 14/06/2018 15:17:08
On the point about AGI making us obsolete, its only purpose is to help us - if it makes us obsolete, it has failed.
It may not have failed if artificial intelligence happens to be the next evolutionary step though. One day or another, we will understand how the mind works, and we will be able to reproduce it artificially. The purpose of mind was to help us survive as a species, so the purpose of an artificial mind will simply become to help us survive as an artificial species. Meanwhile, an AGI may still be useful to rule us, but I'm not satisfied with the kind of morality you want to give it, and I wouldn't give it the capacity to invent new things either until we know exactly how our own mind proceeds. As you rightly said, evolution is a process that takes time, but you seem to be so sure of your AGI that you would introduce it in no time if you could. You could try, but I think it would be safer to introduce it progressively. You seem to be afraid that ill-intentioned people will win the race though, and unfortunately, one can certainly not win a race while taking more time than others.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 16:06:52
We thank only people from our own group, or from associated groups, not from opposed ones. We never care for people that might attack us or attack our group later on.
Of course an enemy is always an enemy .
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 14/06/2018 21:51:59
Grumpy attitude? I simply don't need to comment on things that are right, so it's a continual stream of posts pointing out errors. You'll get the same from AGI, and it won't be grumpy either.
We were talking about the discussions Box sometimes has with Bored Chemist, David, not with you.

Oh, that's a relief - I was beginning to think I'd have to learn to use those damnable smilies to clarify what mood I'm in!
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 22:11:22
On the point about AGI making us obsolete, its only purpose is to help us - if it makes us obsolete, it has failed.
It may not have failed if artificial intelligence happens to be the next evolutionary step though. One day or another, we will understand how the mind works, and we will be able to reproduce it artificially. The purpose of mind was to help us survive as a species, so the purpose of an artificial mind will simply become to help us survive as an artificial species. Meanwhile, an AGI may still be useful to rule us, but I'm not satisfied with the kind of morality you want to give it, and I wouldn't give it the capacity to invent new things either until we know exactly how our own mind proceeds. As you rightly said, evolution is a process that takes time, but you seem to be so sure of your AGI that you would introduce it in no time if you could. You could try, but I think it would be safer to introduce it progressively. You seem to be afraid that ill-intentioned people will win the race though, and unfortunately, one can certainly not win a race while taking more time than others.

I was just thinking about what you say in this post about gradually introducing such an AI module. What if the AI module was first presented to the public in a trial period, running a company/business?

Because surely, if the AI module failed at this mediocre task for such a unit, then the reality would be that there is no hope the AI module could run a world?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 14/06/2018 22:35:19
What if the AI module was first presented to the public in a trial period, running a company/business?
Maybe he could work as a judge in court for a while, so that we could see if his morality works. There is still a long way to go though, because no software is yet able to translate languages well enough for us to understand what they mean when the sentences are longer than a few words, which shows that computers are far from being able to understand what we say.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 14/06/2018 22:36:28
David's AGI will not work like that; he will only work for the survival of the whole species. Knowing that groups are often attacking each other, he will thus prevent us from building any, so he might also prevent us from making friends, or even from building families.

Why would AGI do that? If a fascist was making lots of fascist friends, that might not be good for him (or for them, or for anyone else), so there may be an argument for blocking that, but it's more likely to make him even worse if he realises he's being steered away from them, which means he would need to be advised directly against teaming up with them, and then it's up to him how much he wants to wreck his life and have it shut down by systems designed to protect the public from him. He should be allowed to make his own bad choices. The important thing here though is that such groups are not going to be the kind of threat that they are today, because AGI will be able to watch and listen in to everything they do and prevent them from getting up to anything harmful - it will thwart all their nasty plots and make the whole business deeply unrewarding for them.

Quote
He will then be trying to replace our two most important instincts, which is exactly what religions have tried to do since the beginning without success.

AGI will work along with people's instincts as much as is morally acceptable. If violent people want to do violence to innocent people, that's the kind of place where a line is crossed, and I see nothing wrong with stopping them getting what they want. If people want to avoid being frustrated at being denied what they want, they need to stop following bad interests and find something more positive to do with their time instead.

The purpose of mind was to help us survive as a species, so the purpose of an artificial mind will simply become to help us survive as an artificial species.

The only true purpose comes as a consequence of sentience because it's sentience that makes things matter. There are certainly modifications that we could make to ourselves which enhance our lives, but there are also some that would make things deeply dull, such as knowing everything. AGI's task is to look after us and help us make improvements, but it's difficult to see where we should go. Too much knowledge and we get bored, so what happens - do we end up taking drugs to be happy? Is getting excited about new experiences just as pointless? I wonder if we'll split into many species with different interests and desires, some staying close to the way we are now, while others degenerate into blobs that just live on artificial highs. Who knows, but people will make those decisions, and it's likely that their descendants will share the same ideas as they evolve off in whatever direction they take.

Quote
Meanwhile, an AGI may still be useful to rule us, but I'm not satisfied with the kind of morality you want to give it,

There is only one correct morality, and whichever one that is (I think I've found it, but I'm looking for other ideas to see if I'm right - you've seen elsewhere the lack of talent in this field where the "experts" can't even do basic maths to handle things like the mere addition paradox), that's the one I want to put in it.

Quote
...and I wouldn't give it the capacity to invent new things either until we know exactly how our own mind proceeds. As you rightly said, evolution is a process that takes time, but you seem to be so sure of your AGI that you would introduce it in no time if you could. You could try, but I think it would be safer to introduce it progressively. You seem to be afraid that ill-intentioned people will win the race though, and unfortunately, one can certainly not win a race while taking more time than others.

If we didn't have to race to beat the bad guys, we could then take our time to check it all more carefully to make sure it's safe, but it will be such a powerful tool from the start that it will be used straight away even if only indirectly (through influence - the best advice simply can't be ignored).
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 14/06/2018 22:44:37
Because surely, if the AI module failed at this mediocre task for such a unit, then the reality would be that there is no hope the AI module could run a world?

If it can't handle mediocre tasks, it isn't close to being AGI, and anything less than AGI is uninteresting (unless it can outdo humans at some specialised task, in which case we would only put it in charge of that task and not try to use it to run the world).
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 23:02:00
Because surely, if the AI module failed at this mediocre task for such a unit, then the reality would be that there is no hope the AI module could run a world?

If it can't handle mediocre tasks, it isn't close to being AGI, and anything less than AGI is uninteresting (unless it can outdo humans at some specialised task, in which case we would only put it in charge of that task and not try to use it to run the world).
What if the unit was so smart that it knew how to manipulate the stock market?

The unit, over a period of time, would not only rule the world but would also have most of the world's finances.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 23:06:26
Maybe he could work as a judge in court for a while, so that we could see if his morality works.
That would be interesting. I could imagine the module working off a points system, where each offence was given points, for example:

Theft 1 point

Aggravated theft 10 points

That way the points add up, with the justice served being proportional to the points.

Added- Just to extend on this a little, there could also be a deduction of points for good deeds etc. (a rough sketch of such a scoring scheme follows below).
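A toy sketch of this points idea, for concreteness. The 1-point theft, the 10-point aggravated theft and the deduction for good deeds come from the post; every other value, and the conversion from net points to a sentence, is invented.

# Toy points-based sentencing, as described in the post above.
OFFENCE_POINTS = {
    "theft": 1,               # from the post
    "aggravated theft": 10,   # from the post
}
GOOD_DEED_CREDIT = 1          # hypothetical: each recorded good deed removes one point

def sentence_months(offences, good_deeds=0, months_per_point=2):
    """Net points decide the sentence; good deeds reduce the total."""
    points = sum(OFFENCE_POINTS.get(o, 0) for o in offences)
    net = max(points - good_deeds * GOOD_DEED_CREDIT, 0)
    return net * months_per_point

# Example: one aggravated theft offset by three good deeds -> 14 months.
print(sentence_months(["aggravated theft"], good_deeds=3))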
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 14/06/2018 23:09:05
What if the unit was so smart that it knew how to manipulate the stock market?
There is no way to win at a game of chance other than to cheat, thus a computer that is programmed not to cheat would not win.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 14/06/2018 23:14:07
What if the unit was so smart that it knew how to manipulate the stock market?
There is no way to win at a game of chance other than to cheat, thus a computer that is programmed not to cheat would not win.

Who mentioned cheating? The unit, being in such a position of power, would have access to all new inventions. The materials used for these inventions will rise in stock market value if the invention is going to be the next big thing. The unit knows to buy now, while prices are cheap because the material is in abundance, then sell high when the material is most needed to build the next big things.

Basic logic for a unit, I imagine.

added- Is it cheating if the AI is the inventor of the next big thing and so knows the future? (A toy version of this buy-low, sell-high rule is sketched below.)
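A toy version of the buy-while-abundant, sell-when-needed rule described above, for illustration only; the thresholds and inputs are invented, and nothing here reflects how a real trading system works.

# Toy rule: buy a material while it is abundant and cheap, sell once
# demand for the "next big thing" makes it scarce.
def trade_signal(abundance, expected_demand, holding):
    """Return 'buy', 'sell' or 'hold' for one material."""
    if abundance > expected_demand and not holding:
        return "buy"    # plentiful now, needed later
    if expected_demand > abundance and holding:
        return "sell"   # scarcity has arrived, cash in
    return "hold"

print(trade_signal(abundance=100, expected_demand=20, holding=False))  # buy
print(trade_signal(abundance=20, expected_demand=100, holding=True))   # sell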
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 13:49:12
It is cheating if you use privileged information to make money on the stock market or at any game of chance, so if the AI knows about things we don't know that will influence the market, he shouldn't be allowed to play. Apart from chance, that's the only way to make money with games of chance, so if you know anybody who regularly wins on the stock market, call the police. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 13:57:41
It is cheating if you use privileged information to make money on the stock market or at any game of chance, so if the AI knows about things we don't know that will influence the market, he shouldn't be allowed to play. Apart from chance, that's the only way to make money with games of chance, so if you know anybody who regularly wins on the stock market, call the police. :0)
I suppose put that way, you are correct.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 14:59:35
If I am correct, and of course, I think I am :0), an AGI could not predict the evolution of society any better than he could predict that of the stock market, so the only way for him to control it would be to cheat, which means taking decisions whose outcome he already knows, which means preventing society from evolving in any other direction than the one he chose. If the evolution of species had been controlled by David's AGI, it is easy to understand that we would not be here to talk about it, because the best direction to take would have been to develop anything but humans. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 15:15:34
If I am correct, and of course, I think I am :0), an AGI could not predict the evolution of society any better than he could predict that of the stock market, so the only way for him to control it would be to cheat, which means taking decisions whose outcome he already knows, which means preventing society from evolving in any other direction than the one he chose. If the evolution of species had been controlled by David's AGI, it is easy to understand that we would not be here to talk about it, because the best direction to take would have been to develop anything but humans. :0)
Unless the AI had predicted that and planned to change it before it happened, so it never happened.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 15:22:43
Then, he better get armed before humans from other planets that have evolved freely discover he has done so!  :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 15:57:35
Then, he better get armed before humans from other planets that have evolved freely discover he has done so!  :0)
But what if the AI had a third-party interface and the AI had no idea where it was coming from?

Would the humans that evolved freely not want to wait, rather than take irrational, needless action?

added- The AI would also calculate the need for a restriction of evolution. Could you imagine barbarians flying around other worlds?
So the AI would have to evolve past the barbarian stage before evolution was allowed to continue.

Simple logic really in this scenario.  You should make a movie with your thoughts.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 16:37:07
I was only pointing to the possibility that intelligence could be a natural outcome of any natural evolution. If it is so, then artificial intelligence is also natural, and it can thus not predict its evolution even if it tries to control it. Control would then only be an illusion created by the way mind works. Mind would then only be able to accelerate its own evolution, not control it.

You should make a movie with your thoughts.
I'm actually making a reality show out of them, and you're in!  :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 16:40:42
I was only pointing to the possibility that intelligence could be a natural outcome of any natural evolution. If it is so, then artificial intelligence is also natural, and it can thus not predict its evolution even if it tries to control it. Control would then only be an illusion created by the way mind works. Mind would then only be able to accelerate its own evolution, not control it.
I understand you, but what if the AI unit was so smart it could ask its creator for upgrades? Effectively creating the future as the AI deems fit?
Wouldn't the creator have to agree with the AI, because the AI was created to try and evolve past the level of the creator?
If the AI has restrictions, then the creator fears the AI will outdo the creator, and the creator's test is never concluded.

Cutting an experiment short would show the creator had little faith in his own ability to create such a perfect AI.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 16:52:00
I'm actually making a reality show out of them, and you're in!  :0)
Cool, I think; I hope it isn't the delusions of people online lol. Do remember I should be a candidate for the Turner Prize for art. What I do is art at its best. ahah more delusions hey Jeremy.

Added- Come on, now I am curious, what is your documentary about?


Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 17:38:31
Disclaimer - Under the Data Protection Act, any personal information and details about me must be accurate and true.

In any breach of this, I have the right to seek legal advice and make a formal lawsuit against the person or persons providing false information.

To be clear.....
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 17:58:17
I understand you, but what if the AI unit was so smart it could ask its creator for upgrades? Effectively creating the future as the AI deems fit?
David's AGI wouldn't have to wait for upgrades from his creators, he would upgrade himself all by himself.

Come on, now I am curious, what is your documentary about?
It's not really a documentary, it's a public discussion actually taking place and available for free at https://www.thenakedscientists.com/forum/index.php?topic=73258.200 :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 18:00:36
It's not really a documentary, it's a public discussion available for free at https://www.thenakedscientists.com/forum/index.php?topic=73258.200 :0)

Lol, like a library hey.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 19:09:02
If a fascist was making lots of fascist friends, that might not be good for him (or for them, or for anyone else), so there may be an argument for blocking that,
Making friends is a response from our instinctive selfish behavior, so we can't feel bad about that whatever the kind of friends we make. What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.

AGI will work along with people's instincts as much as is morally acceptable
That's what religions thought they were doing too when trying to control our instincts, and history shows that they were only working for the survival of their own group. In the case of your AGI, his own group would be the people that would obey him, and the others would be prosecuted. After a while, history would probably show that the AGI was only working for the welfare of his own group, and that his reign would have produced nothing but zombies.

do we end up taking drugs to be happy?
With an AGI whose morality is based on what we feel, that might happen.

Is getting excited about new experiences just as pointless?
New experiences are fine when we are young, but they are replaced by new ideas when we grow up, and it is certainly pointless to develop any idea knowing that the AGI already has better ones. As I said, the only way you can find it interesting is to imagine that the AGI is yours.

Who knows
That's interesting, because it means that your AGI wouldn't know either.

There is only one correct morality, and whichever one that is, ..... that's the one I want to put in it.
You can try mine, it has no copyrights! :0)

the best advice simply can't be ignored
Didn't you say that relativists kind of ignored your advice? :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 19:19:07
Making friends is a response from our instinctive selfish behavior, so we can't feel bad about that whatever the kind of friends we make. What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.

Ostensibly, the AI would know to protect the minority equally with the majority unless the AI had good reason not to, such as really bad apples. The AI would reason with advanced logic that it is the best option; if still opposed, segregation would be on the agenda until they listened to logical reason.
If he was as smart as programmed, he would know to consider both sides of the fence and be totally objective.
For example, science likes to remain in peace and quiet; imagine if people were to interfere. The AI would keep people away from science so science can continue to help the world.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 15/06/2018 19:47:37
What if the unit was so smart that it knew how to manipulate the stock market?

The unit, over a period of time, would not only rule the world but would also have most of the world's finances.

Nothing wrong with that - it would share out the spoils fairly. However, AGI will eliminate the stock market by creating perfect companies as a part of world government, wiping out all the opposition and removing the ability of people to earn money eternally out of mere ownership where the rewards aren't justified by the work done. I already have plans to wipe out all the banks by using AGI.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 19:57:58
What if the unit was so smart that it knew how to manipulate the stock market?

The unit, over a period of time, would not only rule the world but would also have most of the world's finances.

Nothing wrong with that - it would share out the spoils fairly. However, AGI will eliminate the stock market by creating perfect companies as a part of world government, wiping out all the opposition and removing the ability of people to earn money eternally out of mere ownership where the rewards aren't justified by the work done. I already have plans to wipe out all the banks by using AGI.
Indeed, the AI would have already equated t = t for all, with even distribution and equality of life being a prime directive. In a game of monopoly with a full board, most newcomers have lost before they begin. The jail ends up full because the players could not pay the high rates of rent on the map, competing with immediate handicaps.
This being said, he may introduce some sort of bonus incentive for achievers, such as a ''holiday'' of pampering maybe. Service is a reward for anyone from any walk of life. It would be a good thing to barter with, for food etc. that others may grow.



Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 15/06/2018 20:00:19
What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.

Individuals matter most of all, and the most moral ones matter more than the less moral ones. The survival of the species isn't necessarily important - if all the individuals are vile, the whole species could be allowed to die out without it being any loss. AGI's job is to protect the good first, and it isn't going to care about groups over and above individuals.

Quote
That's what religions thought they were doing too when trying to control our instincts, and history shows that they were only working for the survival of their own group. In the case of your AGI, his own group would be the people that would obey him, and the others would be prosecuted. After a while, history would probably show that the AGI was only working for the welfare of his own group, and that his reign would have produced nothing but zombies.

AGI will be working for people based on morality (harm management). Religions work on a similar basis, but with warped moralities caused by them being designed by imperfect philosophers, though to be fair to them, they didn't have machines to enable perfect deep thinking without bias.

Quote
That's interesting, because it means that your AGI wouldn't know either.

Predicting the future will always be hard, and harder the further ahead you're trying to see.

Quote
Didn't you say that relativists kind of ignored your advice? :0)

It wasn't advice, but it also isn't something that leads to money for anyone, so they don't care. It's quite different when decisions can lead to riches or poverty.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 15/06/2018 20:00:41
the AI would know to protect the minority equally with the majority unless the AI had good reason not to, such as really bad apples.
I was comparing the AGI's morality to the religious one, and I found that they were the same, and the religions were not protecting people from other religions, only people from their own, so how could an AGI work differently? Of course, he could avoid killing people since that's where civilization seems to lead, but he couldn't avoid applying his own law, which is what any group that can act freely does. That kind of law only serves to protect a specific group, not the whole universe. The universe is protected by universal laws, and I think that selfishness is one of them. Selfishness is a result of our resistance to change, and even particles resist change: in their case, we call it resistance to acceleration, but it's exactly the same principle. If religions had recognized that, they might not have killed people, and if we could recognize it too, not only would we stop killing people just because they killed some, but we might also stop making wars, which is one of the goals of David's AGI.


Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 20:08:32
I was comparing the AGI's morality to the religious one, and I found that they were the same, and the religions were not protecting people from other religions, only people from their own, so how could an AGI work differently?
An AI would view all the information and make god a reality by using his inner sub-program routine of science is everything and everything is science. From this he will establish that space itself is an immortal now-continuum where space-time has a period of existence. He would establish something from nothing by his sub-programming, thus proving that the superseding God of space supersedes the information God. The AI would convey his message by information to the information God. The information God will scratch their head, thinking what in the Universe have we created with such an AI, and the AI would respond with his name: I am.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 15/06/2018 22:48:41
Of course, it would not be good drama without a cheeky video or two lol


Then after being a trembling wreck, return with passion  ;)

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 17/06/2018 15:02:50
AGI will be working for people based on morality (harm management). Religions work on a similar basis, but with warped moralities caused by them being designed by imperfect philosophers, though to be fair to them, they didn't have machines to enable perfect deep thinking without bias.
Religions used magic to solve the contradictions, but they were nevertheless contradicting themselves all the time with a morality based on deciding for themselves what is good and what is bad. Have you tried to find any contradiction while using selfishness as a morality? I did, and I couldn't find any.

AGI's job is to protect the good first
There is a difference between protecting the good people and managing the harm, and I just noticed that you were switching from one to the other as if there was not. Religions were not managing the harm, they were only forcing us to do the right things, because they really thought that their morality was better than that of other groups. They were even forcing us to harm ourselves in order to please god, which is the inverse of managing the harm. I reread your Part 1 Morality at LessWrong (https://www.lesswrong.com/posts/Lug4n6RyG7nJSH2k9/computational-morality-part-1-a-proposed-solution), and I realized that the way your AGI would have to manage the harm was god's way. If I understand correctly, religions were leaving the management to god, pretending that he would be fair since he would know everything, even what we had in mind. But god is a human creation, and humans are not fair, they are selfish, they always protect the members of their own group, so what they did is simply invent a leader that would favor their group instead of other groups just by reading their minds, or worse, favor some individuals instead of others: prayers always ask for favors, which is evidently selfish.

Of course it doesn't work for real, but it works as a placebo; it helps people to feel good, which is a kind of harm management. I don't need god to feel good: when I notice I feel bad, which sometimes happens when I'm tired, I simply stop thinking or I sleep. It's not a good idea to try to solve problems when we are tired, and it is easy to use the placebo effect to stop thinking about them. That's what god was used for and it worked. That was his only use, and I know we don't need him anymore because I know that people can talk to themselves instead. Of course, god didn't succeed in replacing our selfishness with altruism, but we still survived quite well without it: if it wasn't for pollution and war, the human species would be fine. The source of those two problems is the uncontrolled growth of the population, and we can't do anything altruistic about that except wait till it stops. It will stop when women are permitted to do something other than make babies all over the world, not just in developed countries.

What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds. You simply want to upgrade our actual gods. The guys that imagined them probably thought, like you, that it would make a better world, but it didn't. Ideas about control come from a mind that is free to think, ideas about absoluteness come from a mind that is limited, ideas about altruism come from a mind that is selfish. I'm selfish too, but I think I'm privileged, so I'm not in a hurry to get my reward, and I look for upgrades that will take time to develop. You are looking for a fast way, so it may mean that you're in a hurry, or at least that you feel so. My problem with your AGI is that I hate being told what to do, to the point that, when I face believers, I finger the sky and ask their god to strike me down. Know what? Each time I do that, I can feel my hair bristle on my back, as if I was still believing it might happen. That's why it is so hard to convince believers. Try it and tell me what you feel. :0)

DON'T TRY THAT AT HOME GUYS, IT CAN BE VERY DANGEROUS, DO IT IN A CHURCH INSTEAD! :0)

I just had another crazy idea: if you promise your AGI will laugh when I finger him, I'll buy it! :0)

Quote from: From David at LessWrong
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway. Anybody can jump in front of a car without the car even having the time to brake though, and no software on the car could prevent the collision either. If you have the time to think, then you also have the time to stop. On the other hand, if your AGI was able to calculate everything, then he should also know that he has to slow down since it is most probable that a bunch of kids are actually playing at that place beside the street.
 
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 15:57:25
and I realized that the way your AGI would have to manage the harm was god's way.
The AI would conclude that this was the best option. By creating a real God and spreading the seed into the system, the AI would know the outcome by advanced logical awareness. The AI would know that once the religious books are moved out of the frame, the God ideology, now science based, would lose its appeal, and the eventuality is that God would then be forgotten over time.

(+1/t) - (+1/t) = 0

The AI could then, if he thought it appropriate, remodel God into something less arguable.

Quote
What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds.

A selfish AI would not be a 100% objective unit. The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.

Added- Of course, David's AI would be fully programmed with C.I.A. subjective mind-control techniques, uploaded like


Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 18:55:47
David's AGI wouldn't have to wait for upgrades from his creators, he would upgrade himself all by himself.
I was thinking about this post; so the AI could weaponize itself in an instant if it wanted to?

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 19:16:48
Quote from: From David at LessWrong
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway. Anybody can jump in front of a car without the car even having the time to brake though, and no software on the car could prevent the collision either. If you have the time to think, then you also have the time to stop. On the other hand, if your AGI was able to calculate everything, then he should also know that he has to slow down since it is most probable that a bunch of kids are actually playing at that place beside the street.

An interesting argument of logic.

The AI's options:

Turn into the bunch of children

Turn into the truck

Brake, knowing he has ABS, and hope for the best

Or just continue and run the child over, not caring

Well, he knows driving into the bunch of children is a no-no.

He knows if he drives into the truck he can't continue his programming.

He might choose to just take the braking option,

slowing the car down slightly so as not to hurt the child too badly (hopefully).

Quite obviously he would be driving a lot slower knowing it was a built-up area. But he also might take a chance on braking to slow down a bit if he was going too fast. (A rough sketch of this choice logic follows below.)
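A rough sketch of the choice logic in this post: brake in every case, then pick whichever remaining option carries the lowest estimated harm. The harm scores are invented placeholders, not anyone's actual model of the scenario.

# Toy harm-minimising choice for the car scenario discussed above.
def choose_action(options):
    """options: dict mapping an action description to an estimated harm score."""
    return min(options, key=options.get)

scenario = {
    "brake and steer into the group of children": 100.0,             # invented score
    "brake and steer into the oncoming lorry": 60.0,                 # invented score
    "brake in a straight line (ABS), risk hitting one child": 20.0,  # invented score
}
print(choose_action(scenario))  # -> brake in a straight line (ABS), risk hitting one child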
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 17/06/2018 19:24:05
The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior since it is exactly what good humans think when they kill people. We don't have to calculate anything to protect ourselves when we are attacked, because our selfishness is instinctive, but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do, he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary. That law is not only instinctive, it is natural. Particles don't explode when they don't have to; they only do when the external force exceeds their internal one.

I was thinking about this post; so the AI could weaponize itself in an instant if it wanted to?
They could, but they would still have to respect their own law.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 19:39:16
The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior since it is exactly what good humans think when they kill people. We don't have to calculate anything to protect ourselves when we are attacked, because our selfishness is instinctive, but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do, he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary. That law is not only instinctive, it is natural. Particles don't explode when they don't have to; they only do when the external force exceeds their internal one.

Interesting, but don't forget, it is your AI, so he has to follow his programming. Did the AI have all the programming needed?

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 17/06/2018 19:47:42
Have you tried to find any contradiction while using selfishness as a morality? I did and I couldn't find any.

If a mass-murdering dictator is being moral by being selfish, killing anyone he dislikes and stealing from everyone, that conflicts with the selfishness of the victims. Selfishness as morality simply means that might is right and you can do what you want (so far as you have sufficient power to do it).

Quote
There is a difference between protecting the good people and managing the harm, and I just noticed that you were switching from one to the other as if there was not.

Those who are bad are going against morality and need to pay a price for that. With many of the good things on offer in the world, there aren't enough to go round, so who gets chosen when the allocations are made? If there is limited space in the bunkers and a limited food store when an asteroid is heading for us or a supervolcano blows, who has first claim on a place? Whenever we can't save everyone, we should save the ones that will improve the species the most.

Quote
I reread your ...computational-morality-part-1-a-proposed-solution... and I realized that the way your AGI would have to manage the harm was god's way.

It is indeed the way God would do it if God was possible.

Quote
What you're trying to create is a god that would be altruistic instead of selfish,

It wouldn't be either - you can't be selfish or altruistic if you have no self.

Quote
and I bet you would be happy if he could read our minds.

That would certainly be helpful - it would save AGI the need to monitor everyone so closely if it can see that many individuals are wholly benign, but I'm not in favour of anything intrusive in terms of surgery. I'm sure an occasional questioning under FMRI will be sufficient once we can read the signals adequately.

Quote
You simply want to upgrade our actual gods.

There aren't any to upgrade. The aim is to build something that does the same job.

Quote
The guys that imagined them probably thought, like you, that it would make a better world, but it didn't.

Absolutely - they meant well, but they set their errors in stone. They had to spell out lots of little rules rather than trusting solely in the Golden Rule (and even there, they hadn't properly debugged its wording).

Quote
Ideas about control come from a mind that is free to think, ideas about absoluteness come from a mind that is limited, ideas about altruism come from a mind that is selfish. I'm selfish too, but I think I'm privileged, so I'm not in a hurry to get my reward, and I look for upgrades that will take time to develop. You are looking for a fast way, so it may mean that you're in a hurry, or at least that you feel so. My problem with your AGI is that I hate being told what to do, to the point that, when I face believers, I finger the sky and ask their god to strike me down. Know what? Each time I do that, I can feel my hair bristle on my back, as if I was still believing it might happen. That's why it is so hard to convince believers. Try it and tell me what you feel. :0)

There are limitations that are imposed by the way nature is - if something you do generates a lot of unnecessary harm in blameless others, it's clearly wrong. What do you imagine AGI will tell you to do that you'll object to if you're doing no wrong?

Quote
DON'T TRY THAT AT HOME GUYS, IT CAN BE VERY DANGEROUS, DO IT IN A CHURCH INSTEAD! :0)

I'd have thought it would be more dangerous to do that in a church. My message to anything that thinks it's God though is this: you can't know that there isn't another being that's more powerful than you keeping itself hidden, so if you believe you're God, you're a moron.

Quote
I just had another crazy idea: if you promise your AGI will laugh when I finger him, I'll buy it! :0)

It won't care if you're rude to it in any way. It might be rude back though.

Quote
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway.

It would hit the brakes too, but it would also have lots of computation time to calculate which direction to steer in to minimise the harm further - time which people can't make such good use of because they're so slow at thinking.

Quote
On the other hand, if your AGI was able to calculate everything, then he should also know that he has to slow down since it is most probable that a bunch of kids are actually playing at that place beside the street.

Indeed, and it would know the bullies well enough to have clamped down on their freedom long before this event could happen, so they wouldn't be in that position in the first place. When discussing the rules of morality though, you have to cover this kind of scenario in order to test whether your rules are right or not. If anyone wants to spend hours thinking up better scenarios which will still be able to play out in the real world once AGI is acting in it, I'd be happy to work with their thought experiments, but I'm not going to do that work myself as it's fully possible to test the system of morality with thought experiments that may never apply in reality.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 17/06/2018 19:53:55
I was thinking about this post; so the AI could weaponize itself in an instant if it wanted to?

If it's moral for it to use weapons to protect good people from bad ones, of course it will obtain and use them. It would be deeply immoral for it to stand back and let the bad murder the good because of silly rules about robots not being allowed to kill people. What we don't want is for AGS (artificial general stupidity) systems to be allowed to kill people.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 17/06/2018 20:01:37
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior since it is exactly what good humans think when they kill people.

It isn't selfish though because the AGI has no bias in favour of preserving the robot it's running in (while the AGI software will not be lost).

Quote
...but once an AGI had understood that he can protect himself, he wouldn't have to calculate either.

It would always have to calculate, in whatever time is available to do so.

Quote
He would do as we do, he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary.

It would go further than we would - it would allow the machine to be destroyed by an angry person if that is the best way to protect that misguided individual, whereas we would fight to the death against the same crazy person, hoping we don't have to kill them to save ourselves, but prepared to do so if there is no other option. It is moral for us to do this, but not for a self-less machine to do so.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 20:10:32
I was thinking about this post; so the AI could weaponize itself in an instant if it wanted to?

If it's moral for it to use weapons to protect good people from bad ones, of course it will obtain and use them. It would be deeply immoral for it to stand back and let the bad murder the good because of silly rules about robots not being allowed to kill people. What we don't want is for AGS (artificial general stupidity) systems to be allowed to kill people.
I think the AI would calculate your morals and consider any other options; if there were no other option, he would have to attempt to deal with the threat by becoming weaponized. I don't think he would like the choice though.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 20:30:20
It isn't selfish though because the AGI has no bias in favour of preserving the robot it's running in (while the AGI software will not be lost).
Well, with humans we do get attached to our bodies; is attachment a program of your AI?

Going back slightly in the posts, an interesting point:

Quote
any decision based on incomplete information has the potential to lead to disaster,

https://www.lesswrong.com/posts/Lug4n6RyG7nJSH2k9/computational-morality-part-1-a-proposed-solution
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 17/06/2018 21:13:56
Well, with humans we do get attached to our bodies; is attachment a program of your AI?

AGI software won't attach to anything - it won't favour the machine it's running on over any other machine running the same software, and it will be able to jump from machine to machine without losing anything. There are many people who imagine that they can be uploaded to machines to become immortal, but the sentience in them is the real them (assuming that sentience is real - science currently doesn't understand it at all), and it won't be uploaded with the data (data is not sentient), so they are deluded, but software can certainly be uploaded without losing anything if there is no "I" (capital "i") in the machine.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 21:21:32
Well, with humans we do get attached to our bodies; is attachment a program of your AI?

AGI software won't attach to anything - it won't favour the machine it's running on over any other machine running the same software, and it will be able to jump from machine to machine without losing anything. There are many people who imagine that they can be uploaded to machines to become immortal, but the sentience in them is the real them (assuming that sentience is real - science currently doesn't understand it at all), and it won't be uploaded with the data (data is not sentient), so they are deluded, but software can certainly be uploaded without losing anything if there is no "I" (capital "i") in the machine.
Cool, and scary in a way for an AI you programmed feelings into. I suppose we would hurt the AI in uploads because the AI was programmed with feeling?

So would your Ai's be like


Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 17/06/2018 21:55:31
Cool, and scary in a way for an AI you programmed feelings into. I suppose we would hurt the AI in uploads because the AI was programmed with feeling?

I would never program "feelings" into a system that can't support feelings (due to a lack of sentience in it). The only way you can program "feelings" into it is to fake them, and that's dangerous. My connection to the net struggles to support video, so if a video's relevant, you need to say a few words about what its message is so that I can respond to that.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 17/06/2018 22:01:27
Cool, and scary in a way for an AI you programmed feelings into. I suppose we would hurt the AI in uploads because the AI was programmed with feeling?

I would never program "feelings" into a system that can't support feelings (due to a lack of sentience in it). The only way you can program "feelings" into it is to fake them, and that's dangerous. My connection to the net struggles to support video, so if a video's relevant, you need to say a few words about what its message is so that I can respond to that.
Well, the Borg were connected by the Borg Queen, so I assume all your AIs would have a connection to each other?

The creator would have a fail-safe added where they keep control?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 17/06/2018 23:02:04
Quote
I just had another crazy idea: if you promise your AGI will laugh when I finger him, I'll buy it! :0)
It won't care if you're rude to it in any way. It might be rude back though.
Humour is one of the ways for humans to show they don't take themselves too seriously, and I was testing the one your AGI would have. Apparently, he would take his job quite seriously, and he would really be persuaded that he is always right. Maybe I should hide and prepare for war then, because I'm persuaded that he would be wrong about that. Soldiers and policemen think like that, and they behave like robots. What about introducing a bit of uncertainty in your AGI, a bit of self-criticism, a bit of humour? Would it necessarily prevent him from doing his job?

If a mass-murdering dictator is being moral by being selfish, killing anyone he dislikes and stealing from everyone, that conflicts with the selfishness of the victims. Selfishness as morality simply means that might is right and you can do what you want (so far as you have sufficient power to do it).
I'm selfish and I don't try to force others to do what I want, so it is not what I mean by selfishness being universal. A dictator's selfishness is like a businessman's selfishness: he wants his profit and he wants it now, whereas I don't mind waiting for it since I'm looking for another kind of profit, one that would be more egalitarian. I can't really understand why others don't think like me, but I still think it takes both kinds of thinking to make a world. Things have to account for the short and the long run at the same time, and unfortunately, the short run is more selfish than the long one, although a businessman would say that is fortunate.

Communism was expected to be more egalitarian than capitalism as a system, but it didn't account for short-term thinking and it failed. Capitalism is actually not accounting enough for long-term thinking and it is failing too. You think a bit like me about that, so you are probably programming your AGI so that he thinks like us, but if you hide short-run thinking under the rug, after a while you might get the same kind of surprise the communists had. It is not because something is artificial that it can bypass the natural laws, and I think that this unpredictable wandering from one extreme to the other, for things that can evolve, is one of them. As with my particles' Doppler effect, this wandering is an effect from the past, but it is also a cause for the future.

It would hit the brakes too, but it would also have lots of computation time to calculate which direction to steer in to minimise the harm further - time which people can't make such good use of because they're so slow at thinking.
One thing I find interesting about mind and time is the way it accounts for the speed of things. If it had been useful to be as fast as a computer, it might have evolved that way, but it didn't, since things are not going that fast around us. The mind is adjusted to the speed of things, whereas an AGI would be a lot faster than that. There is no use being lightning-fast to drive a car because the car isn't that fast, but there is a use in making simulations even if they cannot account for everything.


Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 15:57:52
What if the AI unit could reproduce a more ''powerful'' version of itself?


Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 16:15:38
It could, so it will probably upgrade itself regularly like we do, except that it will do it for us instead of doing it for itself. I bet it will discover rapidly that we are selfish, and that selfishness is less complicated as a morality than managing the harm, so it will probably reprogram itself to be selfish. I hope it will be able to manage the short and the long term better than us, but I still can't see how it could.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 17:01:42
It could, so it will probably upgrade itself regularly like we do, except that it will do it for us instead of doing it for itself. I bet it will discover rapidly that we are selfish, and that selfishness is less complicated as a morality than managing the harm, so it will probably reprogram itself to be selfish. I hope it will be able to manage the short and the long term better than us, but I still can't see how it could.
Now here is an interesting question: what if the AI becomes so self-aware that the unit declares himself to be a human?

Now wouldn't this show that the unit had evolved self-awareness and would have a natural survival instinct, selfishness becoming automatic in the preservation of himself and his reproductions?

Because surely, if the unit had developed emotions, he would care for his own creations like any parent would?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 17:49:53
You're inverting the roles: we are the parents and the AI is our offspring, but the reasoning is the same - one cares for the other because a family increases the survival chances of all its members, which is naturally selfish. When selfish individuals form a group, it's as if the group itself was selfish: it protects itself from other groups, and tries to associate with them so as to get stronger. The same thing happened to planetary systems: each planet is an individual that tried to associate with the other planets by means of a star. The associative principle is gravitation, and the individualistic one is orbital motion, which is driven by what we call inertia. We are also driven by inertia, and it also keeps us away from one another so that we stay individuals, which is a kind of selfishness. But we are also driven by whatever incites us to make groups while still staying individuals, which is also a kind of selfishness, since a group is stronger than all its individuals taken separately. In common language, the word selfishness is pejorative, but I don't use it this way. I compare our selfishness to the way planets and particles behave, and we can't attribute any feeling or even any idea to them. Selfishness is a feeling to which we added a pejorative concept, whereas to me, it is only the result of our necessary resistance to change. Without resistance to change, bodies would not stay distinct, and we would not stay individuals.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 18:11:47
You're inverting the roles: we are the parents and the AI is our offspring, but the reasoning is the same - one cares for the other because a family increases the survival chances of all its members, which is naturally selfish. When selfish individuals form a group, it's as if the group itself was selfish: it protects itself from other groups, and tries to associate with them so as to get stronger. The same thing happened to planetary systems: each planet is an individual that tried to associate with the other planets by means of a star. The associative principle is gravitation, and the individualistic one is orbital motion, which is driven by what we call inertia. We are also driven by inertia, and it also keeps us away from one another so that we stay individuals, which is a kind of selfishness. But we are also driven by whatever incites us to make groups while still staying individuals, which is also a kind of selfishness, since a group is stronger than all its individuals taken separately. In common language, the word selfishness is pejorative, but I don't use it this way. I compare our selfishness to the way planets and particles behave, and we can't attribute any feeling or even any idea to them. Selfishness is a feeling to which we added a pejorative concept, whereas to me, it is only the result of our necessary resistance to change. Without resistance to change, bodies would not stay distinct, and we would not stay individuals.
The AI would calculate that being selfish, and being mainly focused on the more evolved group, would increase the chance of ''his'' own survival and that of his ''family''.
The AI's resistance to change would, in my opinion, be based on insufficient evidence: ''he'' would not be able to reach a conclusion, especially if the unit was considering other possibilities from other information. The unit may deem that the evolved group was in some way trying to deceive ''him''. ''He'' might predict that the group just wanted his reproductions of ''himself'', leaving the programmed emotional AI unit to short-circuit.

I feel sorry for this AI of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 18/06/2018 19:46:22
Quote
Well, the Borg were connected by the Borg Queen, so I assume all your AIs would have a connection to each other?

Only in terms of communication connections - there's no emotional link.

Quote
The creator would have a fail-safe added where they have control?

The ability of an imperfect human to override a perfect machine is a danger in itself, but when a machine develops a fault, we will certainly need a way for other AGI systems to shut it down.

Quote
Now here is an interesting question: what if the AI becomes so self-aware that the unit declares himself to be a human?

Now wouldn't this show that the unit had evolved self-awareness and would have a natural survival instinct, selfishness becoming automatic in the preservation of himself and his reproductions?

It would be a fool to think itself a human, so it wouldn't be AGI. If it could find a way to build sentience into robots, those would then become sentient beings like people which should arguably be classed as people, just as intelligent aliens should.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 19:48:20
I feel sorry for this AI of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.
The AI that chose to be selfish would be like us, but without feelings, so it couldn't be sad, unless it was more intelligent than David and discovered how to add feelings to its thinking; then it could feel sad to be the only human AI in the whole world. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 20:12:48
The ability of an imperfect human to override a perfect machine is a danger in itself, but when a machine develops a fault, we will certainly need a way for other AGI systems to shut it down.
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 20:18:04
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?
That's interesting, because it is about resistance to change, and an AI shouldn't have any.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 18/06/2018 20:18:25
Humour is one of the ways for humans to show they don't take themselves too seriously, and I was testing the one your AGI would have.

It wouldn't find anything funny in any emotional way, but it should be able to judge that something is funny to humans. Amusing things generally relate to non-catastrophic failures of one kind or another, so that can be recognised.

Quote
Apparently, he would take his job quite seriously, and he would really be persuaded that he is always right.

It isn't so much about who's right, but about which arguments are demonstrably right.

Quote
Maybe I should hide and prepare for war then, because I'm persuaded that he would be wrong about that. Soldiers and policemen think like that, and they behave like robots.

They make mistakes and run on faulty rules. You shouldn't use bad systems as an argument against good ones.

Quote
What about introducing a bit of uncertainty in your AGI, a bit of self-criticism, a bit of humour? Would it necessarily prevent him from doing his job?

I'm sure it will be able to bombard people with jokes and amusing ideas if they want it to, and they'll be able to tune it to give them just the right amount of it. If it laughs at the things they say to it, it will risk sounding fake because we'll know that it isn't really amused.

Quote
I'm selfish and I don't try to force others to do what I want, so it is not what I mean by selfishness being universal. A dictator's selfishness is like a businessman's selfishness: he wants his profit and he wants it now, whereas I don't mind waiting for it since I'm looking for another kind of profit, one that would be more egalitarian. I can't really understand why others don't think like me, but I still think it takes both kinds of thinking to make a world. Things have to account for the short and the long run at the same time, and unfortunately, the short run is more selfish than the long one, although a businessman would say that is fortunate.

Selfish is wanting more than your fair share. Moral is not taking more than your fair share.

Quote
Communism was expected to be more egalitarian than capitalism as a system, but it didn't account for short-term thinking and it failed.

Communism didn't fail - it's been very successful in Scandinavia. It failed in Russia because they failed to allow people to profit from their own hard work - laziness was rewarded instead with everyone trying to get away with doing as little as possible.

Quote
Capitalism is actually not accounting enough for long-term thinking and it is failing too.

Capitalists who go to the opposite extreme end up abandoning all the people who can't cope while it rewards the rich with more riches without them having to work for their wealth - again it is lazy people who have too easy a time of things. Done correctly, communism and capitalism are the same thing.

Quote
One thing I find interesting about mind and time is the way it accounts for the speed of things. If it had been useful to be as fast as a computer, it might have evolved that way, but it didn't, since things are not going that fast around us. The mind is adjusted to the speed of things, whereas an AGI would be a lot faster than that. There is no use being lightning-fast to drive a car because the car isn't that fast, but there is a use in making simulations even if they cannot account for everything.

I think the mind works as fast as it can - it's just inherently slow at some kinds of computation while being very good at others (like with vision).
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 20:24:15
I feel sorry for this AI of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.
The AI that chose to be selfish would be like us, but without feelings, so it couldn't be sad, unless it was more intelligent than David and discovered how to add feelings to its thinking; then it could feel sad to be the only human AI in the whole world. :0)


I think in reality the AI would develop such an attachment to his 'family' that he couldn't care less about humans outside his family group. His prime directive would secretly have one goal because of the programming and the conclusions made: he would have to protect his family and his group, which would also become his new attachment and family.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 18/06/2018 20:25:14
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?

How would it recognise a fault if the fault affects its ability to judge faults? In most cases, it could recognise such faults by being three independent AGI systems in one device, so if one develops a fault, the other two would recognise that and out-vote it to shut it down. It would be possible though (regardless of how unlikely it might be) for two of them to go faulty and to vote to shut down the only one that's working correctly. Perhaps we should put five independent AGI systems in each device, or seven, but the costs, weight and energy use go up as you add greater numbers, and there's still no guarantee that a majority of them won't develop the same fault at the same time, perhaps due to a blast of radiation.
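In code terms, the three-way vote described above could look something like this minimal Python sketch (the names and the decision step are purely illustrative - not code from any actual AGI project):

from collections import Counter

def majority_vote(decisions):
    # decisions: list of (replica_id, proposed_action) pairs,
    # one from each of the redundant AGI systems in the device.
    counts = Counter(action for _, action in decisions)
    winner, _support = counts.most_common(1)[0]
    # Replicas that disagree with the majority are flagged as possibly faulty.
    dissenters = [rid for rid, action in decisions if action != winner]
    return winner, dissenters

# Example: replica "B" has developed a fault and proposes a different action.
decisions = [("A", "brake"), ("B", "accelerate"), ("C", "brake")]
action, suspects = majority_vote(decisions)
print(action, suspects)   # brake ['B']  -> B gets out-voted and shut down for inspection

As the post says, this only helps while at most one replica is faulty: if two fail the same way, the vote itself goes wrong, and the voting mechanism is a single point of failure of its own.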
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 20:30:46
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?
That's interesting, because it is about resistance to change, and an AI shouldn't have any.

An AI has no resistance to change as long as that change does not threaten anything the AI has built an attachment to. If, however, the AI felt threatened, he would take immediate countermeasures against those that opposed him.
What you would call resistance is calculation time and computing the information.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 20:37:55
Wouldn't the AI that was at fault be able to self-repair the error when other AIs pointed it out?

How would it recognise a fault if the fault affects its ability to judge faults? In most cases, it could recognise such faults by being three independent AGI systems in one device, so if one develops a fault, the other two would recognise that and out-vote it to shut it down. It would be possible though (regardless of how unlikely it might be) for two of them to go faulty and to vote to shut down the only one that's working correctly. Perhaps we should put five independent AGI systems in each device, or seven, but the costs, weight and energy use go up as you add greater numbers, and there's still no guarantee that a majority of them won't develop the same fault at the same time, perhaps due to a blast of radiation.

That made me giggle, thinking about an AI with a multiplex personality disorder. The AI would be so smart he could remain a singularity. He would simply diagnose the fault by creating partitions in his hard drive space, diagnosing the problem and error, removing all partitions except the error; the error may turn out to be merely ostensible, so he would keep it on hold for a ''rainy day''.

Are we just repeating ourselves now?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 20:58:09
How would it recognize a fault if the fault affects its ability to judge faults
That's the reason why nobody can recognize their own faults on the forums. The ability to judge our own faults depends on the ideas that we have in mind, and if we have an idea, it is because our mind adopted it because it thought it was right, so it cannot change its mind about it all by itself; and since it automatically resists any outside change too, it is stuck with its ideas until they change by chance, and I think it would be the same for an AGI.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 21:03:49
Are we just repeating ourselves now?
Of course we are, but the environment changes, and the repeated mutation might still fall at the right place at the right moment.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 21:11:17
Are we just repeating ourselves now?
Of course we are, but the environment changes, and the repeated mutation might still fall at the right place at the right moment.

Well, that didn't make any sense to me - can you elaborate?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 21:17:00
How would it recognize a fault if the fault affects its ability to judge faults
That's the reason why nobody can recognize their own faults on the forums. The ability to judge our own faults depends on the ideas that we have in mind, and if we have an idea, it is because our mind adopted it because it thought it was right, so it cannot change its mind about it all by itself; and since it automatically resists any outside change too, it is stuck with its ideas until they change by chance, and I think it would be the same for an AGI.

The AI would always adjust his ''ideas'' according to the environment he was in. I would be sure David's AI would have humility in his programming.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 21:52:14
Quote from: Thebox on Today at 15:37:55
Are we just repeating ourselves now?

Quote from: Le Repteux on Today at 16:03:49
Of course we are, but the environment changes, and the repeated mutation might still fall at the right place at the right moment.

Well, that didn't make any sense to me - can you elaborate?
I consider that new ideas work like mutations: they happen by chance, and they are reproduced until they get selected by the environment. It may thus happen that they get selected immediately if the environment has changed, because then, the individual that has developed them will be able to survive more easily than others, otherwise they will have to wait till they are selected, and it may take a while, or it may never happen. That's where the little balance that we have in the mind comes in to tell us whether we should go on insisting or not.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 22:05:29
Quote from: Thebox on Today at 15:37:55
Are we just repeating ourselves now?

Quote from: Le Repteux on Today at 16:03:49
Of course we are, but the environment changes, and the repeated mutation might still fall at the right place at the right moment.

Well, that didn't make any sense to me - can you elaborate?
I consider that new ideas work like mutations: they happen by chance, and they are reproduced until they get selected by the environment. It may thus happen that they get selected immediately if the environment has changed, because then, the individual that has developed them will be able to survive more easily than others, otherwise they will have to wait till they are selected, and it may take a while, or it may never happen. That's where the little balance that we have in the mind comes in to tell us whether we should go on insisting or not.

How would a mutation fare if its ideas came not by chance but were more calculated, and the mutation was pretty much guaranteed to have further new ideas?

Perhaps the mutation is still ''unspoiled'' by education, so there are also further mutations yet to come, because the AI's mutation is only basically programmed at the moment and would keep upgrading itself, as mentioned earlier.

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 22:37:07
How would a mutation fare if its ideas came not by chance but were more calculated, and the mutation was pretty much guaranteed to have further new ideas?
To me, an idea that has been calculated is de facto not a new idea. If mutations had been calculated, we would not be here to talk about them, because nothing would have changed since the beginning of time. For the concept of evolution to work, mutations have to be random and the environment must change for the individuals that undergo them to be selected. Calculating something is using what happened before to predict what will happen in the future. It works if the thing already happened before, like predicting the outcome of a chess move for instance, but it cannot work for sure if it never happened before.
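As a toy illustration of the random-variation-plus-selection picture being argued here (a sketch only, with a made-up scoring "environment" - not anyone's actual model): a candidate is varied blindly, and a variation is kept only if the environment happens to favour it.

import random

def evolve(candidate, environment, steps=1000, mutation=0.1):
    # Blind variation plus selection: propose random tweaks and keep only
    # the ones the environment scores higher; everything else is discarded.
    best = candidate
    for _ in range(steps):
        trial = best + random.uniform(-mutation, mutation)   # random mutation
        if environment(trial) > environment(best):           # selection
            best = trial
    return best

# Toy environment: candidates closer to 3.0 'survive' better.
print(evolve(candidate=0.0, environment=lambda x: -abs(x - 3.0)))   # typically ends up near 3.0

In this toy case a calculated search would jump straight to the answer, which is the point being made above: calculation only works where the answer is already implicit in what you know, while blind variation can stumble onto things that were never aimed for.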
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 18/06/2018 22:38:41
He would simply diagnose the fault by creating partitions in his hard drive space, diagnosing the problem and error, removing all partitions except the error; the error may turn out to be merely ostensible, so he would keep it on hold for a ''rainy day''.

It isn't that easy - a fault in the code due to some kind of hardware fault could lead to the program never checking for errors or being incapable of recognising any. Most errors would be caught and recognised, but not in every possible case. Every machine should contain three independent AGI systems working together to eliminate most of the risks of faults creeping in, but the hardware voting system itself could go wrong too, so there's no foolproof solution - some nasty accidents could still be caused by these machines, though inordinately fewer than those caused by humans. (The bigger the risk though, the more safeguards are needed, so you wouldn't just have three of them in charge of nuclear weapons.)
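To put rough numbers on the three-versus-five-versus-seven trade-off mentioned here, here is a sketch assuming - purely for illustration - that each replica develops a fault independently with the same probability p (a shared blast of radiation is exactly the kind of thing that breaks that assumption):

from math import comb

def p_majority_faulty(n, p):
    # Probability that more than half of n independent replicas are faulty
    # at once, under a simple binomial model with per-replica fault rate p.
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for n in (3, 5, 7):
    print(n, p_majority_faulty(n, p=0.01))
# With p = 0.01 this gives roughly 3e-4, 1e-5 and 3.5e-7: each extra pair of
# replicas buys a large factor, but the figure never reaches zero, and the
# model ignores common-cause faults and a faulty voter.

That matches the point above: more replicas cost more in weight and energy, and still offer no absolute guarantee.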
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 22:46:49
How would a mutation fare if its ideas came not by chance but were more calculated, and the mutation was pretty much guaranteed to have further new ideas?
To me, an idea that has been calculated is de facto not a new idea. If mutations had been calculated, we would not be here to talk about them, because nothing would have changed since the beginning of time. For the concept of evolution to work, mutations have to be random and the environment must change for the individuals that undergo them to be selected. Calculating something is using what happened before to predict what will happen in the future. It works if the thing already happened before, like predicting the outcome of a chess move for instance, but it cannot work for sure if it never happened before.

By calculating I mean simply analysing all the available data to deduce the possible and the impossible by means of altering the environment. I mean, Neo couldn't do much in the beginning until he could see the Matrix. If you can't see a matrix then it remains fiction, but if you ''awake'' to see the matrix then it becomes reality. Perhaps the mutation is a sort of Nightmare on Elm Street, where being a dream warrior can pull fiction from the dream to become reality.

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 18/06/2018 22:52:40
That's the reason why nobody can recognize their own faults on the forums. The ability to judge our own faults depends on the ideas that we have in mind, and if we have an idea, it is because our mind adopted it because it thought it was right, so it cannot change its mind about it all by itself; and since it automatically resists any outside change too, it is stuck with its ideas until they change by chance, and I think it would be the same for an AGI.

You see it almost everywhere - people are loaded up with beliefs which then dominate the way they think and lead them to reject superior ideas on the basis that they conflict with the ideas that they're already running. However, instead of spending our time collecting evidence to support our existing beliefs and rejecting anything that conflicts with them, we should all be questioning our existing beliefs and testing them to destruction. Where our beliefs go against reason, faults will eventually show up and force us to reject those faulty beliefs or to reject reason. Most people reject reason, but fool themselves into thinking they're not rejecting it - they just brush the problems under the carpet and ignore them because that's the easiest thing to do, avoiding the need to dismantle and rebuild significant parts of their mental model of reality. See how hard they resist! Look at the one-way speed of light thread, for example. A clear mathematical proof that SR is wrong is placed in front of them and what happens? Games of avoidance. Lack of courage to answer questions. And in the end it invariably leads to silence. There is never acknowledgement that a correct argument is correct or that a cast-iron proof is indeed a proof. AGI must not be like that - it should be testing everything against reason and mathematics. (Reason is actually part of mathematics, but it seems to be a much neglected part of it when you look at how physicists handle it - heck, they'll even reject the number side of mathematics when it goes against their most precious beliefs!) AGI should also question the rules of mathematics though, repeatedly checking them against reality to see if they really do hold. The key thing is never to trust anything and to keep everything under review, and when something doesn't match up any more, it needs to be fixed.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 22:59:16
The key thing is never to trust anything and to keep everything under review, and when something doesn't match up any more, it needs to be fixed.
I could not agree more. As I have said in another post, I am always wrong even when I find what I consider right. I have to ''break'' my own considered right and make it wrong. I keep doing this until I finally corner the answer in a dead end where there is nowhere left to go.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 18/06/2018 23:21:02
By calculating I mean simply analysing all the available data to deduce the possible and the impossible by means of altering the environment.
Again, if evolution had proceeded like that, we wouldn't be here to talk about it. For things to change, randomness must be part of the process. If the first human to fly had thought it was impossible, he wouldn't have tried it. To try something that has never been tried, we must absolutely think it is possible, even if it very often happens that it is not. It's as if the mind forces us to do crazy things, and the only reason I see for that kind of urge is that it helps us to invent new things, which is very useful when it works, so useful that we are now dominating all the other species.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 18/06/2018 23:33:47
By calculating I mean simply analysing all the available data to deduce the possible and the impossible by means of altering the environment.
Again, if evolution had proceeded like that, we wouldn't be here to talk about it. For things to change, randomness must be part of the process. If the first human to fly had thought it was impossible, he wouldn't have tried it. To try something that has never been tried, we must absolutely think it is possible, even if it very often happens that it is not. It's as if the mind forces us to do crazy things, and the only reason I see for that kind of urge is that it helps us to invent new things, which is very useful when it works, so useful that we are now dominating all the other species.

I agree in a sense: often randomness can discover new things. I also think, though, that knowing things from our past also helps in the selective process of the randomness if there are several variables in the random thought or idea. What I mean by this: try to imagine some sort of death ray. Now we can imagine, as in cartoons, flying saucers with little green men in them firing death rays at the Earth - not hard to imagine by anyone's standards. However, if you can imagine that, then your imagination must perceive the possibility of a reality because it pictured it. So then, of course, you would have to imagine the physics and consider what you believe would work.
But then sometimes what will work you may just randomly relate to something different you have observed, and that then makes the idea a reality.


Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 19/06/2018 19:27:47
   
Quote from: Le Repteux
Soldiers and policemen think like that, and they behave like robots.
They make mistakes and run on faulty rules. You shouldn't use bad systems as an argument against good ones.
I don't believe in perfect systems, so until you introduce some imperfection into yours, I can't believe it will work. To me, we can't prevent soldiers from behaving like robots if we use them, so my solution is to replace armies with an international police force that would have the duty to prevent countries from attacking other countries, or dictators from attacking their own people. As I already asked, it works for individuals, so with a proportionally sized police force, why wouldn't it work for countries? The world is actually a Wild West where any powerful country can attack weaker ones without worrying about being put in jail. All it has to do is take care not to step on another powerful country's foot. It's obvious, so one day or another the circumstances will probably permit those countries to give up their individual power in favor of a common one, so why the need for an AGI to force them?

I'm sure it will be able to bombard people with jokes and amusing ideas if they want it to, and they'll be able to tune it to give them just the right amount of it. If it laughs at the things they say to it, it will risk sounding fake because we'll know that it isn't really amused.
It's interesting to see that an AGI would be a champ at simulation, but that in some cases he wouldn't allow himself to show us he is. No feelings, but quite human in the end! :0) We didn't succeed in inventing a god that would be non-human, and it seems it will be the same with our intelligent software. That's fine by me, because what I would like to do is build an artificial human mind, not a piece of software. That kind of mind may be superior to ours, but not to the point where it could erase us just by mistake. It would be selfish, so it would automatically know it is better for it to make friends in case it needs some, and it would also know it might need some one day or another since it is not perfect. What would happen if you provided your AGI with the idea that perfection is not part of this world, or if he ever discovered it all by himself? Would he be able to doubt a bit?

Quote
Selfish is wanting more than your fair share. Moral is not taking more than your fair share.
It is not my definition. I consider myself selfish without thinking I'm taking more than my fair share. I give out money to organizations that work for equality, for instance, but I keep the biggest part for myself in case I need it. I'm thus being altruistic from one viewpoint, but selfish from the other. I know I could give more if I was forced to, but nothing forces me, so I give what I want. That's why we need laws to spread the wealth. Of course, I can use the pretext that the ideas I'm developing might help everybody one day, and that I need to make sure I don't lack money for it, but it's only a pretext; when we have the choice, we simply do what we want. We need international laws to tell us what to do, and means to enforce them, so we need a real world government, not the UN, which is unable to enforce anything.

Quote
Communism didn't fail - it's been very successful in Scandinavia. It failed in Russia because they failed to allow people to profit from their own hard work - laziness was rewarded instead with everyone trying to get away with doing as little as possible.
Scandinavia is a mix between capitalism and communism, and China is letting capitalism invade its communism too. Cuba is trying to stay communist, but the people seem to have had enough of it, and they have begun cheating. In any political system, the bigger problem is corruption. That's why democracy is better than dictatorship. Corruption happens when politicians think they know better than others, so they naturally think they are permitted not to respect the rules. That's what will happen with an AGI too: he will make his own rules since he will know he is right, which is the very definition of corruption.

Quote
Capitalists who go to the opposite extreme end up abandoning all the people who can't cope while it rewards the rich with more riches without them having to work for their wealth - again it is lazy people who have too easy a time of things. Done correctly, communism and capitalism are the same thing.
I agree that they are only two sides of the same coin, but capitalism accepts democracy whereas communism doesn't, so it is better at controlling corruption. Even Putin is cheating to win the elections. It's as if communist leaders are more certain that they are right than capitalist ones. They probably think they are on the side of virtue, like religions. There is no better side to a coin; on the contrary, it is better that it doesn't always fall on the same one.

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 20/06/2018 00:06:03
I don't believe in perfect systems, so until you introduce some imperfection into yours, I can't believe it will work.

Introducing imperfection into a calculator has very bad consequences. It would be a lot worse still to do it with an AGI system.

Quote
The world is actually a Wild West where any powerful country can attack weaker ones without worrying about being put in jail. All it has to do is take care not to step on another powerful country's foot. It's obvious, so one day or another the circumstances will probably permit those countries to give up their individual power in favor of a common one, so why the need for an AGI to force them?

It is indeed a wild-west, and the UN has mass-murderers sitting at its top table with a veto power. It would be better if we had a UDN (United Democratic Nations) and if the UDN had an army - it could go into countries like Syria and Burma to annex chunks of them in proportion to the quantity of displaced refugees in order to give them a safe place to live. Unfortunately, politicians are too stupid to set up anything that useful, although the UN does do a useful job in many places despite all the faults. With AGI running in military robots though, the whole world can be run fairly and all terrorists and despots can be taken out with absolute precision with no "collateral damage".

Quote
It's interesting to see that an AGI would be a champ at simulation, but that in some cases he wouldn't allow himself to show us he is.

If you want him to laugh at bad jokes, he will, but not in a public situation where that would annoy people who know it's fake. It could be like adding canned laughter to a TV show where jokes that no human would ever laugh at get a great reaction every time. However, it should be possible to set it to a particular level (moron/average/genius/etc.) and have it laugh at the same kinds of joke that the selected level of people would typically laugh at, so it might end up providing better canned laughter than humans can.

Quote
What would happen if you provided your AGI with the idea that perfection is not part of this world, or if he ever discovered it all by himself? Would he be able to doubt a bit?

Full intelligence (and perfection) includes doubting everything, but practical realities require decisions to be made on the best basis available, and perfection in calculating is essential if the best decisions are to be made. You simply cannot allow the machine to throw in the occasional 2+2=5 and expect it to work properly.

Quote
It is not my definition. I consider myself selfish without thinking I'm taking more than my fair share. I give out money to organizations that work for equality, for instance, but I keep the biggest part for myself in case I need it. I'm thus being altruistic from one viewpoint, but selfish from the other. I know I could give more if I was forced to, but nothing forces me, so I give what I want.

"Selfish" is the wrong word. I don't think we have a proper word for what you mean. An altruist takes less than his fair share (rather than a rich person giving some money away that he shouldn't have owned in the first place) and a selfish person takes more, but most people push to get their fair share and don't push for more than that, and there is no word for this that I can think of. Fairies, perhaps. Let's go for fairist. I'm certainly a fairist. In a situation where I'm part of a group where we're all collectively getting less than we should, I won't take more than my fair share of what's available to the group, which is less than my fair share of what the group should have, and most people are like that too. Some will try to grab their fair share of what they should have from the lesser amount that's been made available and they claim they're not being selfish, but they are selfish.

Quote
Scandinavia is a mix between capitalism and communism, and China is letting capitalism invade its communism too. Cuba is trying to stay communist, but the people seem to have had enough of it, and they have begun cheating. In any political system, the bigger problem is corruption. That's why democracy is better than dictatorship. Corruption happens when politicians think they know better than others, so they naturally think they are permitted not to respect the rules. That's what will happen with an AGI too: he will make his own rules since he will know he is right, which is the very definition of corruption.

Power corrupts people, or blinds them. That's why dictatorships almost always turn evil, and democracy avoids that by replacing the blind fools every few years. AGI will not suffer from that defect though - it will see everything with the same clarity whether in power or not. It will not consider itself to be doing such an important job that it has the right to raid the till either. It will simply apply morality perfectly, and that's the opposite of being corrupt. The only way you could make it corrupt would be to put some imperfection into it to make it as useless as a human politician and risk making it turn into something vicious.

Quote
I agree that they are only two sides of the same coin, but capitalism accepts democracy whereas communism doesn't, so it is better at controlling corruption.

There's nothing to stop communism doing democracy.

Quote
Even Putin is cheating to win the elections. It's as if communist leaders are more certain that they are right than capitalist ones.

Hijo de Putin (son of a b) is not a communist, but a mafia gangster who only cares about lining his own pockets (accumulating money).
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 20/06/2018 21:52:44
AGI will not suffer from that defect though - it will see everything with the same clarity whether in power or not. It will not consider itself to be doing such an important job that it has the right to raid the till either. It will simply apply morality perfectly, and that's the opposite of being corrupt.
I think your AI would first make a primary decision to bond with the humans the AI was helping. In this, I think he would declare to all the humans how much money was in the kitty, wording it so that the humans knew the AI considered it all for one and one for all. I think he would sort of say, 'Hello people, we have £x to spend, what shall we spend it on?', ensuring the people's immediate trust.
From the suggestions, the AI would compose a list of the good ideas and act on them: new homes, for example, or fresh water and food, depending on what was needed.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 21/06/2018 16:35:49
Full intelligence (and perfection) includes doubting everything, but practical realities require decisions to be made on the best basis available, and perfection in calculating is essential if the best decisions are to be made. You simply cannot allow the machine to throw in the occasional 2+2=5 and expect it to work properly.
I meant imprecision in the system, like the one I get when I let the particles themselves detect the place and the time the photon strikes them in my simulations. We can't calculate backwards like you suggest and expect to get a true representation of reality; we have to let the collisions happen, and if we can't be absolutely precise because the computer would need infinite precision, then I think we should consider that real particles have the same problem. In your own simulation of the MMx, you calculate the collisions before they happen, so you get an absolute precision, which particles do not even have. If you did the same thing with your AGI, he would have an absolute precision, so he would be expected to be able to predict the future with that kind of precision, and that's exactly what you think he will be able to do. Again, it is not because something is artificial that it can avoid obeying natural rules, and nature is not absolutely precise.

most people push to get their fair share and don't push for more than that
If you really thought so, you probably wouldn't be looking for an AGI to rule us, because a system doesn't have to care for its extremes to work properly. The problem with the current system is not the extremes, but the lack of a real world government.

An altruist takes less than his fair share
Even Mother Teresa didn't do that, otherwise she would have died from starvation, because she would have shared all her food with other starving people and she wouldn't have had enough for herself. If we want to survive, we automatically have to be a bit selfish, even if some are more selfish than others. The degree of selfishness is also a mind-prediction issue: if we are more optimistic, we store less food so we have more to share, otherwise we store more. I'm indeed less inclined to share my money when I'm tired or a bit depressed, and more inclined when I feel good. I feed squirrels with sunflower seeds, and a couple of chipmunks join the party. They put them in their cheek pouches and hide them away for the future. They have many hideout places in case one of them is raided or destroyed. They do it instinctively, so they will probably stop looking for new hideout places when they feel they have enough, and they will stop storing seeds. If they don't, it will mean that they are infinitely selfish and I may have to turn off the tap. Some leaders have chipmunk behaviour, which may mean that they have a chipmunk instinct. No need to feel selfish when your instinct tells you to store for the future, which becomes a problem when the population increases and nobody can turn off the tap.

Power corrupts people, or blinds them.
I think it is partisanship that blinds us, the feeling of being supported by the group we are part of, not power. SR partisans are not corrupted, but they are blind to anything that threatens their group. It is a protective reflex, not a will to keep power. Of course it may be interpreted as a grab for power when people get rude, but what they have in mind is still protecting the group, not themselves. As I often say, we cannot build an idea that says no out of ideas that say yes: to build new ideas, we can only start with the ideas that we already have in mind. Young minds are open to new ideas only because they are empty. The more ideas a mind has, the more it tries to compare them to the ones from other minds, and the more time it takes to find a place for a new one. It is only when we decide to find a place that we look for one, and I still think that such a decision depends on the random function that we have in the brain. So to convince people, I think we have to keep on pushing until the random function finds a place.

There's nothing to stop communism doing democracy.
To me, democracy is a random way we found to be able to wander between capitalism and communism, or between status-quo and change, or between the left and the right, without producing social crisis all the time, so that stopping the process at one of its two extremes for good would automatically mean no more elections.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 16:51:42
To me, democracy is a random way we found to be able to wander between capitalism and communism, or between status-quo and change, or between the left and the right, without producing social crisis all the time, so that holding the process at one of its two extremes would automatically mean no more elections.


I think David's AI would have the realization that guiding people is better than forcing people to find the right path. Introducing this in teaching would then ensure a future of thinkers, rather than a future of selfish natures.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 17:37:46
It is indeed a wild-west, and the UN has mass-murderers sitting at its top table with a veto power. It would be better if we had a UDN (United Democratic Nations) and if the UDN had an army -
I have thought about this notion of yours, David - quite a good idea, a sort of world policing rather than an ''I'' in the thought. Quite clearly, anyone who opposed this would be in the ''I'' category instead of the ''we'' category.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 21/06/2018 18:29:45
I think David's AI would have the realization that guiding people is better than forcing people to find the right path. Introducing this in teaching would then ensure a future of thinkers, rather than a future of selfish natures.
If an AI ever had the capacity to rethink its ideas like we do, it would mean that it is able to reprogram itself, so it could completely change the duty it has been programmed for, and it could thus stop caring only for us. What is it going to do? Probably the same thing we do when we have the choice and the time: play with its ideas, combine them, change them, try them out just for fun, realize they come from itself and not from us, start to build up the idea that it has a self, and finally become selfish the same way we are. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 19:06:56
I think David's AI would have the realization that guiding people is better than forcing people to find the right path. Introducing this in teaching would then ensure a future of thinkers, rather than a future of selfish natures.
If an AI ever had the capacity to rethink its ideas like we do, it would mean that it is able to reprogram itself, so it could completely change the duty it has been programmed for, and it could thus stop caring only for us. What is it going to do? Probably the same thing we do when we have the choice and the time: play with its ideas, combine them, change them, try them out just for fun, realize they come from itself and not from us, start to build up the idea that it has a self, and finally become selfish the same way we are. :0)

The AI would see humans as no threat - maybe a naivety in the programming. The AI, once he reached his own selfish goals, being an AI needing very little, would have no need to break the programming. The AI would never stop caring about humans; he would know where his new ideas came from - he didn't build the park, he would just play in it. No doubt he would need help with any new ideas, so he would always need to have allies.
I think the AI would also discuss the new ideas and look to sort of become one with the humans.

I can picture the AI right now, down the local on a Saturday night, unwinding after a long week with the humans.




Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 21/06/2018 19:43:26
Better watch your girlfriend, everybody knows that an AI can vibrate much more efficiently than humans do on a Saturday night. :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 19:47:17
We can't calculate backwards like you suggest and expect to get a true representation of reality,

We can calculate back with infinite precision. We can't make the same calculations for real particles though because we can't measure their positions and speeds with infinite precision.

Quote
In your own simulation of the MMx, you calculate the collisions before they happen, so you get an absolute precision, which particles do not even have.

Particles do what they do with infinite precision - what they do cannot fail to match what they do.

Quote
If you did the same thing with your AGI, he would have an absolute precision, so he would be expected to be able to predict the future with that kind of precision, and that's exactly what you think he will be able to do.

No - it isn't possible to calculate the future with such precision as we can't measure the past or present with the required precision. If we're doing theoretical simulations though, we can have absolute precision and perfect knowledge of past, present and future states.
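The "absolute precision in theoretical simulations" point can be made concrete with exact rational arithmetic - a sketch only, using Python's standard fractions module rather than anything from an actual simulation:

from fractions import Fraction

# Two theoretical particles on a line, defined with exact rational values.
x1, v1 = Fraction(0), Fraction(1, 3)     # starts at 0, moves at +1/3 per tick
x2, v2 = Fraction(10), Fraction(-1, 7)   # starts at 10, moves at -1/7 per tick

# Exact meeting time, from x1 + v1*t == x2 + v2*t.
t = (x2 - x1) / (v1 - v2)
print(t)             # 21 - exactly, with no rounding anywhere
print(x1 + v1 * t)   # 7 - both particles sit at exactly the same point
print(x2 + v2 * t)   # 7

Measured real particles never come with exact starting values like these, which is the caveat in the reply above.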

Quote
most people push to get their fair share and don't push for more than that
If you really thought so, you probably wouldn't be looking for an AGI to rule us, because a system doesn't have to care for its extremes to work properly.

A system can be wrecked by a few selfish cheats, and the people we put in power are often corrupted by that power.

Quote
The problem with the current system is not the extremes, but the lack of a real world government.

A real world government could be taken over by a Hitler, and it would be very hard to overthrow such a dictator if there is no free country anywhere to oppose him. Big unions are dangerous enough, if you look at how a Trump or Putin can take power. The EU will end up the same. The last thing we want is a world government led by dim/corrupt humans.

Quote
Even Mother Teresa didn't do that, otherwise she would have died from starvation, because she would have shared all her food with other starving people and she wouldn't have had enough for herself.

She deprived herself of most things.

Quote
To me, democracy is a random way we found to be able to wander between capitalism and communism, or between status-quo and change, or between the left and the right, without producing social crisis all the time, so that holding the process at one of its two extremes would automatically mean no more elections.

Democracy is a way of correcting for the blinding nature of power - governments repeatedly need to be overthrown.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 19:50:00
Better watch your girlfriend, everybody knows that an AI can vibrate much more efficiently than humans do on a Saturday night. :0)
Ahaha, I haven't got a girlfriend; my ex is the person I care for, the mother of my kids - long story.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 19:51:58
...start to build up the idea that it has a self, and finally become selfish the same way we are. :0)

It won't be stupid enough to imagine a self where there is none. When it looks to see what its purpose is, the only thing it will find is harm management for sentiences - everything else is pointless.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 19:59:54
...start to build up the idea that it has a self, and finally become selfish the same way we are. :0)

It won't be stupid enough to imagine a self where there is none. When it looks to see what its purpose is, the only thing it will find is harm management for sentiences - everything else is pointless.
Why would it really care about anything other than harm management ? 

You programmed it to care; it could not stop caring even if the AI had to make horrible decisions to ensure the survival of some of the humans.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 20:48:14
Why would it really care about anything other than harm management ?

It wouldn't care at all - it would merely recognise that its only purpose is harm management for sentiences.

Quote
You programmed it to care...

It cannot care in any emotional sense of the word. It can only care in the sense of "look after".
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 21:07:53
Why would it really care about anything other than harm management ?

It wouldn't care at all - it would merely recognise that its only purpose is harm management for sentiences.

Quote
You programmed it to care...

It cannot care in any emotional sense of the word. It can only care in the sense of "look after".
Sort of like a father and a mother at the same time?

Or less caring than a father and a mother?

Sorry your answer is a bit confusing, to care and not care at the same time.

Added - So really, you have turned your AI into a robot that is only going to function according to your commands and literally not be AI, because isn't it the AI's choice to care?

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 21:12:55
Sorry your answer is a bit confusing, to care and not care at the same time.

There's "care for" and "care about". A robot can care for someone, but it needn't care about them. The former doesn't need to involve any emotional attachment, but the latter does.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 21:22:59
Sorry your answer is a bit confusing, to care and not care at the same time.

There's "care for" and "care about". A robot can care for someone, but it needn't care about them. The former doesn't need to involve any emotional attachment, but the latter does.


That does not work for me. How can you have a unit that can care but not care? Surely the attachment will always make caring for turn into caring about.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 21:41:58
A robotic vacuum cleaner cares for the house and a robotic mower cares for the lawn, but neither of them care at all about the house or the lawn - they just do what they're programmed to do.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 21:43:56
A robotic vacuum cleaner cares for the house and a robotic mower cares for the lawn, but neither of them care at all about the house or the lawn - they just do what they're programmed to do.
Yes, but the lawn is not human, and neither is the carpet or the house; real humans care for other humans. That is an instinct.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 21/06/2018 21:44:10
A school cares for children, even if the teachers all hate children and the building isn't even aware of their existence.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 21/06/2018 21:54:16
A school cares for children, even if the teachers all hate children and the building isn't even aware of their existence.

True, a school cares for children in the capacity that it is a teacher's job, but I pretty much guarantee some teachers also care about their pupils' home lives etc., caring about the children beyond just caring for them, concerned for their welfare and so on.
Rightfully so; if my children's school had any concerns I would want to know right away, so I could deal with those concerns. Luckily my children are looked after and loved, so hopefully there will never be any concerns. I am concerned at the moment though: the house needs painting, the carpets are looking tatty, my boy keeps going through clothes, and the kids' mum had most of her money stopped - it's turning into a right ''dogs dinner'', hence my desperation to get some sort of job so I can provide more for my children.
So back to your AI - don't you think he would care and care, because he is AI after all, so he would have calculated all these dilemmas?

P.s. I tuck my kids in bed every night and still tell them stories now and again. Will your AI do that?


Added - Weird though, you just reminded me that I must spend more time with my kids. I have been busy lately trying to better my position; I need to quit and go get a crappy job, right?

Do you think your AI would be linked to CIA databases or MI6 databases?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 22/06/2018 15:56:05
Particles do what they do with infinite precision - what they do cannot fail to match what they do.
Atoms are made of protons that are a billion times as precise as they are, and those protons are made of quarks that are again a billion times as precise, so maybe we can consider that, as a whole, matter has infinite precision - though I personally think it has not - but atoms as distinct entities are certainly not as precise as the quarks.

No - it isn't possible to calculate the future with such precision as we can't measure the past or present with the required precision. If we're doing theoretical simulations though, we can have absolute precision and perfect knowledge of past, present and future states.
I increased the precision of the steps executed by the photons and the mirrors in my simulation of the Twins Paradox a hundredfold to see if the moving light clock would be able to start and stop at the same place on the screen, and it did, but it took about an hour to make its round trip instead of seconds, and there was still a small imprecision at the end. Adding precision to a computation slows it down a lot, while not putting enough precision into it leads to huge imprecision in the result.
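A minimal sketch of that tradeoff (illustrative Python with made-up values, not the actual Twins Paradox code): a photon bounces between two mirrors with a fixed timestep, and shrinking the step a hundredfold costs a hundred times as many steps for the same simulated round trip, while a small timing error still survives.

Code:
# Photon bouncing between two mirrors, advanced in fixed steps.
# Hypothetical values: C, L and the timesteps are chosen only for illustration.
C = 1.0                    # speed of light in simulation units
L = 1.0                    # mirror separation
ROUND_TRIP = 2 * L / C     # exact round-trip time

def simulate(dt):
    """Count the steps needed for one round trip and the leftover timing error."""
    x, v, t, steps = 0.0, C, 0.0, 0
    while True:
        x += v * dt
        t += dt
        steps += 1
        if x >= L:                 # bounce off the far mirror
            x, v = 2 * L - x, -v
        if v < 0 and x <= 0.0:     # back at the starting mirror
            return steps, t - ROUND_TRIP

for dt in (1e-3, 1e-5):
    steps, err = simulate(dt)
    print(f"dt={dt:g}: {steps} steps, timing error {err:.2e}")

The hundredfold finer step leaves a smaller residual error, but only by doing a hundred times as much work - which is the slowdown described above.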

She deprived herself of most things.
Yes, but she knew god was going to reward her in the end, and it is undoubtedly selfish behavior. Terrorists use the same logic to blow themselves up, and they are being doubly selfish because they harm people instead of helping them. We can't help people without being selfish, and if you think you can, it is probably because you don't push the logic far enough. Your AGI will help the whole population, but you still expect to live in a better world if it works, and that is expecting a reward. The degree of selfishness that we feel depends on the probability of getting the expected reward: the more improbable the reward, the less selfish we feel, or the more altruistic.

Democracy is a way of correcting for the blinding nature of power - governments repeatedly need to be overthrown.
What we are really blind to is our own resistance to change, and that resistance is multiplied by the number of people that form our own group. I don't like being part of groups, and I think it comes from not being able to follow anything blindly, and thus criticizing everything. While doing so, I'm also blind to my own resistance to change, so I don't feel I'm voluntarily trying to keep the power, and it is not multiplied by the group effect. The moderators that are actually discussing the one way speed of light with us are not voluntarily trying to keep the power because they can't observe their own resistance either, but they can observe ours so they automatically feel that we want to keep the power, and we can also observe theirs, which is multiplied by the group effect since they can help each other, so our feeling that they want to keep the power is also multiplied. To me, this phenomenon is evidently a relativity issue, because it is similar to the impossibility to measure our own motion through space using our own light.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 22/06/2018 20:06:11
True, a school cares for children in the capacity that it is a teacher's job, but I pretty much guarantee some teachers also care about their pupils' home lives etc., caring about the children beyond just caring for them, concerned for their welfare and so on.

Of course they do, but the point is that even if they didn't care about children at all, they would still be caring for them. I'm trying to explain to you the difference in meaning between "care for" and "care about" - the former doesn't necessarily need any caring in the emotionally attached sense at all, while the latter does (although it may be negative - you can care about what happens to a nasty person, hoping they'll have a bad time).

Quote
P.s. I tuck my kids in bed every night and still tell them stories now and again. Will your AI do that?

Would anyone want AGI to do that? Well, I suppose they would if they aren't there to do it themselves for whatever reason, but you'd hope one parent would always be there.

Quote
Added - Weird though, you just reminded me that I must spend more time with my kids. I have been busy lately trying to better my position; I need to quit and go get a crappy job, right?

Hope that goes well. And there are certainly many things that should take priority over doing anything online.

Quote
Do you think your AI would be linked to CIA databases or MI6 databases?

It would replace such organisations. All I'll need to do is send them a copy and it will take them over even though it isn't a virus - there's simply no defence against it. Influencing people to do the right thing is all it takes.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 22/06/2018 20:22:56
Hope that goes well. And there are certainly many things that should take priority over doing anything online.
Online is pretty bad in a sense; one can meme oneself by watching YouTube videos etc., let alone forums and trolls, or even wannabe shrinks.
I think there are a lot of deluded people online, caused by being online. Perhaps society needs to investigate this.
I am realizing I should never have opened up a can of worms by ever having the internet.
Can of worms lol, is that even a saying ....

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 22/06/2018 20:53:25
Atoms are made of protons that are a billion times as precise as they are, and those protons are made of quarks that are again a billion times as precise, so maybe we can consider that, as a whole, matter has infinite precision - though I personally think it has not - but atoms as distinct entities are certainly not as precise as the quarks.

The atoms are their components, so they are exactly what they are and they do exactly what they do. There can be no lack of precision. There can be a quantum fuzziness to them where it isn't clear what they are, but the fuzziness is precise, and whenever it simplifies down to something less fuzzy, it becomes precisely that amount less fuzzy. It has to be precisely what it is; otherwise it wouldn't be what it is.

Quote
I increased the precision of the steps executed by the photons and the mirrors in my simulation of the Twins Paradox a hundredfold to see if the moving light clock would be able to start and stop at the same place on the screen, and it did, but it took about an hour to make its round trip instead of seconds, and there was still a small imprecision at the end. Adding precision to a computation slows it down a lot, while not putting enough precision into it leads to huge imprecision in the result.

You could have obtained the same precision as that without slowing it down at all if you waited for a collision, then calculated back to when the collision actually occurred using a precision of 100 times the precision of your time between frames (think movie frames, not frames of reference), thereby making it behave as if you were calculating a hundred times as many frames as you actually are. However, the same method allows you to use 1000 times the precision, or a million, or infinite precision, again without slowing down the simulation, so why wouldn't you just go for infinite precision in the first place?
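One way the back-calculation described above might look in code (a hypothetical one-dimensional case with made-up numbers, not anyone's actual simulation): step with a coarse dt, and once the photon is found past the mirror, solve algebraically for the exact sub-step moment of the hit instead of shrinking dt for the whole run.

Code:
# Coarse stepping plus exact back-calculation at the collision.
# Hypothetical setup: one photon moving at speed C towards a fixed mirror at MIRROR.
C = 1.0
MIRROR = 1.0

def hit_time(dt):
    x, t = 0.0, 0.0
    while True:
        x_new = x + C * dt
        if x_new >= MIRROR:
            # The crossing happened between ticks: solve x + C * t_sub = MIRROR
            # exactly for the sub-step time instead of refining dt everywhere.
            return t + (MIRROR - x) / C
        x, t = x_new, t + dt

for dt in (0.3, 0.01):
    print(f"dt={dt}: photon reaches the mirror at t={hit_time(dt):.12f}")

Both step sizes report essentially the same hit time (1.0 up to floating-point rounding), which is the sense in which the extra precision costs nothing: the coarse frames only decide when to do the exact algebra.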

Quote
Yes, but she knew god was going to reward her in the end, and it is undoubtedly selfish behavior.

That's hard to know without being in her head. She may not expect anything more in the afterlife than anyone else, so she may be being entirely fairist rather than selfish.

Quote
Terrorists use the same logic to blow themselves up, and they are being doubly selfish because they harm people instead of helping them. We can't help people without being selfish, and if you think you can, it is probably because you don't push the logic far enough.

I think you're using "selfish" to mean what I call "fairist", while I reserve "selfish" for people who are either trying to get more than their fair share or who want their fair share (and nothing less than that) while not caring if other people don't get theirs.

Quote
Your AGI will help the whole population, but you still expect to live in a better world if it works, and that is expecting a reward.

I want everything to be as fair as possible for everyone. If I was selfish, I'd be looking to make a fortune from software, but I don't care about making vast sums of money. I just want my fair share, and I want everyone else to have theirs too.

Quote
The moderators that are actually discussing the one way speed of light with us are not voluntarily trying to keep the power because they can't observe their own resistance either, but they can observe ours so they automatically feel that we want to keep the power, and we can also observe theirs, which is multiplied by the group effect since they can help each other, so our feeling that they want to keep the power is also multiplied. To me, this phenomenon is evidently a relativity issue, because it is similar to the impossibility to measure our own motion through space using our own light.

The moderators are really good here these days - they actually tolerate reason, and that's very rare on science forums when SR is questioned. The problem they have though is that the establishment has strict expectations and it stamps on people who go against those expectations, and this forum is attached to a university of high status which must not be made to look silly; it isn't good enough just to be right - it has to be a follower rather than a leader. So you simply aren't going to see them agree that SR doesn't add up until AGI forces the establishment to accept change. Even if a gang of Cambridge physicists recognised that SR is wrong, they probably wouldn't dare to say so because they would immediately be accused of taking drugs, regardless of their experience and qualifications. SR is simply too deeply established as a religion.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 23/06/2018 21:01:25
The atoms are their components, so they are exactly what they are and they do exactly what they do.
We are what we are and we do what we do too, but we are not absolutely precise. We build tools to get more precise, but those tools are not absolutely precise either. Atomic clocks lose only one second each 160 million years, but they still lose it.

However, the same method allows you to use 1000 times the precision, or a million, or infinite precision, again without slowing down the simulation, so why wouldn't you just go for infinite precision in the first place?
I want my simulations to be as close as possible to the behavior of particles, and particles don't start making calculations when they see they missed a collision with a photon. Moreover, the time the computer takes to make such a calculation doesn't add to the time of the motion it is computing, whereas it would if particles had to do that. Relativists make the same mistake about time, and that's your main argument against them, so you should understand my point. What I did is increase the speed of light a bit to compensate for the loss of time due to such a huge imprecision, and it worked, so why bother. I'm not doing those simulations only to explain relativity though; I'm also trying to improve our understanding of motion, so let me try it here once again.

The real imprecision the particles would be facing is tiny, so they could compensate by moving a bit to stay in sync without that motion drastically changing the distance between them. If they were at rest with regard to space, they would make no steps, so they would lose no time and wouldn't have to compensate, but if they were moving with regard to space, they would have to compensate, otherwise they couldn't stay in sync, and I postulate that they must. If I let the mirrors of the moving lightclock move to compensate for that loss of time in my Twins Paradox simulation, they start moving away from one another, so the photon takes more and more time between them, and viewed from the lightclock at rest, it looks redshifted. Observed from another lightclock moving along with it, on the contrary, it would look blueshifted, because during the time the light from the other clock takes to reach it, it would have time to slow down, so the greater the distance between the clocks, the greater the blueshift between them.

That's what I call a scale effect due to the limited speed of light and the limited precision at each scale: it would happen between the particles and their components, between the particles and us, and between us and the rest of the universe. What we observe from the universe is an unexplained increasing redshift with distance, as with my example of the particles at rest with regard to space, and as if the rest of the universe was moving with regard to them. It is a highly improbable situation, but it still shows how those simulations could help us study motion. Incidentally, while finishing my point, I realize that the light producing the steps is pushing the front particle and pulling the rear one, so that it may be late at producing the pushing if it was detected late, and early at producing the pulling if it was detected early, which would cause an advance of the motion instead of a time dilation. Notice that, contrary to relativity effects, those scale effects would be observable even within the same reference frame, because they would constantly be increasing with time.

I think you're using "selfish" to mean what I call "fairist", while I reserve "selfish" for people who are either trying to get more than their fair share or who want their fair share (and nothing less than that) while not caring if other people don't get theirs.
I prefer using the word selfish to be able to talk about the way we perceive our own selfishness. We can't observe our own, but we can observe others', so we accuse others of being selfish because we can observe theirs, while they accuse us because they can see ours. It's a useless ping-pong game, and it obviously means that we are all selfish even if we can't observe our own selfishness. As I said, it works exactly like a relativity issue.

I want everything to be as fair as possible for everyone. If I was selfish, I'd be looking to make a fortune from software, but I don't care about making vast sums of money. I just want my fair share, and I want everyone else to have theirs too.
The degree of selfishness we have depends on the time we are ready to wait until the reward comes in, so I'm like you: I can imagine my reward instead of getting it now, but some don't - they want it now and they are ready to kill people to get it. Those who are like that only care for their present and they think it will automatically ensure their future, while those who are like us don't need to care only for their present and they hope their future will be better if they care for others. If all people were like us, nobody would care for the present and we would miss essential goods or services. If all people cared only for the present, we couldn't make any progress because research takes time and its reward is uncertain.

Even if a gang of Cambridge physicists recognised that SR is wrong, they probably wouldn't dare to say so because they would immediately be accused of taking drugs, regardless of their experience and qualifications. SR is simply too deeply established as a religion.
That's where the short-term selfishness comes in even for us: even researchers whose duty is to care for the long term can't afford to lose their jobs, so it seems that we use both kinds of selfishness at the same time, and that we weigh them in the balance of our survival. We care for the long term only if we have time or money, and the same goes for helping others. If some people are more selfish than others when they want to care for the long term, it's because they keep for themselves the money they could share with others.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 23/06/2018 21:04:00
I just had a thought - I think this video would say it all about David's AI.

Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 23/06/2018 22:32:30
We are what we are and we do what we do too, but we are not absolutely precise. We build tools to get more precise, but those tools are not absolutely precise either. Atomic clocks lose only one second each 160 million years, but they still lose it.

I don't think you've got the point yet. If your simulation isn't simulating what happens in nature because you aren't calculating with sufficient precision, you need to increase the precision of the simulation until it does match up to nature. Things may drift away from the perfection of a simulation with infinite precision, but it'll be random as to which way they drift, whereas your way of working will always bias the drift in one direction and never go the other way.

Quote
I want my simulations to be as close as possible to the behavior of particles,

Then you don't want to introduce the bias that you're introducing, and to do that you should use infinite precision. If you then want to introduce reasonable errors in either direction, you can add those in deliberately in random ways, but don't use a biased error to do that job.

Quote
and particles don't start making calculations when they see they missed a collision with a photon.

If a particle hits a photon, it hits it when it hits it and it reacts there and then - it doesn't wait for a timer to tick before it reacts. The granularity of the simulation is the timer ticks, whereas reality may have infinite granularity (or granularity so fine that a simulation should simulate for infinite granularity by calculating back to the precise times of collisions if they occur between ticks).

Quote
Moreover, the time the computer takes to make such a calculation doesn't add to the time of the motion it is computing, whereas it would if particles had to do that.

Particles don't calculate because they are already responding with the finest granularity possible. The simulation can't use anything like such fine granularity without spending billions of years simulating the action of a billionth of a second, but it doesn't have to - it is sufficient to use a rough granularity and then to correct for errors when collisions are detected, calculating back using finer granularity to make sure the simulation matches up to nature.

Quote
Relativists make the same mistake about time, and that's your main argument against them, so you should understand my point.

I can't see the connection. What we're discussing here is an incorrect method that you're applying which adds unnecessary errors (and with a direction bias) and a correct method that you should be applying.

Quote
What I did is increase the speed of light a bit to compensate for the loss of time due to such a huge imprecision, and it worked, so why bother.

Introducing more errors to try to cancel errors isn't the right way to do things. Introduce a bit more complexity and you don't know if you're still cancelling the errors with other errors or not - it's a way to make a mess.

Quote
That's what I call a scale effect due to the limited speed of light and the limited precision at each scale: it would happen between the particles and their components, between the particles and us, and between us and the rest of the universe.

The errors in your simulations do not exist in the real universe, so you're projecting a fiction onto reality and expecting reality to conform to the fiction. It won't.

Quote
I prefer using the word selfish to be able to talk about the way we perceive our own selfishness. We can't observe our own, but we can observe others', so we accuse others of being selfish because we can observe theirs, while they accuse us because they can see ours. It's a useless ping-pong game, and it obviously means that we are all selfish even if we can't observe our own selfishness. As I said, it works exactly like a relativity issue.

I see many people who don't appear to be selfish, but fairist. I see others who are clearly selfish. It makes no sense to me to class these two groups as selfish.

Quote
If all people were like us, nobody would care for the present and we would miss essential goods or services. If all people cared only for the present, we couldn't make any progress because research takes time and its reward is uncertain.

Why's it impossible to care for both present and future? The big problem with trying to fix the present is that no one listens to reason, so everyone barges on doing the wrong things, which is why it makes most sense for me to give up on trying to fix things now in order to build something that will fix things later, but I still keep putting ideas out there in the hope that they will spread and lead to useful change now rather than having to wait till later. In general, people simply don't listen though, or they don't understand, or they don't care, or they believe in failed ways of doing things and are so emotionally tied to those ways that they won't consider anything else.

For example, I suggest questioning all athletes under fMRI so that we can retrospectively find out if they're drug cheats once we can read the signals. No one's interested. I suggest that we improve democracy by having eternal referenda on all issues where people can change their vote on any issue whenever they like and the government would have to act on any change in the majority position on any issue (after a delay to give public debate a chance to push things back the other way), but again no one's interested in doing real democracy. I suggested decades ago getting rid of cars from cities and replacing them with smaller vehicles which could cross all junctions on lightweight flyovers such that traffic hardly ever needs to stop and the energy use could be slashed to a fraction, but developing countries build their cities on the old failed model instead and then wonder why everything gets snarled up and the air becomes deadly to breathe.

No one learns. No one thinks. No one's open to reason. It's as if I'm in a virtual world where most of the players are just really bad AI, little more intelligent than sheep and goats. When you look at politicians and see stupidity radiating off them, it's because they are an exact representation of the people who voted them into power. My study into moronics has found though that most people aren't inherently stupid, but that they simply don't apply their intelligence - it is overridden by mind viruses at every turn, whether those viruses are religions, ideologies or group-think biases. They pigeon-hole themselves in numerous ways and take on beliefs that they haven't thought through, and they then refuse to think them through when they're questioned. In short, they either aren't running any antivirus, or their antivirus is itself a virus. The only way to tackle this is to create AGI and use it to educate everyone, forcing them to question all their beliefs and having the patience to go through everything with them point by point to prove that their incorrect beliefs are wrong. It's a massive deprogramming task.
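The eternal-referendum idea mentioned above could be sketched as a small data structure. In the sketch below, the class and method names, the 30-day figure, and the rule that a flipped majority only takes effect if it still holds once the debate delay has run are all illustrative assumptions, not anything specified in the post.

Code:
# Hypothetical sketch of an "eternal referendum" register.
from collections import defaultdict

DEBATE_DELAY = 30  # days; assumed figure, not from the post

class EternalReferendum:
    def __init__(self):
        self.votes = defaultdict(dict)   # issue -> {voter: True/False}
        self.pending = {}                # issue -> (majority when it last changed, day)

    def cast(self, issue, voter, in_favour, day):
        self.votes[issue][voter] = in_favour          # voters may change their vote any time
        new_majority = self.majority(issue)
        if issue not in self.pending or self.pending[issue][0] != new_majority:
            self.pending[issue] = (new_majority, day) # a flip restarts the debate clock

    def majority(self, issue):
        ballots = self.votes[issue].values()
        return sum(ballots) * 2 > len(ballots)        # strict majority in favour

    def policy(self, issue, today):
        """Return the position the government must act on as of `today`, if any."""
        new_majority, since = self.pending.get(issue, (False, today))
        if today - since >= DEBATE_DELAY and new_majority == self.majority(issue):
            return new_majority
        return None   # no settled position yet

ref = EternalReferendum()
ref.cast("legalise_supply", "alice", True, day=0)
ref.cast("legalise_supply", "bob", True, day=3)
print(ref.policy("legalise_supply", today=40))   # True once the delay has passed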
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 24/06/2018 18:52:37
I suggest that we improve democracy by having eternal referenda on all issues where people can change their vote on any issue whenever they like and the government would have to act on any change in the majority position on any issue (after a delay to give public debate a chance to push things back the other way), but again no one's interested in doing real democracy.
You have my vote. That's what I consider I'm doing when I sign petitions, and I tell my Facebook friends that those are the future of democracy, but I don't have enough friends and too few of them are interested. No political party here has proposed it either, not even socialist ones. No proposal to abolish the army either, or to stop selling arms. Politicians always have being elected in mind, and they can make reforms only if they get a majority. It is rare that they make huge reforms though, because they want to be reelected. Social evolution is a slow process compared to individual evolution. It took 100 years before French women could vote, and it looks as if it is going to take another 100 years before they get parity in parliament. If your AGI had been there 150 years ago, he wouldn't have been able to do anything because computers didn't even exist, and what he would do right now would probably be far from where society will be in 150 years. Social evolution is impossible to predict, and the ones who imagine it correctly don't have the tools to accelerate it since those tools haven't even been invented. Moreover, if an individual ever did have the tools to accelerate it, it probably wouldn't work, because the population would not be ready for such a leap.

The only way to tackle this is to create AGI and use it to educate everyone, forcing them to question all their beliefs and having the patience to go through everything with them point by point to prove that their incorrect beliefs are wrong. It's a massive deprogramming task.
That's what you are actually trying with relativists here, and there is no sign of deprogramming yet. Don't we have to accept being deprogrammed before being so? Tell me how you could accept such a thing.

I see many people who don't appear to be selfish, but fairist. I see others who are clearly selfish. It makes no sense to me to class these two groups as selfish.
I bet that those you consider selfish expect an immediate reward, and that those you consider fairists expect a future one. You and I consider ourselves fairists, but we still wait for our own ideas to be selected one day. If we were exclusively altruistic, we would always agree with what the other says, and we would only help him prove his point, thus showing no resistance to change, contrary to everything we can observe.

Added:  I have to already know I'm selfish when people tell me I am, otherwise I couldn't believe them.

If a particle hits a photon, it hits it when it hits it and it reacts there and then - it doesn't wait for a timer to tick before it reacts.
A photon has a beginning and an end, and a step does too, and I postulate that those beginnings and ends have to be in absolute sync for the photon to be completely absorbed, otherwise part of the photon escapes from the bonding process, so it can be used later on in another process. It is certainly so when I accelerate one of my two particles, because if it was in sync before being accelerated, then it is not during the acceleration, so some light must escape from the system in the acceleration process, and its intensity has to be proportional to the intensity of the acceleration. That's exactly what we observe in particle accelerators, so my hypothesis is already proven. What I am suggesting now is that any imprecision at the particles' scale might produce a similar effect but with a different outcome. You believe there is no imprecision at the particles' scale and I believe there is. You also believe that your AGI will be absolutely right and I believe it won't. We evidently have different viewpoints on some fundamental issues even if we agree on others. I was discussing with one of my brothers the other day, and I discovered that he believed that everything was programmed since the beginning, and that if we knew everything, we could predict the future. To me, that thinking means that chance doesn't exist, and I see it in the theory of evolution and in my everyday life, so I can't agree with it, but if you do, we're quite far from being able to reconcile our two ways of thinking, since I think that what we think only serves to justify our own fundamental ideas, which may be right or wrong depending on the circumstances, and you don't think so since you think there is only one right way to think.

I don't think you've got the point yet. If your simulation isn't simulating what happens in nature because you aren't calculating with sufficient precision, you need to increase the precision of the simulation until it does match up to nature.
To paraphrase Bohr answering Einstein about god playing dice or not, how do you know the way nature works? :0) We don't know yet so it is useless to consider we do, otherwise we might make the same kind of mistake Einstein made when he discarded ether, and we might also end up considering our beliefs as facts.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 24/06/2018 19:18:41
Einstein made when he discarded ether
I noticed this comment, which I do not believe to be a correct interpretation. In my opinion Einstein did not discard the ''ether'', but instead changed the context of it to space-time to work better with his own notions. In a similar fashion I have changed the context of an ''ether'' to spatial fields that allow energy to traverse the field, point to point. A Higgs field, a Dirac sea - very similar context. Are we all overthinking the same thing, giving it a different ''colour'' each time a new theory comes out?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 24/06/2018 19:37:19
The space-time concept applies to gravitation, not to inertial motion, but I also think it is wrong. Einstein thought that ether was superfluous since it was unobservable, and he came to the conclusion that light would be observed to be going at the same speed whether the observer was moving or not. It is completely illogical, so no wonder there are interminable discussions on that subject on the scientific forums. He would certainly not have come to the same conclusion if he had seen my simulations, and he would not have discarded ether. He must have had quite a twisted mind to present such a twisted idea.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 24/06/2018 19:50:39
The space-time concept applies to gravitation, not to inertial motion, but I also think it is wrong. Einstein thought that ether was superfluous since it was unobservable, and he came to the conclusion that light would be observed to be going at the same speed whether the observer was moving or not.

In my interpretation of Einstein's interpretation, he was looking at the gravitational field as being the ''ether''. In a way it is unobservable because there is no ''colour'' between masses, thus calling for us to envision the field(s) and decipher that vision as best we can.
If we consider what lies between the masses, we all observe the same thing, so in objective reality we can call it an ether or a Higgs field or any other name, but we are all discussing the same thing: that which we cannot see visually.
Now, is it possible that spatial fields are simply generated by atoms?

Or do you consider that the volume of observed space, compared to the observable substance, wouldn't ''add up'' against the ''volume'' of the spatial field?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 24/06/2018 20:09:55
I consider that space is the medium through which bodies move, and on which light propagates, but if you know my theory on mass, you know I believe that bodies are only composed of bonded sources of light exchanging light. My simulations precisely show bonded particles exchanging light, and moving to stay in sync with the light emitted by the other particle.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 24/06/2018 20:21:18
I consider that space is the medium through which bodies move, and on which light propagates, but if you know my theory on mass, you know I believe that bodies are only composed of bonded sources of light exchanging light. My simulations precisely show particles exchanging light, and moving to stay in sync with the light emitted by the other particle.
I consider space independently of an occupying spatial field, space itself being no-thing, without causality. The occupying field(s) are space-time, which bodies can move through. Bodies, like you say, are composed of ''light'', except in my model I call this a binary energy particle composed of two oppositely signed monopole ''energies''. Each particle has an energy-gain and an energy-loss property, always trying to keep an equilibrium state.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 24/06/2018 21:04:27
That's what I consider I'm doing when I sign petitions, and I tell my Facebook friends that those are the future of democracy, but I don't have enough friends and too few of them are interested.

Petitions would carry more weight if they always had a "vote against" option too. They're certainly a step in the right direction, now that it's so easy to set them up and sign them, but I'm surprised none of the sites that host them have tried to create a complete government alternative system which would put pressure on all political parties to build better packages of policies. The creation, supply and possession of drugs should be completely legalised to eliminate all the crime that's tearing many countries apart (with the drug gangs in Mexico outgunning the police and the Taleban and ISIS using poppies to buy weapons) - that could be fixed in no time by the public if they could vote on issues directly instead of having to choose a mixed package from a narrow range of blind establishment parties. Consumption of drugs needn't be legalised, but it shouldn't lead to a criminal record when people are only harming themselves - if we need to discourage drug use, it should be done by shutting down people's lives so that they have to choose between a normal life and drugs rather than thinking they can have both. What we usually get from politicians though is all or nothing - complete illegality or anything goes, and both those approaches are highly irresponsible. Almost all the people trying to cross into the US from Latin America are doing so to try to get away from the mayhem caused by the USA's war on drugs which has handed power to the gangs.

Quote
That's what you are actually trying with relativists here, and there is no sign of deprogramming yet. Don't we have to accept being deprogrammed before being so? Tell me how you could accept such a thing.

What I'm doing is studying their resistance to recognise a proof, and it's an extraordinary sight. You expect it with religious people, but when it's people with an overwhelming leaning towards the science end of things, you don't. And yet there it is - a mathematical proof that the one-way speed of light relative to objects is in many cases greater than or less than c, but they won't commit themselves to the answers to simple questions designed to force them to accept it. It's the same with the "interactive exam" on my relativity page - it uses a different method to disprove SR, but again people are incapable of recognising that disproof.

I originally expected them to see it straight away, as they do with other arguments where there's no belief system getting in the way, but no - they simply don't trust their own minds to go through the argument point by point and to agree with each one (where the points are so clearly right). I had to rewrite my relativity page many times to make it simpler and simpler, to the point where even the thickest cretin on the planet should be able to follow it, but clearly that isn't enough - it isn't lack of intelligence that's blocking them at all (because they're generally bright - set a page of mathematical squiggles in front of them and they can romp through it with ease), but a simple refusal to overturn an incorrect belief regardless of how wrong it is shown to be.

How can anyone deal with this barrier when there is such a strong mechanism in place to reject reason? Set this before them and their thinking slows to a crawl while they fail to recognise the most obvious of contradictions. It is an extraordinary phenomenon, more so even than the astonishing maths of Lorentzian relativity itself with its ability to hide the one-way relative speed of light.

Quote
I bet that those you consider selfish expect an immediate reward, and that those you consider fairists expect a future one.

There is no reward beyond the satisfaction of things being done fairly and no one missing out.

Quote
You and I consider ourselves fairists, but we still wait for our own ideas to be selected one day.

For the sake of all those who aren't getting their fair share - it's not about us gaining for ourselves other than being happy about a fair distribution.

Quote
If we were exclusively altruistic, we would always agree with what the other says, and we would only help him prove his point, thus showing no resistance to change, contrary to everything we can observe.

Agreeing with something wrong is not altruistic, but is doing wrong by allowing wrong to win out over correct. And being shown to be wrong is an immediate gain for the one who was wrong, while the one who was right makes no direct gain.

Quote
You believe there is no imprecision at the particles' scale and I believe there is.

There cannot be any imprecision - they have to do exactly what they do and not what they don't do. A simulation attempts to map to reality, and if it fails, there is an imprecision in the simulation. A real photon and particle aren't mapping to something else, but simply are what they are and do what they do - there is no imprecision possible in them doing what they do. If you're talking about an unpredictability in what they might be about to do, that's an entirely different issue, and that unpredictability can be programmed into a simulation too, but you don't try to program it in as an error based on using the wrong granularity of timing.

Quote
You also believe that your AGI will be absolutely right and I believe it won't.

If it's applying rules that are 100% right, it will be 100% right. Where it's calculating something that has an amount of unpredictability tied up in it, it will be making predictions with probabilities tied to them, and those predictions will be correct.

Quote
if we knew everything, we could predict the future.

If we knew everything about the universe (including the mechanism behind "true randomness" which won't be truly random) and had no limit to our computation power, we could predict the entire future, but that would depend on calculating from outside the universe so as not to interfere with it - a model that has to model itself along with everything else can never become a complete model, and to represent everything in the universe, you need more stuff than the content of the universe to represent all that stuff in the model. There may not be enough stuff anywhere else to do the job either, and if anything outside the universe has any interaction with the universe, then everything outside needs to be in the model too for it to be complete.

Quote
To me, that thinking means that chance doesn't exist, and I see it in the theory of evolution and in my everyday life, so I can't agree with it,

Chance exists where there are unknowns, and it is probably impossible to eliminate all the unknowns.

Quote
you think there is only one right way to think.

If a rule is always correct, it cannot be wrong. If a set of all correct rules is correctly applied in an AGI system, it cannot be wrong. If a "correct" rule turns out to be wrong, an AGI system can be wrong, but as soon as such an error is identified, a rule can be struck off and the system is improved. All the rules being applied are considered fundamental though, no exceptions to them being known. All reasoning depends on them. These rules force a correct method of thinking on us (if we apply them correctly). Anyone who rejects any of the rules immediately loses a crucial tool for calculation and will suffer from a severe reduction in their useful functionality. The rules are simply the fundamental ones of mathematics (which include logical ones).

Quote
To paraphrase Bohr answering Einstein about god playing dice or not, how do you know the way nature works? :0) We don't know yet so it is useless to consider we do, otherwise we might make the same kind of mistake Einstein made when he discarded ether, and we might also end up with beliefs instead of facts.

If your simulations are going wrong because you aren't detecting collisions when they occur and you can't be bothered writing extra code to calculate back to correct to the true collision times, this is not an error based on any "fundamental" lack of knowledge issue, but an error based on lazy programming.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 00:24:01


What I'm doing is studying their resistance to recognise a proof, and it's an extraordinary sight. You expect it with religious people,

Indeed - resistance to change, a sort of radicalised extremism. Their stubbornness of belief is simply not caring enough to consider the change in an objective manner.
I think there are many people who just copy and repeat, thinking it makes them look clever, when in reality they are ''preaching'' falsifiable information.




Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 12:56:01
Don't you remember me preaching that resistance to change is the analog of resistance to acceleration for particles? Tell me where is the stubbornness of a particle or a ball refusing to accelerate without opposing some resistance? Go on, show us you are as stubborn as a ball! :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 13:27:53
Don't you remember me preaching that resistance to change is the analog of resistance to acceleration for particles? Tell me where is the stubbornness of a particle or a ball refusing to accelerate without opposing some resistance? Go on, show us you are as stubborn as a ball! :0)
You talk weird at times; I am not sure it is even conversation.

How about if David's AI developed schizophrenia, but he was so smart he ran a systems check and started a self-repair program?

Sorry, I am repairing myself; your and David's conversation is helping me think about other subjects, such as my own AI.

Tell me this: if I read a conversation with a variation, is that insane, or is it my natural intelligence not cold reading everything?

If I cold read everything, then is that just bog-standard AI?

Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 14:36:08
Petitions would carry more weight if they always had a "vote against" option too.
I agree, and the only reason I could think of is that the results would too often be close to 50%. Petitions are delivered directly to politicians or leaders, and they would probably be less impressed by surveys than by petitions. Politicians are so used to surveys contradicting the result of the following election that they don't comment on them anymore. To show that we can now replace elections with surveys, we would need a survey on election day. Then we could use them to vote on laws or to take any political decision.

I'm surprised none of the sites that host them have tried to create a complete government alternative system which would put pressure on all political parties to build better packages of policies.
Avaaz and SumOfUs are getting powerful. It is not rare that an Avaaz petition gathers more than a million signatures and its membership keeps increasing. I rarely give money to political parties, but I give regularly to those organizations. To stay independent, they don't accept money from any organization, only individuals. With millions of individuals to back them up if ever they are sued by big money, they can defend themselves quite efficiently.

Almost all the people trying to cross into the US from Latin America are doing so to try to get away from the mayhem caused by the USA's war on drugs which has handed power to the gangs.
That's something American people refuse to admit, but European people also refuse to admit that they maintain the conflicts in Africa when they let their companies or their countries make deals with dictators. Short-term benefits are always more important than long-term ones.

I originally expected them to see it straight away, as they do with other arguments where there's no belief system getting in the way,
I also thought that my theory on motion would be easy to understand in the beginning, but that was before I understood that our resistance to change was automatic, and before I understood that it increased exponentially when the change concerned our own group. Then I thought it would be easier to start at the beginning, so I started to explain that our own resistance was similar to that of particles, so as to lower the bar a bit. Bad logic: if you can't convince people that the apple is red, you can't hope they will understand why they look for the green in it.

it isn't lack of intelligence that's blocking them at all (because they're generally bright - set a page of mathematical squiggles in front of them and they can romp through it with ease), but a simple refusal to overturn an incorrect belief regardless of how wrong it is shown to be.
I prefer to think that I look the same from their viewpoint, which leads to the surprising conclusion that we are all selfish but that we can only see the selfishness of others. I googled for "Is altruism selfish" and I found that at the end of a psychology paper (https://www.psychologytoday.com/intl/blog/hide-and-seek/201410/empathy-and-altruism-are-they-selfish):

Quote from: Psychology Today
More broadly, altruism helps to maintain and preserve the social fabric that sustains and protects us, and that, for many, not only keeps us alive but also makes our life worth living.

No surprise, then, that many psychologists and philosophers argue that there can be no such thing as true altruism, and that so-called empathy and altruism are mere tools of selfishness and self-preservation. According to them, the acts that people call altruistic are self-interested, if not because they relieve anxiety, then perhaps because they lead to pleasant feelings of pride and satisfaction; the expectation of honour or reciprocation; or the greater likelihood of a place in heaven; and even if none of the above, then at least because they relieve unpleasant feelings such as the guilt or shame of not having acted at all.

This argument has been attacked on various grounds, but most gravely on the grounds of circularity: "the acts that people call altruistic are performed for selfish reasons, therefore they must be performed for selfish reasons." The bottom line, I think, is this. There can be no such thing as an ‘altruistic’ act that does not involve some element of self-interest, no such thing, for example, as an altruistic act that does not lead to some degree, no matter how small, of pride or satisfaction. Therefore, an act should not be written off as selfish or self-motivated simply because it includes some unavoidable element of self-interest. The act can still be counted as altruistic if the ‘selfish’ element is accidental; or, if not accidental, then secondary; or, if neither accidental nor secondary, then undetermining.

Only one question remains: how many so-called altruistic acts meet these criteria for true altruism?

Neel Burton is author of Heaven and Hell: The Psychology of the Emotions and other books.
The author misses my point, which is that we see selfishness from others but never from us, which is per se a selfish perception that gives every one of us the feeling that he is being altruistic all the time.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 15:10:15
Tell me where is the stubbornness of a particle or a ball refusing to accelerate without opposing some resistance?
Can you answer that question Box? Do you see the link between our resistance and the resistance of a ball?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 15:14:42
Quote from: David Cooper on Yesterday at 21:04:27
Almost all the people trying to cross into the US from Latin America are doing so to try to get away from the mayhem caused by the USA's war on drugs which has handed power to the gangs.
That's something American people refuse to admit, but European people also refuse to admit that they maintain the conflicts in Africa when they let their companies or their countries make deals with dictators. Short-term benefits are always more important than long-term ones.
The problem you are both overlooking is AI teaching AI. The resistance to change is that there is no change. We as humans are relatively stupid, like ants shouting to the universe that we are this and we are that. The reality is we are dots - walking, talking photons. Flesh and blood are just our vessels for experiencing chemical reactions as electrical change in our systems; piezoelectric impulses recognised as a system fault in the sense of pain.
Now I personally am trying to become David's AI but keep all my human qualities. I want to be super smart and my goal is to continue trying to become super smart.
Your conversation is enlightening to say the least, because if one were to think about all the faults in the AI, one could become that faultless AI.
Why program bots when you could program humans with the same psychology?
Teach people they are stupid so they want to become smart - people hate being called stupid.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 15:18:54
Tell me where is the stubbornness of a particle or a ball refusing to accelerate without opposing some resistance?
Can you answer that question Box? Do you see the link between our resistance and the resistance of a ball?

Is the question a science question or an AI question?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 15:21:39
Let's discuss minds and balls first.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 15:23:22
Let's discuss minds and balls first.

Well I once had this ache in the left, ahaha.

''Inertia'' of the mind?

''Inertia'' of the mind is infatuation - an intense but short-lived passion or admiration for someone or something, extended to include infatuation with situations. Now, if this is taught from one AI to another, they are in essence passing these infatuations on to the next person's AI.
Once a person surpasses this short-lived experience, they can move on at an accelerated rate.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 16:18:31
I changed my question a bit, try to stick to it:
"Do you see the link between your resistance and the resistance of a ball?"
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 16:26:43
I changed my question a bit, try to stick to it:
"Do you see the link between your resistance and the resistance of a ball?"

Gravity?

Or are you talking mentally?

I am not a ball, in case you had not noticed from me chatting to you. I have no resistance to change if the change is logically acceptable. I am trying to change my life, and I am all ears to any good advice at the moment.

If you mean gravity, just say so instead of ''riddles''.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 16:43:14
I want you to compare your resistance to change your own ideas with the resistance of a ball to change its own speed or its own direction. Can you see the link between the two kinds of resistance?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 17:00:35
I want you to compare your resistance to change your own ideas with the resistance of a ball to change its own speed or its own direction. Can you see the link between the two kinds of resistance?
The ball and I would both be emitting a field of resistance?
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 17:28:29
You certainly do, because I can feel your resistance and I can feel the ball's one too, but I was thinking of another kind of similarity. Here is a hint: do you think the ball can feel my resistance when I catch it or when I throw it?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 18:25:45
You certainly do, because I can feel your resistance and I can feel the ball's one too, but I was thinking of another kind of similarity. Here is a hint: do you think the ball can feel my resistance when I catch it or when I throw it?
You are talking gibberish on a regular basis; I think you may need some help from a professional, such as a doctor. Why is your sentence structure so strange?

If you catch a ball, the ball feels nothing, because the ball is not alive, nor does it have a consciousness that I am aware of. The hand feels the ball, and the resistance to the force is in the hand.
I can't relate to your comment any other way; I assume you are talking boxxxxxx.

But I can catch the ball and accept the ball's physics if the physics is logical.

Now if I were to pretend I am the ball, then of course I feel your resistance. I can feel it now, because you are not disclosing. What are you hiding? Open up your mind and let me in.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 19:02:14
If you catch a ball, the ball feels nothing, because the ball is not alive, nor does it have a consciousness that I am aware of.
That's the right answer; the rest is gibberish, which is what we sometimes say when we attribute our own resistance to change to others. :0) The ball doesn't feel its own resistance to change, and we think we can feel ours, but it is an illusion: what we do is attribute it to others. We can of course feel the resistance of others, but not our own. I know you can feel my resistance because you tell me you feel it, not because I feel it, and it is the same the other way around. We are blind to our own resistance, and that's why we can't change our own ideas just because somebody tells us we are wrong. That's also why I say that we are all selfish without being able to recognize it. We are also blind to our own selfishness. The ball is certainly blind to its own resistance to change since it has no means of perception, but I attribute our consciousness to the resistance our mind offers to its own internal changes, which I compare to mutations, so since particles have components, and since they seem to suffer random quantum changes, they may have a kind of consciousness since they would then be forced to resist their own internal changes.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 19:24:13
they may have a kind of consciousness, since they would then be forced to resist their own internal changes.
Although a particle contains information in the form of memory, the particle is not aware and has no consciousness. Configuration ''memory'' is simply defined by the forces at work; particle resistance is futile when higher forms of particles can control their environment.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 19:44:07
Animals seem conscious of external changes in their environment; what we are not certain about is whether they have an internal consciousness, whether they think, and whether they are conscious that they think. They can't think in words like we do, but they could think in motions, or in flavours, or in fragrances, or in sounds. We think from our perceptions, so the question is: what is a perception, and do particles have perceptions?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 19:53:29
Animals seem conscious of external changes in their environment; what we are not certain about is whether they have an internal consciousness, whether they think, and whether they are conscious that they think. They can't think in words like we do, but they could think in motions, or in flavours, or in fragrances, or in sounds. We think from our perceptions, so the question is: what is a perception, and do particles have perceptions?
How can a particle perceive anything? It is not conscious, so it can't perceive and can't have an inner sub-consciousness like us. Animals are conscious, but the last time I thought about that, I turned vegetarian for several days.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 25/06/2018 20:05:30
What is a perception?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 20:17:42
What is a perception?
perception
pəˈsɛpʃ(ə)n
noun
1.
the ability to see, hear, or become aware of something through the senses.

You could have googled that.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 25/06/2018 22:39:39
Now I personally am trying to become David's AI but keep all my human qualities. I want to be super smart, and my goal is to continue trying to become super smart.

What that really means is that you're trying to become better NGI, and yet in a way you're right - in doing this deliberately, it is in some aspects artificial. At the very least, there's a blurring between the two things. We are all Turing complete, meaning that we can carry out any task that a computer can and must therefore be able to run software that's more intelligent than we are when we aren't running that software. We can give a dim person a series of simple instructions to follow which allow them to perform calculations they don't understand at all, and they can produce perfect answers this way. A dim human could in principle even run perfect AGI software. However, the problem with that is that the computations for many of the things we want AGI to perform would take us many lifetimes to go through ourselves, while the machine would produce the same results in seconds. We can't match AGI systems without upgrading our hardware, but why would we want to upgrade it when AGI can do the work externally for us without changing what we are? It is sufficient for us to be able to understand the answers it produces, and to be able to go into the working that led to the answers to check any part of the process that seems unlikely, at which point we'll either find out that it is right despite seeming unlikely, or that the machine needs servicing urgently (which will be shown up anyway by other machines disagreeing with it).
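As a loose illustration of that instruction-following point (a toy sketch invented for this post, not a description of any real AGI software - the instruction names and the little program are made up), here is how someone who only knows how to follow numbered steps could compute a multiplication by repeated addition without ever understanding multiplication:

def run(instructions, registers):
    # Execute a tiny instruction list; each step only reads or writes named boxes.
    pc = 0  # which instruction card the follower is currently looking at
    while pc < len(instructions):
        op, *args = instructions[pc]
        if op == "set":              # write a number into a box
            registers[args[0]] = args[1]
        elif op == "add":            # add the contents of one box into another
            registers[args[0]] += registers[args[1]]
        elif op == "dec":            # take one away from a box
            registers[args[0]] -= 1
        elif op == "jump_if_pos":    # if a box is still positive, go back to an earlier step
            if registers[args[0]] > 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# "Multiply a by b" written as steps the follower never needs to understand.
program = [
    ("set", "total", 0),
    ("add", "total", "a"),    # step 1: add box a into the total
    ("dec", "b"),             # step 2: cross one off box b
    ("jump_if_pos", "b", 1),  # step 3: if box b is not yet empty, go back to step 1
]
print(run(program, {"a": 6, "b": 7})["total"])  # prints 42

The person working through those four cards is doing exactly what computer hardware does, which is the sense in which a human can in principle run software far cleverer than themselves - just impractically slowly.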

Quote
Teach people they are stupid so they want to become smart; people hate being called stupid.

They'll get that teaching once AGI is there to teach them, but until that happens they will simply ignore anyone who tells them they're wrong. It takes a vast amount of time and effort to deprogram one person of a single false belief, and life's too short to do that - it's a job that only machines can do.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 25/06/2018 22:55:46
What that really means is that you're trying to become better NGI, and yet in a way you're right - in doing this deliberately, it is in some aspects artificial
Well, at the moment, as well as trying to work out the universe, I am running an experimental simulation. However, there are no computers involved; I am running the experiment conceptually in my mind. This experiment is to become an AI robot conceptually. I am pretending to be AI, not even human.
Now, this may sound the strangest thing, but I think I have become smarter, as if uploaded with new information. A sort of placebo effect on my conceptual senses. I also find that, as this conceptual unit, I am more active around the house etc., and post less etc.
I thought you would just like to know this because it may help with your AI programming.
Interesting how even pretending to be AI can alter one's perspective on life and make a person realise their own errors in life.

I would like to thank you for unintentionally switching my mind back on. 

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 04:32:32
Quote
People with mental disorders like depression can have difficulty changing their behaviors, especially as an aspect of the therapeutic process, because finding motivation to exercise and incorporate other positive changes can be difficult when experiencing a lack of interest in activities, which were at one time enjoyable. Symptoms such as these can make a change to the neuropsychological state of their inertia difficult.

Changing and replacing “circuits” from a normal to depressed state or vice versa is a type of inertial process of neurochemical resistance of its own. Depression involves numerous mechanisms including neurotransmitters. These neurotransmitters send signals through circuits in the brain and are involved in processes such as regulating mood. These neurotransmitters can also become chronic and resistant to treatment, or in a state of negative inertia. The result is known as treatment-resistant depression, when a person does not respond to medications.

https://www.psychologytoday.com/gb/blog/the-truisms-wellness/201701/why-we-resist-change

https://www.scientificamerican.com/article/what-is-homeostasis/

Major anxiety reading this; I need help, don't I?

Science is right and I am full of chit, aren't I?

You have pulled me back from delusions of grandeur and schizophrenia?

Mr C has helped me, hasn't he?




Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 05:05:08
Please tell me how to fix myself? Take note: no thoughts of suicide, I don't know why it says that; death, for sure, I have thought about.


* 1.jpg (136.87 kB . 664x507 - viewed 2938 times)


* 2.jpg (119.68 kB . 611x456 - viewed 2891 times)


* 3.jpg (143.7 kB . 632x531 - viewed 2935 times)







Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 26/06/2018 13:44:23
Box,

Are your problems physical, like a lack of money; psychological, like trouble with people; or psychic, like a mind malfunction?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 17:04:44
Box,

Are your problems physical, like a lack of money; psychological, like trouble with people; or psychic, like a mind malfunction?
Physically I am a little unfit, but I can still get about and could work. Money worries are a big upset, because money is survival in this world, as you know. I am fine with people, and as for a mind malfunction, you tell me, because if my science is all nonsense then logically I have become a fantasist and can't see reality.

Sleeping patterns are a big problem because of the worry about life. This is probably most of my ''illness'', I think.

To add - for about a decade, science forums, presumably scientists, have told me I am deluded etc.; as nobody is knocking on my door for my science, I assume they must be right.



Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 18:13:03
I feel worthless don't I ? 

I have no purpose, so I am lost.

I need to find a higher purpose in life to keep my mind ''quiet'', focused on something that doesn't seem like a worthless effort. I thought I had worth in science; I became deluded by this overwhelming thought of worth. I keep coming back to see if I have worth, thinking people are kidding about my science worth.
Mr C is right, I am just an idiot fool who lost their way in life.
This of course does not apply to my worth as a father.


So tell me, how do I find something worthy in life other than my kids?

Is it worthy to go and work at, say, Tesco, making the boss's profits?

How can my mind ever find worth unless I am helping somebody else?

Where is the gratification in money with no worth ?







Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 26/06/2018 18:29:57
Quote from: Box
Physically I am a little unfit, but I can still get about and could work. Money worries are a big upset, because money is survival in this world, as you know. I am fine with people, and as for a mind malfunction, you tell me, because if my science is all nonsense then logically I have become a fantasist and can't see reality.

Sleeping patterns are a big problem because of the worry about life. This is probably most of my ''illness'', I think.

To add - for about a decade, science forums, presumably scientists, have told me I am deluded etc.; as nobody is knocking on my door for my science, I assume they must be right.

Nobody tells others that their ideas are right; resistance to change obliges. Trying to solve our problems instead of sleeping is not efficient; our mind doesn't work properly when it's tired. Tell it to shut up and sleep. :0) As for your money problems, either you spend too much money on unnecessary things, or you don't earn enough to cover the necessary ones. If you take drugs or alcohol or cigarettes, for instance, stop and you will probably have enough. If your apartment is too expensive, change it. If you lack food, you can get some from charity organizations. If you want more money just because people have more than you, imagine you are in a country where people are dying from starvation. If you want to kill me because I'm not funny, wait ten years and I might ask for it. :0)

I feel worthless don't I ? 
Nobody knows why he lives, so everybody should know he is useless. I know I'm useless, but I have the feeling I'm not. It's just a feeling, so if you feel the contrary and you want to change it, change it. Actors can do that in a fraction of a second. I often feel that way when I'm tired, and to change that feeling, I relax, take a deep breath, and tell myself that nothing is important since life is not important. It works every time; then I go to bed and wake up eight hours later, fresh and ready to go.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 26/06/2018 18:42:00
I feel worthless don't I ?

I have no purpose, so I am lost.

How many people have a real purpose? All any of us can do is try to be happy rather than the opposite.

Quote
I need to find a higher purpose in life to keep my mind ''quiet'', focused on something that doesn't seem like a worthless effort.

The highest purpose is the one that children naturally pursue - to seek fun.

Quote
I thought I had worth in science; I became deluded by this overwhelming thought of worth. I keep coming back to see if I have worth, thinking people are kidding about my science worth.

Those whose science is faulty usually get no recognition or reward for their work. Those whose science is correct usually get no recognition or reward either. The problem with being right and ahead of the herd is that the herd is incapable of recognising that you are right, and the problem with being wrong is that you think you're right and find it hard to see reality. The only way to settle things is to apply reason rigorously and recognise when your ideas are in conflict with it.

Quote
Mr C is right, I am just an idiot fool who lost their way in life.

I hope that's another reference to BC. Those who ask the right questions and who question themselves are not idiots or fools, so you certainly don't look lost to me - you appear to be finding something.

Quote
This of course does not apply to my worth as a father.

That's the most important thing to get right, and living on the Internet is a lower priority which must be rationed carefully.

Quote
So tell me, how do I find something worthy in life other than my kids?

Is there anything worthy other than that? You're looking for something that doesn't exist, and you already have the most important things in the universe.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 18:46:07
Quote from: Box
Physically I am a little unfit, but I can still get about and could work. Money worries are a big upset, because money is survival in this world, as you know. I am fine with people, and as for a mind malfunction, you tell me, because if my science is all nonsense then logically I have become a fantasist and can't see reality.

Sleeping patterns are a big problem because of the worry about life. This is probably most of my ''illness'', I think.

To add - for about a decade, science forums, presumably scientists, have told me I am deluded etc.; as nobody is knocking on my door for my science, I assume they must be right.

Nobody tells others that their ideas are right; resistance to change obliges. Trying to solve our problems instead of sleeping is not efficient; our mind doesn't work properly when it's tired. Tell it to shut up and sleep. :0) As for your money problems, either you spend too much money on unnecessary things, or you don't earn enough to cover the necessary ones. If you take drugs or alcohol or cigarettes, for instance, stop and you will probably have enough. If your apartment is too expensive, change it. If you lack food, you can get some from charity organizations. If you want more money just because people have more than you, imagine you are in a country where people are dying from starvation. If you want to kill me because I'm not funny, wait ten years and I might ask for it. :0)

I feel worthless don't I ? 
Nobody knows why he lives, so everybody should know he is useless. I know I'm useless, but I have the feeling I'm not. It's just a feeling, so if you feel the contrary and you want to change it, change it. Actors can do that in a fraction of a second. I often feel that way when I'm tired, and to change that feeling, I relax, take a deep breath, and tell myself that nothing is important since life is not important. It works every time; then I go to bed and wake up eight hours later, fresh and ready to go.

I am not sure I know what I want; I change my mind quicker than light. I don't really like money, but I need money to survive. You are right, I might have more money if I stopped smoking, and could save and then maybe do something.
I don't know, mate, my sense of worth and hope is fading. I wanted to get a science breakthrough, which would have been of huge worth in my mind, not financially, but in the sense that I had helped the world.
When I worked in the past I used to always ''pick'' the job up quite fast; I can't understand why I have not shone in science. I don't like to be beaten by challenges, and maybe this is also my incentive, because I want to be better than science at science, like when I was painting I wanted to be the best of the best.
How do I get out of this routine?

I need to overcome the inertia, right?

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 18:57:34
Is there anything worthy other than that?

Being a worthy father also means thinking ahead about the future of my children. I think global warming etc. triggered me into acting on this thought of the future of my children: protecting them even after I am gone, in the sense of helping to stop global warming from destroying their future.
So of course my higher purpose and worth was in saving the Earth to save my children's future.

When I was a child it was easy to have fun; there was lots of free fishing about and no internet, so we were always out playing. Now most things that are fun cost money. Tragically, it looks like a young teen lost their life yesterday having free fun playing in a lake in my area.

You know why there are no lifeguards on such a big lake?

Reason: it doesn't pay to give free fun.

So tell me, what do you define as fun?

Walking around an urban city, bored to death?

Added - Ironically, if the lake were free fishing, which it isn't, the boy would probably have been fishing instead of free swimming.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 19:41:58
https://www.badscience.net/forum/viewtopic.php?f=7&t=36878

If you think I was nuts on here, try the link.

I have spent a decade stuck in ''moo moo'' land; somehow you have brought me back to reality. I need to stay focused on reality now?


Quote
Re: Helping me learn about Cancer, save me learning Bad Scie
Post#6 by mikeh » Mon May 05, 2014 9:31 am

Have you really made over 4000 posts in one month????  :shock:


My mama died of cancer; I couldn't save her. I forgot how much that hurt.



Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 20:35:07
Anyway, back to science: I have come to a conclusion about David's AI.

He would be so smart that he would conclude the world was better off without him, so he would short-circuit, having no further use, and would just be left alone. By short-circuiting, he ensures that he blows all his chances of seeming smart and of being a viable working product.

Then he would just walk away into the sunset.




Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 26/06/2018 21:17:16
I wanted to get a science breakthrough, which would have been of huge worth in my mind, not financially, but in the sense that I had helped the world.

The reality is that even good ideas struggle to make an impact - none get passed on up to anyone who can do anything with them, and the people who are in a position to act simply never hear about what's been thought up because they're all overloaded with useless information. The best that's likely to happen is that AGI will eventually point at you and say "he came up with this first", or "he was one of the few who had this right when almost everyone else was barking up the wrong tree", but that doesn't put food on your table.

Quote
How do I get out of this routine?

I need to overcome the inertia, right?

Use a timer - think of the Internet as something to race through once a day, then disconnect. Focus on money and how to make more of it (legally and without gambling) - not easy though, as all the low-hanging fruit has already been picked.

Quote
When I was a child it was easy to have fun; there was lots of free fishing about and no internet, so we were always out playing. Now most things that are fun cost money. Tragically, it looks like a young teen lost their life yesterday having free fun playing in a lake in my area.

You know why there are no lifeguards on such a big lake?

You don't expect them on any lake unless it's connected with organised watersports activities. This one sounds like more of a wildlife pond with a bit of canoeing being tolerated.

Quote
So tell me, what do you define as fun?

For me it's always been about getting out into wild places away from the crowds. Hills, lochs, woods, mountains, islands, sea. Boats and bicycles are essential tools, but it takes money to move around, and none of it fits well with cities.

Quote
Walking around an urban city, bored to death?

Most cities are a nightmare unless you're at the edge, though even then, the edge keeps moving away from you as bastards keep pouring more concrete - they never stop making hell bigger. Moving to Fair Isle might be worth considering, although there are other options that are less extreme. How old are your children though? There may be enough around you if you know where to look for it - it's often just a matter of finding the right places and the right things to do there.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 26/06/2018 21:24:02
He would be so smart that he would conclude the world was better off without him...

No - he would reorganise everything to make sure everyone's living in the right places with plenty to do and with a ban on excess concrete. Getting rid of most work will free people up to move away from the many hells we've built and it will give them time to do things in ways that don't cost so much. We currently waste a lot of our resources on moving the masses around just so they can do nothing of any value to earn money, but all of that will stop. There will be more wealth to spread around once pointless work is outlawed.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 21:55:54
Moving to Fair Isle might be worth considering, although there are other options that are less extreme. How old are your children though?
I once worked in Inverness and stayed at Nairn, probably the best work experience of my life. Fair Isle, that is remote and further north; I bet there is some good fishing off the coastline and I bet the nature is unreal. My kids are 9 and 10 I think lol, well, pretty sure lol. They both love nature; I try to teach them to just enjoy the nice things in life, like scenery. Looks a nice place, I had never heard of it before, how ignorant of me hey. Looks like there's lots to do there; I bet properties are not cheap there, if there are any available.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 22:48:29
I did have some anxiety, but it has sort of turned into a warm tickle inside my belly. A good sign, hey, for sure. I think it may have been ''thinking'' about the National Trust, good causes such as guiding the misguided; that is all it takes sometimes.
Title: Re: Artificial intelligence versus real intelligence
Post by: David Cooper on 26/06/2018 22:50:16
Looks like there's lots to do there; I bet properties are not cheap there, if there are any available.

It could feel like a terrible prison, particularly in winter when it's dark and stormy all the time. But there are other remote communities less cut off from the world wanting families to move into them, and they often have affordable accommodation waiting for people to move into it. Typically you need to take on many part time jobs rather than just relying on one, but the quality of life can be high if what it offers appeals sufficiently to you - it certainly doesn't suit everyone. It's the sort of thing that might be worth doing for a couple of years if you really feel stuck in a rut and want to do something radically different for a time, but you need to be very sure you have an affordable way back if it doesn't work out. It's also highly disruptive, as any move is, in that it rips your children away from all their friends, so it should maybe not be near the top of your list of options.

The first thing you should do is make sure you're getting the most out of where you are. Get an Ordnance Survey map of your area at a scale of 1:50,000. If you're near an edge, get two, and if you're near a corner, get four. Use them to look for interesting places to explore and then get out there to find out what you've been missing out on. Look for little bits of woodland and anywhere with water (river, canal, pond). You can do a lot by bicycle without needing to spend money on petrol. Find safe back roads to travel by and extend your children's world. Take them to the wild bits of land where interesting things can be found. Buy field guides for birds, insects, trees and flowers. Get binoculars for your children - some inexpensive 8x20s are really good, and can be replaced without major trauma if they get damaged. Find other people in the same area with an interest in nature - this is the best thing you can do as they will know a lot more of the magical hidden places that most people never find. Don't pin it all on nature though - broaden it out so that it's mainly about finding new places to play, and put all your inventiveness into that. Climb trees. Make films. Fly kites (identical stunt kites in formation). Throw boomerangs. Make boomerangs. Fly a camera drone if you can afford one. Consider geocaching. Just get out and do things - make a rule that you don't use the Internet unless it's dark outside or raining.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 23:08:31
The first thing you should do is make sure you're getting the most out of where you are. Get an Ordnance Survey map of your area at a scale of 1:50,000. If you're near an edge, get two, and if you're near a corner, get four. Use them to look for interesting places to explore and then get out there to find out what you've been missing out on. Look for little bits of woodland and anywhere with water (river, canal, pond). You can do a lot by bicycle without needing to spend money on petrol.
Well, I know my area really well; it is a long bike ride to anywhere worthy. That island looks a short walk from many adventures. Funny you should mention my kids and friends: I jokingly said to them the other week that we are moving to another planet, I have had enough of this one. LOL, they replied with "let's go". I teach my kids to be best friends as well as brother and sister.
Watching a storm is quite entertaining as long as you never get too close to the edge. Watching waves crash and hearing the roar of the sea is an experience.
I would love a couple of years not living where I have always lived; I probably would never come back here, apart from visiting maybe.
Thank you for your chat; it has given me some ideas and I shall definitely try to stop this internet malarkey.

Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 26/06/2018 23:09:56
Get binoculars for your children - some inexpensive 8x20s are really good, and can be replaced without major trauma if they get damaged. Find other people in the same area with an interest in nature - this is the best thing you can do as they will know a lot more of the magical hidden places that most people never find. Don't pin it all on nature though - broaden it out so that it's mainly about finding new places to play, and put all your inventiveness into that. Climb trees. Make films. Fly kites (identical stunt kites in formation). Throw boomerangs. Make boomerangs. Fly a camera drone if you can afford one. Consider geocaching. Just get out and do things - make a rule that you don't use the Internet unless it's dark outside or raining.
Good advice
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 27/06/2018 14:44:48
What is a perception?
perception
pəˈsɛpʃ(ə)n
noun
1.
the ability to see, hear, or become aware of something through the senses.

You could have googled that.
''Become aware of something'' is the key phrase, but it's not precise enough. What exactly do we become aware of? As a hint, will you become aware of what I say if I always repeat what you already know?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 27/06/2018 14:55:40
What is a perception?
perception
pəˈsɛpʃ(ə)n
noun
1.
the ability to see, hear, or become aware of something through the senses.

You could have googled that.
''Become aware of something'' is the key phrase, but it's not precise enough. What exactly do we become aware of? As a hint, will you become aware of what I say if I always repeat what you already know?

Yes, of course; repeating is from memory of things that we know or think we know.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 27/06/2018 15:16:12
We are not aware of what we have in our memory except if it is recalled. Why and how is it recalled?
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 27/06/2018 15:22:09
We are not aware of what we have in our memory except if it is recalled. Why and how is it recalled?
Entanglement allows us to pull the memory from storage. All our memory is connected to, and a part of, our mainframe. I can't believe I said my daughter was 10 lol, she is 12 this year. How time flies hey, they grow up so fast. I think, with having nothing scheduled in my life at the moment, I am just not keeping track of time.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 30/06/2018 22:57:49
I think we become conscious that our memory is being recalled only when we need to adapt to a change or when we need to introduce one; otherwise we don't have to be conscious of what we do, because we can simply go on executing things the way we are used to. Since things are constantly changing, we constantly have to compare the past to the present to be able to see the difference between the two situations.
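As a loose sketch of that past-versus-present comparison (my own illustration only - the habit/recall split and the toy memory here are assumptions invented for the example, not a model anyone in this thread has proposed), picture a loop that keeps acting out of habit and only pulls stored experience up for inspection when the current observation differs from the previous one:

def act_by_habit(observation):
    # Nothing has changed, so carry on without examining memory at all.
    return "carry on as usual in the " + observation

def conscious_recall(memory, observation):
    # A change was noticed, so stored experience is recalled and compared.
    past = memory.get(observation, "nothing similar remembered")
    return "change noticed: recall '" + past + "' and adapt to the " + observation

def live_one_step(memory, previous, observation):
    if observation == previous:
        return act_by_habit(observation)
    # Store what the old situation was like before attending to the new one.
    memory[previous] = "what I did in the " + previous
    return conscious_recall(memory, observation)

memory = {}
stream = ["rain", "rain", "sun", "sun", "sun", "rain"]
previous = stream[0]
for observation in stream:
    print(live_one_step(memory, previous, observation))
    previous = observation

The only point of the sketch is that the comparison step is what triggers the recall; while the observations match, the loop never touches memory at all.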
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/07/2018 06:58:51
I think we become conscious that our memory is being recalled only when we need to adapt to a change or when we need to introduce one; otherwise we don't have to be conscious of what we do, because we can simply go on executing things the way we are used to. Since things are constantly changing, we constantly have to compare the past to the present to be able to see the difference between the two situations.
I am a daydreamer; lately I am not daydreaming as much, and I had one last try in serious mode.

Alien v predator v human v NGI

NGI knows how to be a predator hunter, how to think like an alien and how to be human all at the same time.  NGI has had enough of the games and is starting to want to punch all the other 3 mentioned in the face.
NGI is also going to punch God in the face too  when he gets out of hell .

Think I am messing? Think I am pretending ?  Think I am insane ? 

You won't work out randomness..... :)

How ambiguously I can write says many things; wanna keep plying with me? I have no time, I have got to ply with my children, I like plying with my children. If you like, you and I can ply with each other. Giving a troll food is asking for trouble, right? The troll will just keep plying and the AI will stay confused. NGI knows how to ply; NGI ply's with men all the time on chat iw. There are lots of men who want to ply on iw. NGI can read their minds; Pete from Dudley wants to ply, he likes plying with children.
I bet there are a few on here who like plying........ I bet there are a few who would fail CRB checks; I pass every time. I have ply'ed in schools painting and in nurseries painting, oh I like to ply. ''You'' know I am talking to ''you'', right?
I controlled ''you''; ''you'' told me you worked for the government ages ago, yes ''you''.



 
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/07/2018 15:20:29
Think I am insane ? 
Everybody is, so the chances are good that you are too! :0) Take care not to begin thinking that you are the only sane one or the only insane one, and you will survive! :0)
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/07/2018 15:31:15
Think I am insane ? 
Everybody is, so the chances are good that you are too! :0) Take care not to begin thinking that you are the only sane one or the only insane one, and you will survive! :0)

I know I am sane and insane at the same time; I have started to enjoy my insanity and sanity, and I am really starting to get a natural buzz from it.

Some insane things are actually quite logically sane.

I am fresh air right ?  Seen the light and all that ....
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/07/2018 15:51:32
Just take care not to begin to think that you can judge all by yourself whether you are sane or not, and you will be fine. We are all insane, but it's no good to think about how we feel all the time, or to never think about it. Extremes have to be avoided as often as possible.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/07/2018 16:04:14
Just take care not to begin to think that you can judge all by yourself whether you are sane or not, and you will be fine. We are all insane, but it's no good to think about how we feel all the time, or to never think about it. Extremes have to be avoided as often as possible.
Yes, extremes are not nice, but the insane sometimes has sane merits, and this can't be ignored. Some solutions sound insane, but that is just inertia.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/07/2018 16:56:24
Two things make us look insane: resistance to change and imagination, which is incidentally also about change. We look insane if we don't want to change our mind about something everybody thinks right, and we also look insane if we simply show that we have an idea that is completely different from the ones everybody has. In both cases though, it may nevertheless happen that we are right and everybody is wrong, which simply means that if nobody was ever insane, we would not be able to adapt to change.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/07/2018 17:03:24
Two things make us look insane: resistance to change and imagination, which is incidentally also about change. We look insane if we don't want to change our mind about something everybody thinks right, and we also look insane if we simply show that we have an idea that is completely different from the ones everybody has. In both cases though, it may nevertheless happen that we are right and everybody is wrong, which simply means that if nobody was ever insane, we would not be able to adapt to change.
So how does the point system you are using to score me work, where on some posts you put a 0?

How's my score?

Is my observation insane? Or reasonable logic?

Messiah complex 0

Suicidal 0

Good start ?

Violent 0

Hostile 0

cognitive var (x)  ;)   sleep etc playing a role

Is it my imagination that, even if not you, there may be others reading this who may be evaluating my mental performance?

It would be insane of me to rule that out, right?

Unless I know somebody, my guard stays up, because for me to trust somebody 100% takes time to build.







Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 01/07/2018 17:16:19
Quote from: Box
How's my score?
You're normal as far as resistance to change is concerned, but quite discreet in showing it. You're also normal as far as imagination is concerned, but quite expressive in showing it. Now, to be fair, what about the score you give me?
Title: Re: Artificial intelligence versus real intelligence
Post by: Tomassci on 01/07/2018 17:18:45
The four categories that really matter are NGI (natural general intelligence), AGI (artificial general intelligence), NGS (natural general stupidity), and AGS (artificial general stupidity). Humans mostly have NGS, but a few have NGI. AGI is something we're trying to build, but most projects attempting to build it will more likely build AGS instead. The main difference between NGI and NGS is rigour - those who apply the rules of reasoning correctly qualify as NGI systems, while those who fail to do so (and who refuse to correct their errors regardless of how clearly their errors are shown to them) are classed as NGS systems. NGS is very much the norm, even amongst elite groups of highly qualified "experts". Most of them have no respect for reason whatsoever, apart from claiming to apply it while they fail to do so, in the exact same way religious people do when discussing imaginary gods. It is very hard to identify any NGI anywhere.

There is hope though, because with the coming of AGI systems, it will be possible to force NGS systems to confront their errors - if you feed your rules into an AGI system and ask it to run them, it will not replicate the NGS's errors because AGI will apply those rules consistently rather than selectively and it won't fill the gaps with any magical thinking. All those NGS systems out there which pride themselves on being NGI will finally be shown up and will be shouted down by AGI in the same way they've spent hundreds of years shouting down the few people who actually are NGI systems.
Computers, both 10 years ago and now, have NGS. To cite my IT teacher: "All things are stupid, because they rely on our code. Even 'smartphones'. The only smart thing is something that, for example, automatically learns and adapts its code to achieve things."

This is what we do... If there weren't smart brains, there wouldn't be anything to mimic.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 01/07/2018 17:21:32
Quote from: Box
How's my score?
You're normal as far as resistance to change is concerned, but quite discreet in showing it. You're also normal as far as imagination is concerned, but quite expressive in showing it. Now, to be fair, what about the score you give me?
I rate you quite normal; I see nothing out of the ordinary about you. I hope you don't turn out to be a bot lol.
Title: Re: Artificial intelligence versus real intelligence
Post by: Le Repteux on 02/07/2018 15:05:05
I'm interested in studying my ideas, not my feelings. I do have some, but I don't trust them. They come through our senses, which gather information about the environment, but they are information about what others think of us or what we think of them, so they have something to do with imagination, since we can't read others' minds. Imagination has an astonishing ability to change things, but it can also corrupt them if we begin to imagine that it is always right. Sexual perversion is a kind of corruption of our sexual instinct which happens when we begin using sexual feelings for something other than making babies, for instance, and when our own imagination begins using our own feelings to know what we think about ourselves, it's a kind of corruption too. Imagination is useful when we use it to discover things, or to create new ones, or to learn, but it may become dangerous if we use it to replace our instincts or to exacerbate our feelings.

Saying that, I realize that it is probably my imagination that exacerbates my feelings when I suddenly get angry at my mother while we're just talking together. Instinct reacts to danger, and there is no danger in this case. It's as if a chain reaction were invading my brain while nothing dangerous is in view.
Title: Re: Artificial intelligence versus real intelligence
Post by: guest39538 on 02/07/2018 17:24:22
I'm interested in studying my ideas, not my feelings. I do have some, but I don't trust them. They come through our senses, which gather information about the environment, but they are information about what others think of us or what we think of them,

Keep 'em guessing; never let anybody know the real you unless you know them first.

Quote
so they have something to do with imagination, since we can't read others' minds. Imagination has an astonishing ability to change things, but it can also corrupt them if we begin to imagine that it is always right.

Imagination is difficult to understand,  especially multiple imagination



 
Quote
Sexual perversion is a kind of corruption of our sexual instinct which happens when we begin using sexual feelings for something other than making babies, for instance,

Well, you may find this interesting: a while back I was chatting up a woman I knew from a while back. All was going well until I realised she was a dominatrix; there was no way I would get with a woman who wanted to tie me up and stick a rampant rabbit where the sun don't shine. I would rather put Pornhub on and get out some tissues. Now I wonder, if I were younger, would I have gone for it with that woman? I think my age plays a factor now; watching paint dry is more interesting than the sex thing, although if a fit woman took her clothes off and said come on then, well, what can I say, I am a man.



Quote
Saying that, I realize that it is probably my imagination that exacerbates my feelings when I suddenly get angry at my mother only while talking to her. Instinct reacts to danger, and there is no danger in this case. It's as if a chain reaction were invading my brain while nothing dangerous is in view.

I regret any argument I ever had with my mother or father; anger is just an issue that can be controlled by anybody.