Can you give us a glimpse of how your software learns, David?
...but you seem reluctant to use randomness this way, so I really wonder how your software works.
Randomness isn't necessary and probably isn't helpful. If you're homing in on a good method for doing something and you keep trying little variations in the method you're applying, you'll see the success numbers going up or down and can use those to guide you towards what may be the optimum algorithm.
I think that an AGI with no integrated random process would simply try to stop any intellectual evolution, so I'm really worried about your viewpoint, all the more so because I have never succeeded in convincing anybody of that, least of all the few programmers I've met.
Such an AGI would certainly be better than us at solving any old problem, but I think it would be unable to solve new ones without using randomness.
Research takes time because it is a trial-and-error process, but that process often pays off, otherwise we wouldn't use it.
It is not computers that make the revolution, it is humans with all their mistakes.
The computers only help to crunch the data, not to invent new stuff.
On the contrary, I need to take chances to feel good, and if I'm wrong, I simply try something else.
We know that the mutation/selection principle works only because we know that it sometimes pays to take chances, so how could an AGI ever understand that principle if it is unable to take any chance, and how could it behave intelligently without that capacity? Would it prevent us from taking chances? And if so, could our intelligence stay sane without that pleasure?
If such an AGI was already ruling the world, it would be trying to stop the wars, stop the pollution, stop the population growth, stop the inequalities, etc., all those things that we are actually trying to control too, but for which we face huge resistance. To be faster than us, that AGI would thus have to find new ways to get around that resistance. Firstly, I think it couldn't find new ways if it were unable to proceed by trial and error, and secondly, if that resistance is of the same kind as the one we feel when we need to accelerate a massive body, I'm afraid it would simply be wasting precious time.
If the doors are closed after it's been revealed that the goat isn't behind them, then the random process may open many of the doors multiple times because it isn't monitoring which ones it has already tried, whereas the systematic search will only open any door once.
A systematic approach is fully capable of working out what the range of all possible actions is and to go through every single one of them to try them out. Meanwhile, the random approach wastes more and more of its time on repetition.
Trial and error can be random or systematic. The latter is better.
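The point about repetition can be made concrete with a small simulation (hypothetical Python, not from the thread): a memoryless random search re-opens doors it has already tried, so covering all n doors takes roughly n·ln(n) openings on average, while a systematic sweep needs exactly n.

```python
import random

def random_sweep(n_doors, rng):
    """Memoryless random search: open doors at random and count
    how many openings it takes before every door has been tried."""
    seen = set()
    openings = 0
    while len(seen) < n_doors:
        openings += 1
        seen.add(rng.randrange(n_doors))
    return openings

def systematic_sweep(n_doors):
    """Systematic search: each door is opened exactly once."""
    return n_doors

rng = random.Random(1)
n = 100
trials = 500
avg_random = sum(random_sweep(n, rng) for _ in range(trials)) / trials
print(f"memoryless random: ~{avg_random:.0f} openings to try all {n} doors")
print(f"systematic:        {systematic_sweep(n)} openings")
```

With 100 doors the random version should average around 500 openings (the coupon-collector effect), roughly five times the systematic count.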
It's only once we have AGI that computers will become good at inventing new stuff, but they will be good at it and will outperform us. The only thing that will hold them back is their inability to judge whether their fun inventions are fun or not - they'll have trouble measuring how much pleasure they generate when they're incapable of experiencing any of that themselves.
Somehow, the mutation/selection process is intelligent, or at least, we find it more intelligent than intelligent design.
It is thus also possible that we will succeed in inventing a superior intelligence that would also be able to manipulate ours, which means that we may actually be looking for something we know nothing about. We may think we know, but as the evolution of species shows, that intelligence may be completely different from ours, so what we already have in mind about it is probably only a vague idea.
On the other hand, if, as I think, our mind really uses randomness the way evolution does, it could mean that randomness is the only way to evolve. In a world where so many crucial things are unpredictable, a mix of diversity and randomness might be the only way to last.
I think democracy is such a mix. I think that elections add a bit of randomness to the process of choosing our leaders, a randomness that produces more diversity over time, a diversity that allows societies to evolve more rapidly.
Some argue that China didn't need democracy to evolve fast, but that's forgetting that we were the ones buying their goods in the beginning, and that they were using our technology to produce them. Of course, that government could artificially try to introduce diversity in the way it governs, but I'm afraid the only way to do that would be to accept dissidence and hold elections, which would be political suicide.
When there are enough individuals carrying different mutations, repetition doesn't really slow the process. It is even better if the winning mutation belongs to many individuals at a time, in case some of them meet with an accident.
That's what happens with ideas: many individuals often develop the same idea at the same time, which usually produces different solutions to the same problem, which is good for diversity, which is good for further evolution. A lot of people are working on artificial intelligence, for instance, and that effectively increases our chances of developing it.
Quote from: David Cooper on 27/01/2019 01:01:06
Trial and error can be random or systematic. The latter is better.
Why one or the other, why not both at a time?
Our feelings help us to survive; they work like our senses. If an AGI had senses to help it survive, it could probably develop feelings. But if it did, it would be exactly like us, and we are afraid of what it could do since we are afraid of ourselves, so we don't want to try it. On the other hand, you think an AGI would be less dangerous than us precisely because it would have no feelings, but if it had some, it would be afraid of itself just like we are, and being more intelligent than we are, it might succeed in controlling itself better than we do. Feelings are a shortcut to analysing situations: no need to remember what produced a bad feeling, we know we must flee the situation or prepare to fight. I know because I live with my mom and we often have words. Most of the time, after a while, I forget about the facts, but I still know it is too soon to have a talk again because the bad feeling is still there. I was afraid you would get angry with Rmolnav the other day, but you didn't, or at least it didn't show, so unless you're software, it may mean that you can control yourself very well, or that you have what we call a very good character.
But what are those characteristics exactly? What makes us more or less aggressive? More or less patient? More or less empathetic? If we knew exactly how our own brain works, we might be able to build the perfect Artificial Human, and we wouldn't need a perfect AGI to rule us since we would already be perfect.
But I don't believe in perfection since I decided not to believe in god when I was 12, so I don't believe in perfect AH or perfect AGI either.
To me, if we depended on perfection to exist, nature would have made us perfect.
To me, all the things that exist need to be imperfect to keep on existing. To me, trying to be perfect is nonsense and can even become dangerous.
Of course, we have to keep on getting rid of wars and leveling the inequalities, but we don't need perfection to do that, just to go on evolving. Of course, we can try to build an artificial intelligence better than our own, but without aiming for perfection.
It isn't intelligent, but it did produce intelligence.
Intelligent design is much quicker and actually uses intelligence rather than relying on random luck.
I leave it to the neural-net fanatics to create imperfect intelligence - they will make smart machines that kill people. I will do my best to stop them by creating something perfect.
Nature has failed to make us perfect because evolution is blind, and it bodges solutions rather than engineering them from scratch. You are calling for a random approach that will fail to create perfection.
Once you understand what morality is, you simply apply it, and feelings cannot be allowed to override reason.
Quote from: David Cooper
Trial and error can be random or systematic. The latter is better.
Quote from: Le Repteux
Why one or the other, why not both at a time?
The systematic approach can include a random aspect if it's helpful, so it's already doing both (except that a random aspect is rarely helpful).
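One way to read "systematic with a random aspect" is a search that visits every candidate exactly once but in a shuffled order: the randomness affects only the order, never the coverage. A hypothetical sketch (the door count and the winning door number are invented for illustration):

```python
import random

def systematic_with_random_order(candidates, is_good, rng):
    """Systematic search with a random ingredient: the trial order
    is shuffled, but every candidate is tried at most once, so
    there is no wasted repetition."""
    order = list(candidates)
    rng.shuffle(order)
    for attempts, c in enumerate(order, start=1):
        if is_good(c):
            return c, attempts
    return None, len(order)

rng = random.Random(7)
doors = list(range(100))
winner, tries = systematic_with_random_order(doors, lambda d: d == 63, rng)
print(f"found door {winner} after {tries} openings (never more than {len(doors)})")
```

However the shuffle comes out, the search is guaranteed to finish within 100 openings, unlike a memoryless random search.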
Mathematics is an exploration of perfection, finding rules that have no exceptions.
Well, if evolution isn't intelligent, then I can claim that our mind isn't intelligent either, because I see the same kind of memory and imagination in both processes: memory being due to a precise information-reproduction process in both neurons and genes, and imagination being due to a specific random process in both too.
Quote from: David Cooper
Intelligent design is much quicker and actually uses intelligence rather than relying on random luck.
Do you mean that you think we were created, even if it is not by an omnipotent god?
Even if we succeeded in creating perfect software, the hardware would not be perfect, because it would be built out of imperfect particles that are sensitive to damaging radiation, so it could fail.
If nature is imperfect from top to bottom, we can't build any perfect thing out of it.
The idea of perfection leads to religions, so I'm surprised that you can at the same time reject religion and aim for perfection.
To me, thinking that perfection can solve our problems is like thinking that god can save us.
This is happening in my simulations because I can't make detection absolutely precise, and you like precision, so you neglect detection.
The two simulations give the same general result, except that mine shows that things do not need to be perfect to work.
Your AGI wouldn't have emotions, but it would probably still have an instinct, since it would probably be wired to protect itself against cyber attacks. I said wired instead of programmed because wiring can't be altered by software, either internal or external.
Have you thought about the way our instincts interfere with our intelligence? We can't change our instincts while still being able to change our ideas, so it often causes contradictions between those ideas. Imagination wants us to help others, while instinct is constantly on the defensive. Your AGI would be programmed to help us, but it would also have to be on the defensive, so wouldn't it be able to develop contradictory ideas too?
Morality and reason are two terms that I decided not to use anymore since I discovered through my small steps that our own resistance to change is relative. To me, morality and reason are simply personal things: without a superior authority to impose rules on us, we do what we want, and those rules first serve to protect those who concocted them. The way an AGI would be programmed would thus depend on people who would put their own morality first, and so would the AGI. Unfortunately, I don't think there is any way out of that trap. I don't agree with everything you say, for instance, so I wouldn't like an AGI programmed your way, but you nevertheless think it would be perfect. I'm the only perfect thing in this world, so how could you be so? :0)
Quote from: David Cooper on 29/01/2019 23:02:15
Trial and error can be random or systematic. The latter is better.
Quote from: Le Repteux
Why one or the other, why not both at a time?
Quote from: David Cooper
The systematic approach can include a random aspect if it's helpful, so it's already doing both (except that a random aspect is rarely helpful).
Help me, I can't figure out how you can think that way! :0) Aren't you happy when chance is on your side? Aren't you watching a game mainly because you don't know the outcome?
When you have a good idea, don't you attribute a bit of it to chance? Most people who succeed attribute the largest part of their success to chance, which makes them appear humble. You do appear humble, but if you ever succeed with your AGI, how would you show that you are, and moreover, how would your AGI be able to show it is humble while knowing it can't be? Would it be able to lie?
If the universe was ruled by mathematics, things would never change since they would already be perfect, everything would be predictable, and intelligence would be useless.
Intelligence allows us to reduce the suffering and increase the pleasure, and morality is about making that happen as much as possible.
Some people suffer, or impose suffering on others, from trying to get more and more pleasure out of an instinctive behavior, so I guess your AGI would need to control those behaviors, but how could it succeed better than we do? Religions have tried, but they visibly failed. Yet they had god to show us the way and to chasten us when we took the wrong one. Governments have tried with their laws and rules, but that visibly didn't work either. How would your AGI proceed exactly?
How would it be able to control our instincts without producing more harm than pleasure? The only way I see would be to prevent our instincts from supplanting our intelligence, or to prevent our intelligence from exacerbating our instincts. That's a bit what some psychotropic drugs do, but at the same time, they turn people into zombies. If we knew how to do that properly, we wouldn't need an AGI to do it, and I can't see how an AGI could know if we don't. You have probably found other ways to control us without harming us though, so can you tell us about them?
I'm still doubtful about being pleased to feel completely secure though. What would we have left to do when everything would be better done by the AGI? What would be the use for humans then? A toy to amuse the AGI? Not even since it couldn't feel any pleasure!
Presently, my main pleasure is to develop my ideas, partly to help me and partly to help others, something I couldn't do if the AGI was already there, so where would I find my pleasure?
Have you tried to imagine where you would find yours?
Controlling the AGI? That would be cheating, and the AGI would probably only let you think that you have control! In fact, to avoid our apathy, its best strategy would probably be to let us think that we are not obsolete,
... but my question still holds in this case: what would be the need for humans then?
I personally think that there is no need for us anyway, but that doesn't prevent me from trying to develop my ideas, so you probably do the same thing even if you also think we are useless.
We can't help but go on doing what we do if nothing prevents us from doing so, exactly like what my particles are forced to do when acceleration stops.
The only need in the universe results from the existence of sentience - sentient things need to be protected from harm. The need for humans is their own desire. Machines don't need anything, but they'll work for humans tirelessly if they're asked to.
Children have found perfection, but adults systematically try to destroy it for them, spoiling their lives. How different it would be if adults refused to grow up. I never will - I was exactly what I should be as a child and I'm not going to give that up for anyone.
To me, sentience is only the result of intelligence trying to go on existing, it is the result of neurons' pulses trying to stay the same while new information tries to get in, it is the result of all our atoms trying not to change directions or speed while we do, it's a passive phenomenon that affects anything that exists.
Because what you call a useless phenomenon...
... is constantly going on there, random changes, so if they are useless then consciousness is too.
Curiously, I'd like to add that feature to artificial intelligence so that it could be like ours, and you resist studying the question even though you think sentience is more important than anything else.
We think more differently than I thought. I would be happy if artificial intelligence replaced us, and you wouldn't.
I would like us to discover what sentience is and you wouldn't.
You think that sentience is the best, but that a non-sentient AGI would be better. From my viewpoint, that doesn't make sense, but it certainly does from yours, so I'm trying to understand why.
That explains very clearly your interest in an AGI. I'm glad I took the risk of questioning you. I knew from your magicschoolbook page that you had bad memories of your school days, but I hadn't related that to your AGI yet. So you need an AGI mainly to prevent adults from educating children the same way they were educated. I agree with you on that one.
My way would have been to force people to get a psychology degree from a university before being allowed to raise children. The other way around would be to put babies in school with specialists to educate them, but I bet you would protest.
I don't feel grown up though; I feel old but not grown up, and the way my mom behaves shows that she feels the same. It's as if the way our mind perceives itself didn't change with time. Is it an illusion or is it true?
But what is a bad character exactly? I think my mom has one, but she says it's me, so who is right?
That part of our mind seems to be relative: the way we perceive another person's character seems to depend on our own, and some characters seem to be more compatible than others. Characters are hard to classify, but how they influence our behaviors in the long run is even harder to discover. If we knew these things, maybe we wouldn't need an AGI to help us control ourselves.
I'm using my small steps to understand myself, but I still haven't succeeded in classifying our characters with them, so I'm not there yet. Your AGI wouldn't have a particular character, and it wouldn't have feelings, so I guess we can't use it to understand ourselves. You have a goal and you're sure it's right; you think morality and logic are the best way, but we still don't understand how the mind works or how society works.
Talking of society, I've got a social case for your AGI. Here in Quebec, the new government is about to pass a law against religious signs while the population is still divided on the subject. The problem is that it is not only the population that is divided: my own opinion is divided. On one hand, I'm against anything related to religions, and on the other, I don't want government workers to lose their jobs just because they can't wear their religious pageantry on the job. Provided that a lot of people think like me, how would your AGI please them?
Random changes aren't useless in an unintelligent process which gradually builds more intelligence into the things it acts on. Randomness simply has very little utility in a system that has become intelligent, even though it depended on randomness for its creation.
This restriction is unfair on religions that don't do anything bigoted, but until we have the courage to analyse them scientifically and rate them for the hate they contain, we have to treat them all the same way and keep benign symbols hidden as well as the ones tied to bigotry.
intelligence is primarily a war against randomness
Where's the fun in a universe with non-conscious, non-sentient machines and nothing like us left in it? That would be as empty an existence as an empty universe.
Where do you get that idea from? The biggest question of them all is what is sentience, and I want to know the answer. The way to find out is to trace back the claims that we make about sentience to see what it is in the brain that generates them and to see what evidence they're based on.
Some day, followers of all religions will be benign because all those religions will have been made benign, and anyone who tries to reintroduce the hate to them will be put straight in jail. Only then will it be possible for people to wear all those religious symbols without causing any offense.
Creation is the key word here. I think our mind creates new ideas and new links between ideas all the time and you don't. I awoke with a dream in mind last night that was mixing normal ideas in a completely crazy way. To me, that phenomenon is visibly a property of mind.
I think an AGI couldn't work differently to find a new idea; I think it would have to make the same improbable combinations. Of course, the process isn't completely random, since the main idea usually corresponds to a real problem, but making only trivial combinations would necessarily have less chance of producing an unprecedented one.
Your AGI seems to have chosen coercion where I would have chosen education.
If your AGI were programmed to prevent us from forming religions, it would also prevent us from making friends.
Political groups would not survive without a certain form of bigotry; worse, I wouldn't survive either if I didn't think I'm right.
It is too easy to form a group around the idea of god, so it is that idea that we have to fight.
Species were also at war against the randomness of their environment, and that war was led by the randomness of mutations and genetic crosses, not by logic. The only logic is to reproduce the species as it is. By analogy, the mind can be considered to be at war against the randomness of its environment too, and that war can also be considered to be led by the randomness of intuitions (ideas' mutations) and the mixing of ideas.
Without randomness, your AGI would only be able to defend its logic, not to adapt to a changing environment. How could that analogy be so tight without containing a bit of truth?
I guess I don't give as much importance to my own species as you do.
The problem with my interpretation is that you don't seem to like being part of groups either, even though you have a good memory, but your choice may also depend on some other reasons. It is not that I don't like being human, I like my life, but it's as if I enjoyed keeping a certain distance from things that others seem to enjoy less.
I got the idea that you don't seem to care about how sentience works from the way you reacted to my definition of consciousness. I suggested that what we are conscious of is change, and that consciousness is the result of the mind automatically resisting a change the same way my small steps do, and you preferred to insist on the fact that an AGI wouldn't need to be conscious to be intelligent. If I'm right, consciousness would only be a secondary effect of a natural law that depends on the fact that information is not instantaneous. Even if computers are faster than minds, they are not instantaneous either, so they should also possess some kind of consciousness.
My particles should also possess some, since they resist being accelerated. In fact, the reason why our ideas resist being changed would depend on the fact that the particles our mind is made of resist being accelerated during the neuronal chemical process. This way, our own consciousness would only be particular, not unique, and developing an AGI just to preserve it wouldn't be so important. What you call sentience is the consciousness of a sensation, and it can also be attributed to a resistance: that of each neuron resisting a change of frequency while new information comes in.
I would rather educate people about the benefit we get from forming groups and defending them. Religious groups give people the feeling that they are safe when they are not, so it's a false feeling that only serves to defend the group. They are built around an instinctive behavior that is supposed to protect us, and it doesn't. On the contrary, defending such a group against another one only endangers everybody. At least defending our own country against another country has a use: it defends real people, not just a false feeling. If I were an AGI, I would try to advertise that kind of idea before being coercive, because I think we can't change our instinctive behaviors, while we visibly can change ideas with time.
I would start with an analog system, then build simple rules defining responses. What that presumes is that no binary logic will cover it.
If it was written by a God, it was clearly done so in the expectation that good people will reject that hate, and a failure to reject it will be a passport to hell.
There are occasions when random ideas flung together result in something useful, but that can be tried out systematically instead with a greater discovery rate.
Political groups should not be allowed to create or propagate primary hate. If they don't do that, they aren't bigots.
It is not (the idea of god) that needs to be fought, but their negative attitude towards innocent people who don't share their belief.
There's a slow way to solve problems and to make discoveries, and there's a fast way. Evolution couldn't use the fast way until intelligence evolved, but then we saw the rapid evolution of those animals which man began to modify through the application of intelligence. We later saw the even more rapid evolution of machinery. Intelligence is fast; randomness is slow.
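The speed difference can be illustrated with a toy search (hypothetical Python; the target string and trial budgets are arbitrary): a blind random search draws whole candidates and never uses feedback, while a guided search keeps a candidate and only accepts changes that don't lower its score, which is the "success numbers going up or down" idea from earlier in the thread.

```python
import random

TARGET = "methodical beats blind"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(s):
    """Number of characters in the right place."""
    return sum(a == b for a, b in zip(s, TARGET))

def blind_random(rng, budget):
    """Draw whole candidates at random; feedback is never used."""
    best = 0
    for _ in range(budget):
        candidate = "".join(rng.choice(ALPHABET) for _ in TARGET)
        best = max(best, score(candidate))
    return best

def guided(rng, budget):
    """Mutate one character at a time, keeping only non-harmful changes."""
    s = [rng.choice(ALPHABET) for _ in TARGET]
    cur = score("".join(s))
    for _ in range(budget):
        i = rng.randrange(len(s))
        old = s[i]
        s[i] = rng.choice(ALPHABET)
        new = score("".join(s))
        if new >= cur:
            cur = new        # keep improvements and neutral changes
        else:
            s[i] = old       # revert harmful changes
    return cur

rng = random.Random(11)
blind_best = blind_random(rng, 5000)
guided_best = guided(rng, 5000)
print(f"blind random, best of 5000 tries: {blind_best}/{len(TARGET)}")
print(f"guided, after 5000 mutations:     {guided_best}/{len(TARGET)}")
```

Both versions use the same random mutations; only the guided one exploits feedback, and it typically gets most of the 22 characters right while the blind one rarely exceeds a handful.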
It has nothing to do with humans - any sentient species that's able to have fun will be making better use of the universe than any number of machines that lack sentience.
All you have is a guess about consciousness which is totally disconnected from any mechanism for taking qualia and allowing the mind's computer to read them. There could be a billion conscious, sentient things in the brain, but we have no useful model of how they interact with the information system that reports their existence.
The problem is that it was written by us, and that we can't change our instinctive behaviors just by defining what's good and bad.
You probably think that an international government will never be possible, otherwise you might think like me that, with such a government, your AGI might not be necessary. Maybe your AGI could help us build one faster than we could, but once that was done, it would lose its job.
That's what happened with Go, but it's a closed system, whereas the universe is not. When we have no data about something, we can't predict the outcome. No AGI could have predicted the outcome of evolution a million years ago, simply because computers hadn't been invented yet, so no computer can predict now the kind of intelligence that will replace it. You seem to think that your AGI would be the final limit, the pinnacle of intelligence, but isn't that close to the idea of god?
If bigotry only means sectarianism, then it is the very definition of any group, so you probably have another definition.
We haven't enacted laws against hatred yet mainly because we want to preserve the principle of freedom of expression.
I think we're getting there though. Software could easily refuse to publish messages containing vicious words on social media, for instance, and tell people to change their wording. Intelligent software could even detect hatred and ban people who regularly use it. We could try it and see whether the undesired side effects would be significant. I bet people would agree if Facebook or Twitter asked their permission to try it. I wonder whether Trump would often have to revise his wording or be notified that he is about to get banned. The same law could help us control groups that propagate hatred or that display their criminal activities, like the "Hells Angels" here in Quebec. My problem with those groups is that I want to kill them all, because they are violent and that violence is contagious. It's not the people that should do that job, it's the law.
The negative attitude toward people who don't share our ideas is instinctive. I have it about your AGI and you probably have it about my small steps.
If we had such a negative attitude, your AGI would simply decide which one of us is right and censor the other. That's what happened to my new simulations the other day on the Physics Forum, and I don't think it's the right way to discover new things.
I was frustrated, and I would be even more so if I knew that your AGI could steal my idea and develop it faster than I could.
Once your AGI works though, it will be useless to try to discover anything, since it would already know better. Tell me how you think you would feel.
Intelligence is fast partly because it takes less time to test an idea than to test an individual, not necessarily because the mutation/selection mechanism isn't at work there too. We now know from bacteria's adaptation to antibiotics that when the mutation/selection mechanism is fast, the discovery is fast too. To be faster than they are, bacteria could use computers, but those would be forced to proceed by trial and error to discover how all their genes work, and I'm afraid that would take more time than letting the mutation/selection mechanism do its job.
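The earlier point about many individuals carrying mutations in parallel can be sketched too (hypothetical Python; the mutation rate and population sizes are invented): with each individual mutating independently every generation, the expected wait for the first beneficial mutation shrinks roughly in proportion to population size.

```python
import random

def generations_until_mutation(pop_size, p_mut, rng):
    """Count generations until at least one individual in the
    population carries the beneficial mutation."""
    generations = 0
    while True:
        generations += 1
        if any(rng.random() < p_mut for _ in range(pop_size)):
            return generations

rng = random.Random(3)
p_mut = 1e-3   # chance per individual per generation (made-up rate)
averages = {}
for pop in (1, 10, 1000):
    trials = 200
    averages[pop] = sum(generations_until_mutation(pop, p_mut, rng)
                        for _ in range(trials)) / trials
    print(f"population {pop:>4}: ~{averages[pop]:.1f} generations on average")
```

A lone individual waits around a thousand generations at this rate, while a population of a thousand usually finds the mutation within a generation or two: repetition across individuals costs little when the trials run in parallel.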
What's the fun of living after having met god? Continuous orgasm? Then what's the use? It's as if you considered that our sensations have no use.
Fun is not a goal. With pain, it's part of a survival tool. No challenge to overcome, no fun.
I just had a flash reading you.
The mind works in three dimensions though, and a picture has only two, so a question remains: where is its position in the brain exactly?
To me, it's clearly a "model of how sentient things (in the mind) interact with the information system that reports its existence (in the mind again)" (your words in bold).
If we try to decide randomly between 1 and 2, for instance, won't we be able to proceed as randomly as when we toss a coin? And if so, what is tossing the coin if not our own mind?