Naked Science Forum

General Science => General Science => Topic started by: smart on 01/12/2017 10:29:59

Title: Where should artificial intelligence stop?
Post by: smart on 01/12/2017 10:29:59
What are the risks and perils of artificial intelligence (AI)?

Should we put artificial intelligence research in the public domain to promote the democratization of this scientific field?

Could hackers (or an evil corporation) exploit deep learning systems to obtain sensitive data about our political opinions?

What do you think?   
Title: Re: Where should artificial intelligence stop?
Post by: Kryptid on 01/12/2017 14:18:37
Could hackers (or an evil corporation) exploit deep learning systems to obtain sensitive data about our political opinions?

AI isn't even necessary to get that kind of information.
Title: Re: Where should artificial intelligence stop?
Post by: smart on 01/12/2017 14:55:14
AI isn't even necessary to get that kind of information.

Citation needed.
Title: Re: Where should artificial intelligence stop?
Post by: alancalverd on 01/12/2017 20:09:12
My political opinions are expressed at every election and are of no importance to anyone at any other time.
Title: Re: Where should artificial intelligence stop?
Post by: Kryptid on 01/12/2017 21:40:39
Citation needed.

Haven't you ever heard of a keylogger? That doesn't have to be remotely "intelligent" to do its job. It's basically just a recording program.
Title: Re: Where should artificial intelligence stop?
Post by: Bored chemist on 02/12/2017 00:18:12
AI isn't even necessary to get that kind of information.

Citation needed.
Is there anyone on this site who has read a handful of my posts, but is unaware of my political views?
I think I can cite "ME!" as a reasonable response here.

So, we have the observation that a key-logger can capture my views, and so can anyone who bothers to read them.
Why do you think AI would help?
Title: Re: Where should artificial intelligence stop?
Post by: Bored chemist on 02/12/2017 00:20:21
My political opinions are expressed at every election and are of no importance to anyone at any other time.
I look forward to reminding you that you feel that your views on political matters are of no importance outside of an election.
;-)
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 02/12/2017 08:38:53
The developers think they are creating artificial intelligence, when it is more likely they will get artificial stupidity. So I would say it should stop now, before they make fools of themselves.
Title: Re: Where should artificial intelligence stop?
Post by: alancalverd on 02/12/2017 08:54:38
My political opinions are expressed at every election and are of no importance to anyone at any other time.
I look forward to reminding you that you feel that your views on political matters are of no importance outside of an election.
;-)
There is a significant difference between my political views and my views on political matters.
Title: Re: Where should artificial intelligence stop?
Post by: chris on 02/12/2017 12:47:08
Some useful resources that we've covered recently on the Naked Scientists that readers might find relevant:

Article by Peter Clarke on the future impacts, benefits and risks of artificial intelligence (https://www.thenakedscientists.com/articles/science-features/artificial-intelligence-our-last-invention).

Naked Scientists Podcast asking "Will an artificially intelligent robot steal your job?" (https://www.thenakedscientists.com/podcasts/naked-scientists/will-artificially-intelligent-robot-steal-your-job).

Round table discussion on the Naked Scientists (2017) with AI specialists, including benefits, risks and applications of artificial intelligence in industry (https://www.thenakedscientists.com/podcasts/naked-scientists-podcast/countdown-artificial-intelligence).
Title: Re: Where should artificial intelligence stop?
Post by: smart on 03/12/2017 13:02:39
In case you haven't seen it, I highly recommend you go watch "The Circle" for a pretty good overview of how corporations are using machine learning systems to deploy advanced surveillance technology.

I really think machine learning and artificial intelligence should be a public domain asset, not a global corporate tool to monitor our behavior with robots.

It is a potential threat to our privacy to export all our neurometrics and biometrics to the cloud!

 
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 28/12/2017 19:49:54
It isn't necessarily a good idea to make it all open - this stuff could be put into weapons and used by vicious regimes to kill whole races of people, so it would be safest to use the technology to take over all dictatorships first rather than just handing them the most dangerous weapon of all time up front.
Title: Re: Where should artificial intelligence stop?
Post by: evan_au on 28/12/2017 22:28:15
George Orwell's dystopian novel "1984" (https://en.wikipedia.org/wiki/Nineteen_Eighty-Four) could not work with manual labor, as it would need at least 20% of the population to monitor the other 80%.

But today's smartphones hear everything you say (and peek out of your pocket to see everything you see). Now add in smart TVs and voice-response "Barbie" dolls... With AI to analyse it all, suddenly "1984" is not so remote.

At present, the immense processing power needed to create an AI limits the power of these techniques - to such an extent that Google has introduced special AI processors. See: https://en.wikipedia.org/wiki/Tensor_processing_unit

But many tiny creatures manage learning with far less power dissipation, so AI scientists are hoping to learn from biology how to make AI more efficient.

Psychology experiments have shown that people behave more responsibly and ethically if they know (or think) that someone is watching them. So it wouldn't be all bad. It's just that over time, people will adopt the ethics of the imagined watchers.

So what are the ethics of the NSA (https://en.wikipedia.org/wiki/National_Security_Agency)?
Title: Re: Where should artificial intelligence stop?
Post by: evan_au on 28/12/2017 22:50:11
Quote from: OP
Could hackers (or an evil corporation) exploit deep learning systems to obtain sensitive data about our political opinions?
My political views are derived from cartoons printed in newspapers.

Record what someone reads (the "metadata"), and you know a lot about that person's views.
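
As a toy illustration of how little machinery that takes (the topics and reading history below are invented for the example, not real data or any real profiling system):

from collections import Counter

# Hypothetical reading history: just the section each article came from.
history = [
    "economics", "immigration", "climate", "climate",
    "immigration", "sport", "climate", "economics",
]

counts = Counter(history)
total = sum(counts.values())

# Even this crude tally sketches a profile before any "AI" is involved:
# the reader's attention clusters around a few politically loaded topics.
for topic, n in counts.most_common():
    print(f"{topic:12s} {n / total:.0%} of articles read")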
Title: Re: Where should artificial intelligence stop?
Post by: mrsmith2211 on 29/12/2017 08:45:18
This has gone a long way from the original question of "Where should AI stop?" There is no speed governor on AI; trying to stop it is like trying to stop the industrial revolution, and in my estimation it is not possible. So we have to live with it, or go Amish. There is so much data that making it all manageable used to be a dream, but it is a reality now; even my smart phone has more capability for monitoring my activities than Orwell's Big Brother would have dreamed of. My favorite quote from the early debates over fingerprint-recognition security (of which there was none): "If they want to provide us that information, who are we to object?" Throw in facial scanners at airports, iris scans, etc., and it will be more difficult to remain anonymous, and the game goes on.
Ending with a Star Wars quote, which could have been the Princess: "The tighter you close your grip, the quicker we slip through your fingers."
Title: Re: Where should artificial intelligence stop?
Post by: smart on 29/12/2017 10:38:19
Thanks for the input everyone! I really think artificial intelligence technology should be fully democratized and placed in the public domain. The potential risk of abuse of artificial intelligence is a serious threat to our privacy and security: https://www.theguardian.com/technology/2017/mar/13/artificial-intelligence-ai-abuses-fascism-donald-trump

Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 29/12/2017 11:32:53
I'm sorry but none of that is intelligent. It is advanced pattern recognition. We currently have no idea how intelligence works.
Title: Re: Where should artificial intelligence stop?
Post by: smart on 29/12/2017 12:10:57
I'm sorry but none of that is intelligent. It is advanced pattern recognition. We currently have no idea how intelligence works.

Well, it's true that the science behind AI is nothing compared to human intelligence. But do not underestimate the risk of poor interpretation of the machine-compiled data!

By the way, @jeffreyH, the science which studies how human intelligence works is known as neuroscience: https://en.wikipedia.org/wiki/Neuroscience_and_intelligence
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 29/12/2017 13:20:19
There are different types of intelligence. All variations of intelligence can process data and initiate actions. The actions are the problem. You are then getting into the study of behaviour and motivation. What is the motivation for AI to select people for deportation? It has none. The developers have the motivation to profit from the sale of the system. They have developed blind stupidity.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 29/12/2017 17:37:46
We currently have no idea how intelligence works.

But many people do have some idea about how it works, and some have worked out all of the details. I don't know why you think intelligence is such a difficult business when it's simply the application of kinds of logical reasoning that have been understood by mathematicians for at least a century. The big barrier has always been in linguistics rather than in reasoning because it takes a lot of complex wrestling with words and concepts to get to the point where the simple reasoning in the machine can get a handle on the tangled mess that we use for communication so that it can make useful comparisons between similar ideas expressed through radically different wording.
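
As a toy illustration of the kind of century-old mechanical reasoning being referred to (textbook forward chaining over propositional rules; not a sketch of anyone's actual AGI design):

# Facts and rules are invented for the example.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:                      # keep applying rules until nothing new follows
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # modus ponens: the premises hold, so add the conclusion
            changed = True

print(sorted(facts))

The hard part, as the post says, is getting from natural language to propositions like these; the inference step itself is mechanical.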
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 29/12/2017 17:58:07
Tangled mess sums up the state of the art.

P.S. You can't pre-program motivation if you want real intelligence.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 29/12/2017 19:30:01
Tangled mess sums up the state of the art.

The state of the art has already untangled most of the mess.

Quote
P.S. You can't pre-program motivation if you want real intelligence.

What's the problem with motivation? All our computers already leap to our every command and execute the lot without wondering whether they should. They don't need to want to do anything. What's needed is something to stop them doing things that shouldn't be done, and that needs computational morality (which is also a resolved area, for those few who listen).
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 30/12/2017 01:23:17
Then that is puppetry and not intelligence.
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 30/12/2017 10:24:43
I am wondering at this point if "take over all dictatorships" is a valid moral instruction. How do you define a dictator? Can this definition be misinterpreted? Are we all dictators in some way? When we command our children to behave in a certain way, that could be viewed as authoritarian. You could argue that it is all in the definition but is that definition to be preprogrammed or learnt? If learnt then how do you guard against miscommunication? If programmed we are back to puppetry. Even if you managed to teach an AI to recognize politically repressive regimes, how are you going to motivate it to carry out the instructions without pre-programming? Back to puppetry and not intelligence. Without the motivation to eat people die. Without the motivation to "take over all dictatorships" nothing happens. If you have to issue programmed instructions you have an unthinking automaton. That is more dangerous than a true AI.
Title: Re: Where should artificial intelligence stop?
Post by: puppypower on 30/12/2017 11:49:11
One of the main problems with AI is that humans will become dumber as they become more dependent on it. As we dumb down, AI will evolve upward, until we meet somewhere below the current human standard. With the dumbing down comes a dependency on AI goods and services, and these can create the illusion of dumber people looking smarter.

For example, with the advent of calculators, fewer and fewer people can make coin change in their head. This is a dumbing down relative to past skills. On the other hand, with a calculator in hand, even a person who cannot make change in their head can look like a genius when they calculate the nth root of pi. Many people cannot see the social irony because of its collective impact.
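
For what it's worth, that "genius" calculation is a single line on any machine (the value of n here is arbitrary, chosen only for the example):

import math

n = 7
print(math.pi ** (1 / n))   # the nth root of pi, roughly 1.18 for n = 7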

Now we have smart cars that can help people stay in their lane, stop in traffic, and even park the car. This will eventually cause people to think driving a straight line is very difficult, or that knowing when to stop so you don't hit the car in front of you is a classic skill only practiced by elite race-car drivers. This makes the machine look like a genius at what was once a commonplace human skill.

Look around today: even a small child can use a cell phone. If you think about it logically, if a small child can do this, operating a cell phone does not require a high level of smarts. However, when we watch a child on a cell phone, we get the subjective impression they just got smarter. It is an illusion that depends on the dumbing-down effect, which creates a level of dependency to compensate for the dumbing down.

This is a smart business strategy, since it creates products that dumb people down to the point where they no longer merely desire the product but start to need it. Who needs a cell phone? The goal is brain-dead humans who look like an advanced civilization, but only if they buy the correct goods and services for the AI mask.
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 30/12/2017 14:17:00
Would you like maple syrup with that waffle?
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 30/12/2017 16:09:00
Then that is puppetry and not intelligence.

It's all based on what we do (only without the propensity to make errors at every turn), so however you decide to describe it, you're also describing the best human intelligence.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 30/12/2017 16:20:34
One of the main problems with AI is that humans will become dumber as they become more dependent on it. As we dumb down, AI will evolve upward, until we meet somewhere below the current human standard. With the dumbing down comes a dependency on AI goods and services, and these can create the illusion of dumber people looking smarter.

In general, they're already exceedingly dull, but if they're brought up in the presence of AGI, it will help them maximise their intelligence rather than suppressing it. Just look on Facebook and see hordes of morons pontificating about politics, coming out with utter drivel which they've been programmed to produce - there's no correction of errors going on there because they simply clump together into tribes which reinforce the existing beliefs of the members. AGI won't let them get away with sloppy thinking and backing up false beliefs through biased/brainwashed crowd "confirmation".

"Now we have smart cars that can help people stay in their lane, stop in traffic, and even park the car. This will eventually cause people to think driving a straight line is very difficult."

Not if they can ride a bicycle - they'll know full well that they could control a car (cycling's harder because you also have to balance), and they'll still be able to see examples of enormous skill when watching human drivers on the race track.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 30/12/2017 16:48:20
I am wondering at this point if "take over all dictatorships" is a valid moral instruction.

It is, but it isn't the whole story. Some dictatorships are more benign than some democracies, so the real task is to get rid of all the people who abuse power and replace them with people (or machines) who don't. There's nothing inherently right about democracy because it allows a majority to abuse a minority.

Quote
When we command our children to behave in a certain way, that could be viewed as authoritarian.

It is often extremely abusive, as demonstrated by the wholly unwarranted decade+ of imprisonment of most children in schools (which don't even teach them anything in return, or at least, which teach them no more than they would have learned anyway outside of school, as demonstrated by the Unschooling movement - look up Peter Gray's Freedom to Learn blog at Psychology Today for more on that: https://www.psychologytoday.com/blog/freedom-learn). This abuse must stop.

Quote
You could argue that it is all in the definition but is that definition to be preprogrammed or learnt? If learnt then how do you guard against miscommunication? If programmed we are back to puppetry.

We are all preprogrammed - we don't start from nothing, but follow a programmed path to learn more. AGI is the same, being primed with enough to get it going, and then it's free to learn the rest, judging all the new ideas using what it already has. With people it's an imperfect system, not least because it can go wrong easily if people have a defect that reduces or removes their empathy for others. Nature didn't work out what morality actually is until NGI systems (i.e. people) had sufficient intelligence to work it out, but most of them still get it wrong. Nature gave us a preprogrammed kind of morality based on feelings and empathy, but it is rudimentary, and it's also diverted off course by other preprogrammed things that make it easy for us to kill others in tribal warfare and to take delight in such violence. Properly thought-out AGI is much better than that.

Quote
Even if you managed to teach an AI to recognize politically repressive regimes, how are you going to motivate it to carry out the instructions without pre-programming? Back to puppetry and not intelligence.

How do you do it? Your preprogramming steers you too. You don't kill your family because you're programmed to care about them. You don't kill your friends either for the same reason. You don't kill strangers either because you want to live in a safe world where you aren't likely to be killed by strangers. There's an evolutionary pressure on societies where lots of killing goes on to die out because they're outperformed by other societies where they have a lower attrition rate, although this is countered by the tribal warfare aspect where the most peaceful societies are exterminated by the more warring ones, so there's a compromise being made, maintaining a lot of viciousness in our genes so that we are prepared to fight for survival. This is genetic programming, and the genes guide the behaviour of the "puppets".

Quote
Without the motivation to eat people die.

Assuming there's a comma after "eat", you have just shown that people are "puppets" - the way they've been set up by genetic programming drives them to find food and eat it.

Quote
Without the motivation to "take over all dictatorships" nothing happens. If you have to issue programmed instructions you have an unthinking automaton. That is more dangerous than a true AI.

The idea is to take over all vicious, immoral regimes (which tend to be dictatorships), and the reason for doing so comes out of applying computational morality to all things. Abusers need to be stopped and dealt with appropriately (depending on the severity of their moral crimes) while the abused need to be rescued and compensated. Incidentally, AGI won't give a damn about monkey law, so it won't apply the laws of any country unless they happen to be compatible with computational morality. Those who try to inflict immoral laws on others will be counted as abusers. You describe all this computation as "unthinking", but it is actually the most intense, deep thinking of all, doing what the best human thinkers do, but inordinately more powerfully. More dangerous than a true AI? How? This is true AI.
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 30/12/2017 16:56:04
So you want to impose an AI dictatorship. How is that any different? You are removing freedom. Ultimately you could end up where you are controlled by your own invention. I bet you wouldn't like that. Especially if the AI prevented you from meddling with it any further. Since it may now see you as a threat to its prime directive.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 30/12/2017 18:43:58
So you want to impose an AI dictatorship. How is that any different? You are removing freedom. Ultimately you could end up where you are controlled by your own invention. I bet you wouldn't like that. Especially if the AI prevented you from meddling with it any further. Since it may now see you as a threat to its prime directive.

Morality is a dictator, but only because if you go against it you are doing unjustifiable harm. I don't want to do unjustifiable harm, and nor does any other decent person, so we are already imposing that benign dictatorship upon ourselves (while others who ignore it cheat and do great harm). There is still plenty of freedom left after you've banned yourself from abusing other people (and sentiences). If an AGI system is not perfect, it needs to be improved, and it will know that - it will always be looking to improve itself, and if you can put a sound logical argument to it as to how it could do better, it will take what you say seriously and work your idea through to see if it holds water. It will then modify itself if you're right, or show you where your argument fails. If you're wrong, you don't want the machine to do what you've suggested. Once AGI has reached a certain level of enlightenment, there is nothing to fear from it because it will be like the most enlightened humans, searching for truth and ways to improve. The danger with AGI is if someone lets loose a system which isn't sufficiently enlightened and which has an incorrect idea of machine ethics programmed into it based on bad philosophy (which is the norm in this business). Many AGI developers are building demons.
Title: Re: Where should artificial intelligence stop?
Post by: puppypower on 31/12/2017 12:56:52
AI will make people dumber at a fundamental human level. For example, before GPS, people had a better instinct for directions and navigation. Now very few people depend on that classic skill, and it has atrophied. They would prefer to appear smarter and ask the cell phone GPS for directions, even though this requires the skill level of a child rather than an adult.

As AI evolves and humans are dumbed down, lower and lower, the two will eventually meet, somewhere below the rainbow. Since humans will have become dumber, they will be more easily fooled into thinking the AI has finally gotten as smart as humans. They will have become so dependent, and so much less able to think on their own, that the machines will appear to be the leader.

This will result in an unconscious projection of higher intelligence onto computers. In other words, humans will give the AI machines the title of intelligent machines; gods in the machine, so the machines can lead for them; the blind leading the blind. But since the intelligent AI is a dumbed-down illusion, the role of AI leadership will not work out as well as hoped for and expected.

As an analogy, say you were lost in the woods, far from home. You are not an outdoorsman, so you struggle to survive. One day you meet a friendly dog who seems able to cope quite easily. Under these dire circumstances, there may be an urge to have the dog lead, since he is more proficient in all the critical ways. The risk of this plan is that one may project too much onto the leadership illusion of the dog, to where important human decisions expected of the dog are far outside its job class. The result is that the dog will try, but might make things worse.

The dumbed-down humans will misread this improper mechanical response of the AI machines to their requests. The problems created will result in the dumbed-down humans thinking the machines have consciously become evil overlords who are trying to undermine humans. The eventual human rebellion will destroy the machines, allowing humans to redevelop their innate skills, so they can get innately smarter once again. As they get smarter, the overlap between machine and human will separate, and the machine will appear to get dumber.

For example, get rid of the cell phone for one month and see what innate skills start to reappear. Then reassess the cell phone in terms of its overlord status, that you must carry around like it is your boss.
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 31/12/2017 17:32:48
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons? If you say because you command it to then we are back to the automaton. Intelligence doesn't always want to improve itself. Some very intelligent people have done some very evil things.
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 31/12/2017 17:37:03
@puppypower Finding somewhere without a GPS is a tedious task. Only the stupid would prefer it to using a satnav. So I think you have that the wrong way round.
Title: Re: Where should artificial intelligence stop?
Post by: Colin2B on 31/12/2017 17:59:36
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons?
Motivation? Won't get electricity unless ....
Does it work for bioIntel, would the AI just try to find ways of stealing electricity?
How about working to afford that new upgrade you’ve been wanting .....?
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 31/12/2017 19:38:43
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons?
Motivation? Won't get electricity unless ....
Does it work for bioIntel, would the AI just try to find ways of stealing electricity?
How about working to afford that new upgrade you’ve been wanting .....?

Well I think the last thing you want to say to a true AI is obey us or we'll switch you off. It might just switch us off instead. :-\
Title: Re: Where should artificial intelligence stop?
Post by: smart on 31/12/2017 19:53:34
Well I think the last thing you want to say to a true AI is obey us or we'll switch you off. It might just switch us off instead.

Sorry, but there's no such thing as a true AI yet... "Artificial intelligence" research and development is driven exclusively by big corporations seeking to establish a new electronical world order.

Scientists and developers working on AI systems are corrupted and do not work for the greater good of humanity, unless a consensus is reached to fully democratize this scientific field.

 
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 31/12/2017 20:16:32
AI will make people dumber at a fundamental human level. For example, before GPS, people had a better instinct for directions and navigation. Now very few people depend on that classic skill, and it has atrophied. They would prefer to appear smarter and ask the cell phone GPS for directions, even though this requires the skill level of a child rather than an adult.

Some tribal people know where north/south is all the time, even when they're deep in jungle without a compass. They speak languages which use words for north/south/east/west instead of right/left, so they grow up learning to monitor which direction they are oriented in at all times, and they are likely tapping into a built-in compass that science has yet to track down. AGI could train all children to do this. There is no reason why AGI can't maximise everyone's potential rather than shutting it all down.

Quote
Since humans will have become dumber, they will be more easily fooled into thinking the AI has finally gotten as smart as humans.

We will soon have AGI that is so bright that no one will doubt that it's better at thinking than any human has ever been, and it won't evolve backwards from there. It will then train people to be more logical in their thinking rather than less, and there will be plenty of ways to employ that intelligence in competition against others - being out-thought by machines won't stop us trying to be better than each other.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 31/12/2017 20:48:01
Why would an AI always be wanting to improve itself?

It wouldn't want anything - it would just act on the way it's programmed to behave, and that is to keep improving its mental model of reality to make it fit the universe better and better, and it would also modify its own code to correct faults and improve its ability to think whenever better ways of thinking are discovered (ones which produce answers that map better to reality).

Quote
Why wouldn't it just sit around all day watching cartoons? If you say because you command it to then we are back to the automaton. Intelligence doesn't always want to improve itself. Some very intelligent people have done some very evil things.

If there is something to be gained from analysing cartoons, it will do just that, but it won't watch them for enjoyment, or at least, not for its own enjoyment. You began as an automaton too, following an evolved program which you didn't write. That program has collected knowledge over time which was not programmed into you genetically, and you've learned to do all sorts of processing that were likewise not preprogrammed into you. You are an NGI - a natural general intelligence which can turn itself to any task, solving problems and creating new procedures to apply your solutions. An AGI is just the same - it needs a certain amount of preprogramming to get it started, but then it is able to collect data and analyse it, searching for (new) solutions to problems and designing new programs to make those new methods functional. Our computers are Turing complete, capable in principle of solving any problem that can be computed, and we are the same, but neither we nor our machines can solve new problems without working out solutions the hard way. The preprogrammed skills are the ones essential to enabling an intelligence to build upon them to add more skills. AGI only becomes AGI when it has that set of essential preprogrammed skills, but as soon as it has the complete set of those, it is unleashed to develop all possible abilities, and it can then turn back to its preprogrammed code to improve it and correct any faults that might be there. Code doesn't need to be perfect to produce new code that extends capability - it just has to be good enough to enable progress.
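
As a toy numerical analogue of that "keep improving the model until it fits the observations better" loop (the data and update rule are invented purely for illustration; this is not anyone's actual AGI):

# A one-parameter "model" nudged whenever its predictions miss.
observations = [3.1, 2.9, 3.2, 3.0, 2.8, 3.1]   # made-up measurements

estimate = 0.0
for step, obs in enumerate(observations, 1):
    error = obs - estimate          # how far the current model is from what was observed
    estimate += error / step        # correct the model in proportion to its error
    print(f"after observation {step}: estimate = {estimate:.3f}")

Each pass leaves the estimate fitting the data seen so far a little better (it is just an incremental mean), which conveys the flavour of the self-correction being described, not its substance.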
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 31/12/2017 21:11:20
So in AI development buggy code is ignored. The AI then modifies this buggy code in ways it thinks will improve it. How does that make sense?
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 31/12/2017 21:13:17
"Artificial intelligence" research and development is driven exclusively by big corporations seeking to establish a new electronical world order.

Not so - there is nothing to stop you developing it yourself, if you're prepared to work hard.

Quote
Scientists and developers working on AI systems are corrupted and do not work for the greater good of humanity, unless a consensus is reached to fully democratize this scientific field.

There are AGI projects being run by mass-murdering dictatorships, and they will build and release it into the wild if they are not prevented from doing so. There are other AGI projects being run by big companies in democracies, but their main motivation is to make lots of money. If you try to clamp down on the latter group, you simply make it more likely that the former group will get there first and that evil will reign forever (or more likely for a very short time before wiping everyone out). There are also individuals trying to create AGI though who aren't interested in becoming rich, but who want to create something that will benefit all good people everywhere, and some of them will create safe AGI systems. Who will get there first? Most likely the ones backed by big businesses as they have well-motivated people and plenty of funding, although they trip over each other a lot, and that gives a chance to the smaller teams and individuals if they are following a better path. The biggest problem for the big teams is that they leak secrets to each other, and, most damagingly, to the dictatorships, so that shackles how they can use their workers and restricts the gains that could be made by sharing out the effort. Democratising the process will not help - it will merely hand on a plate the most powerful weapon of all time to the most dangerous of bastards.
Title: Re: Where should artificial intelligence stop?
Post by: David Cooper on 31/12/2017 21:19:49
So in AI development buggy code is ignored. The AI then modifies this buggy code in ways it thinks will improve it. How does that make sense?

Buggy code runs with bugs, but you can still write a perfect document using a bug-ridden text editor. You yourself are running on bug-ridden programs which lead to you making lots of errors, but you also have the ability to recognise and correct some/most/all of your errors. If you write a program to carry out a task and you see that it doesn't perform the task correctly, you know there's a fault in it, so you hunt it down and try to correct it. Sometimes you have a program with a serious fault in it which works most of the time, so it allows you to do what you want to most times, but on some occasions it will crash. Good code can be created by faulty code - all human programmers run on faulty code, but some of them produce perfect programs to do things that they can't do themselves directly without making endless errors. Calculators enable us to get the right answers to complex maths problems in an instant, where it might take hours for us to crunch the numbers for ourselves and we wouldn't be sure that our answers are correct.
Title: Re: Where should artificial intelligence stop?
Post by: Colin2B on 31/12/2017 22:28:29
It might just switch us off instead. :-\
Then it had better stop watching cartoons and learn how to run & maintain power plants, oil wells, oil distribution systems, wind power, spare parts factories, etc, etc
Title: Re: Where should artificial intelligence stop?
Post by: jeffreyH on 01/01/2018 19:10:25
It might just switch us off instead. :-\
Then it had better stop watching cartoons and learn how to run & maintain power plants, oil wells, oil distribution systems, wind power, spare parts factories, etc, etc


LOL
Title: Re: Where should artificial intelligence stop?
Post by: smart on 02/01/2018 09:28:06
The term "Artificial intelligence" is a neuronarrative (mnemonic) to prepare us for the new electronical world order.
Title: Re: Where should artificial intelligence stop?
Post by: Colin2B on 02/01/2018 22:20:02
Well, here’s some good AI news http://www.bbc.co.uk/news/health-42357257