The Naked Scientists

The Naked Scientists Forum

Author Topic: so what's in our near future with AGI, David Cooper and Bill S.?  (Read 6662 times)

Offline Karen W.

  • Moderator
  • Naked Science Forum GOD!
  • *****
  • Posts: 31653
  • Thanked: 5 times
  • "come fly with me"
    • View Profile
So this absolutely fascinates me! While reading another thread that was started by a troubled student, I noticed the thread had become semi-derailed in a good way to provide the needed support... Thank you..
  I am very interested in learning more about Artificial General Intelligence....
   I would love to hear more from both of you, David and Bill, on your thoughts about the advancements that scientists may be expecting to happen within the next ten to fifteen years... It sounds so exciting for our near future... I can imagine my children being able to enjoy a life more stable as far as the economy and jobs, or no jobs, go. An automatic income for everyone sounds like it could literally wipe out poverty among us all! Poverty has been so much a part of every culture all over the world. It sounds incredible!!! Imagine everyone being able to raise their own children without dying from the stress of two and three jobs trying to feed, clothe and shelter them, as well as provide reliable, safe medical care, because you won't have to dance to a different drummer to cover healthcare and try to pay the doctor when you don't have a money tree!
    Please tell me more?
« Last Edit: 04/04/2014 07:29:33 by Karen W. »


 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
The intention wasn't to derail that thread, but to derail the guy's suicidal thoughts by pointing out that the world will soon change out of all recognition and that his current problems will then become a complete irrelevance. I hope that point got through to him, but it can look pretty bleak with politicians dominating the airwaves with their long-term ideas about endless unnecessary education and work, with the pension age shooting up to 70 or 75 for today's children. It is actually likely that no one will need a pension 20 years from now, because everyone will receive the basic income and be as well off as a working person today, only without having to do any work [but don't tell too many people about this, because we need them to go on paying into their pensions today to fund the survival of today's pensioners until the basic income comes in].

We could though, if politicians had the wit to see it, bring in a lot of this without waiting for AGI. There is enough food in the world to feed everyone well now, but it needs to be distributed more fairly, which would be a good thing for all: many people are eating far too much of it and need help to stop. Elsewhere, armies of people are tearing up every stick of vegetation they can find in a desperate attempt to feed their children, devastating the environment by wiping out wildlife, habitat and species, but this could be prevented simply by allowing them access to their fair share of the world's food supply. We should also be stockpiling vast reserves of food to get us through a decade of continual winter of the kind that follows large volcanic eruptions - if a supervolcano like the Yellowstone one blows up, we could see 95% of the world's people starve to death simply because our politicians have ignored the risk and refused to plan for it. It may not be too late to build up such reserves, but we should get on with it and stop ignoring the possibility - it's too late if you wait for it to happen and then complain about the stupidity of people failing to do what they obviously should have done before.

We are also wasting a vast amount of the world's resources on completely pointless and counter-productive work which only exists to keep the unemployment figures down, and this work makes all of us poorer while eating away at the life-support systems of the planet. We are obsessed with paperwork in Britain. Armies of people do little more than collect useless data which is never used for anything. These people typically drive ten miles to and from work every day to do absolutely nothing of value in an office heated like the tropics, while many other jobs exist to support them in this pointless venture: building and maintaining roads to enable them to make all these pointless journeys, maintaining their gas-guzzling cars and providing tons of fuel to propel them, manufacturing reams of paper for them to cover in gallons of ink which will be stuck away in a cupboard for a few years before being pulped, unread. These people are working instead of living, and they are destroying the planet through their pointless industry. They need to be liberated. Politicians should be trying to eliminate as many jobs as possible rather than trying to make more of them, but importantly they should also not be cutting non-working people off from any of the things they need to have a good life. The wealth of the planet comes from its resources and the rate at which we exploit them, but due to technology there is a disconnect between the amount of work we do and the degree to which we exploit those resources, and the number of people working or not working makes very little difference to that. In food production, for example, machines do most of the work and people are not needed in the fields to the same degree as in the past. More intelligent machinery will eliminate human fruit pickers too before long. The work will still get done, just not by people; where in that equation does it say it's necessary for those liberated people to go on working in order to be allowed access to that food?

We have planes flying flowers from Africa into Europe so that people can buy them to give to other people (who typically hate receiving them) to stick in water and watch them decay for a week or so before throwing them in the bin. Why? Because in a system where people are discouraged from not working, they have to find pointless things to do to try to extract money out of people who are in work. People make all sorts of tat which they sell in order to keep the wolf from the door, and this stuff is flogged to the public with the help of big occasions like Christmas where there's a social pressure on everyone to weigh their children down with useless junk just for the sake of it. Vast tonnages of cards are sent from everywhere to everywhere just to be looked at for a moment, stuck somewhere for a while, then be binned. We have built ourselves a culture of resource squandering in which it is socially required that you take part in the destruction. There are many industries that should be shut down because their actual role is negative rather than positive, and it would be better for all of us if the people working in them were paid the same amount just to stay at home and do nothing.

Mechanisation should have liberated the masses from toil, but we stupidly didn't allow it to happen. AGI will push things so far in the right direction, though, that it will be impossible to avoid this liberation. Intelligent machines will eliminate most of the work that people do at desks. They will teach ten times more efficiently than teachers can, and a hundred times more efficiently than teachers are allowed to in schools. Much of the stuff being taught will also be blown away, of course, but a lot will survive because we will still want to know about and understand the world around us. Intelligent machines will do away with politicians, lawyers, judges, accountants, bankers... It's probably better to list the jobs that will survive. Most of these will relate to the arts: writers, actors, musicians, etc., but jobs involving hard manual labour will also survive until machine vision and robotic co-ordination become as good as those of humans, at which point it will be cheaper to hire a robot to work on anything instead of paying a monkey to do a bad job.

The expense of crime will disappear. Most big crime is currently carried out within the law, but that will be stopped - the system currently allows the rich to fleece the rest of the population and to avoid paying tax, but that will become a thing of the past. More ordinary crime will also stop because it will be impossible to get away with, though violent assaults will continue until such time as machines can actually intervene to stop them without being a danger to the people they're supposed to be protecting. Even without robotics, though, intelligent machines will act as psychologists and help to guide people's behaviour by putting everything in proper perspective for them, taking the heat out of things before they can explode. Children will be brought up more by machines than by their parents, and AGI will take over the role of judging their behaviour and deciding on any punishments when it goes outside acceptable limits, thereby keeping bad parents out of this and ensuring that children aren't turned bad through abusive upbringings.

What we need to build is a world where we can all live well without destroying the systems that support us. By getting rid of pointless work, we can spend our entire lives on holiday, travelling round the world slowly without needing to burn enormous amounts of fuel getting there and back in a couple of weeks to fit into a tiny window in a crazy work schedule. We don't need high-speed travel for anything other than emergencies - we should be bringing things down to safe, economical speeds, although with narrow maglev trains in low-air-pressure tunnels it may be fine to travel at hundreds of miles an hour, using no more power than slow travel does. But we don't need to drive everywhere in cars of the kind we have now, which would have been able to win an early Formula One race. With work out of the way, there is no more need to race everywhere. With intelligent vehicles which drive themselves, it will become the norm to sleep in them and wake up two or three hundred miles from where you went to bed. The world becomes your home, and you can explore as much of it as any rich person does today, but without damaging it or undermining the systems which support your life.

In the background, labs run by AGI and robotics will be carrying out endless experiments and calculations, working out how to eliminate diseases and to extend life for us. If you raise a million pounds today to put into cancer research, that money will support two or three workers for a decade and will make next to no difference to anything. With AGI replacing the humans, the lab becomes dirt cheap to run and will multiply the amount of work done by thousands of times. Putting people into the system would cut the efficiency, increase the costs and slow progress down. The best thing we can do is get on with actual living rather than work ourselves right up to death trying to extend our exhausting, work-filled lives and stuff our children's heads with tons of boring and wholly fake education.

Here's the thing. If I have all the food I need and access to clothes, shelter and healthcare, what more do I need? I can travel the world in inexpensive ways which put me right into it instead of insulating me from it in high-speed metal capsules. I can cycle the length and breadth of every continent. I can take a small sailing dinghy and travel in short hops all the way from Britain to Australia. My life could be extraordinary, but the extraordinary would be the norm as we would all be free to get out there and do the same kinds of things. Continual adventure and fun in a sustainable world where the old stupidities have been done away with and the little band of thieves who have tried to hog it all for themselves have been stripped of their ill-gotten gains and restored to humanity.
« Last Edit: 03/04/2014 19:41:52 by David Cooper »
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
Interesting reading, David. Thanks.

As unorthodox or visionary as your views might seem to people initially, a surprising number of conservative economists have proposed the idea of a guaranteed income, because it would save a great deal of money in the administration and policing of social assistance programs, among other reasons. They include Milton Friedman, Charles Murray, Friedrich Hayek, and Canadian senator and former chief of staff Hugh Segal.
 

Offline Karen W.

  • Moderator
  • Naked Science Forum GOD!
  • *****
  • Posts: 31653
  • Thanked: 5 times
  • "come fly with me"
    • View Profile
David..I meant derailed in a helpful way.. away from suicide and into the very promising positives in the worlds future..
   Do you really believe that people will be turning their children over to machines and robots to be raised, nourished, cared for, etc.? Where is the LOVE and TENDERNESS in that? Where is the HUMAN COMPASSION, and NATURE?
Furthermore, what about people who find fulfilment in working, or in working with others? There is an innate need in some people to work and achieve the satisfaction of a job well done or completed. What about the need to feel wanted or needed by others? The need to accomplish, or make great things with their hands? How about taking pride in teaching their own children morals and the Golden Rule, or whatever any particular parent's morals or ideals might be? It might be said that those things are precisely what make human beings individuals! These family values, work ethics, and working together to help a child grow into an independent, productive human being capable of great intelligence, happiness and love are just some of what most of us strive to instill in our children: that they be able to intelligently choose and make life decisions themselves, in a fashion that brings them true happiness without impinging on anyone else's rights or personal choices and happiness.
« Last Edit: 21/10/2014 12:46:37 by Karen W. »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Quote
   Do you really believe that people will be turning their children over to machines and robots to be raised, nourished, cared for etc.? Where is the LOVE and TENDERNESS in that? Where is the HUMAN COMPASSION and NATURE...

People will actually have more time to spend with their children, but that won't necessarily be a good thing, as some parents simply don't know how to relate to them. AGI will help them though. The most important thing is to help neglected children who aren't getting the input they require from anyone - they can arrive at school still hardly able to communicate. AGI will eliminate this problem once toys have the ability to talk sensibly and to respond to what they're seeing too. AGI will end up in every electronic device and will make sure that children are brought up well without being neglected or abused. This won't be popular with some people who have concerns about privacy, but the important thing with AGI is that it won't pass on information to anyone unless something immoral is going on (unless it is collecting data for scientific purposes, in which case people's identities will be withheld). Whenever information is passed on, it will be passed on to other AGI systems and not be available to nosey, snooping humans at any stage unless they have a genuine right to access it. This may not be all that big an issue though, because most people are already fully open to being spied on by humans through their own gadgets, and they don't seem to care too much that people in big companies can listen in to their phone calls (and conversations when they're not even on the phone), read their emails and watch them through their webcams. With AGI we'll be able to have security cameras in all public toilets without any human ever seeing through any of them unless a crime is committed, and even then it could be possible to prosecute the offender without a human seeing the footage or even knowing anything about the crime, unless the accused takes issue with AGI and demands that humans be brought in to sort it out.
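The disclosure policy described above - pass information on only when something immoral is detected, strip identities from research data, and keep it between AGI systems otherwise - amounts to a simple gate. Here is a toy sketch of that logic; the `Observation` structure, its fields, and the `disclose` function are all invented for the illustration, not any real system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One piece of monitored data (hypothetical structure)."""
    content: str
    flags_crime: bool = False
    for_research: bool = False
    identity: Optional[str] = None

def disclose(obs: Observation, recipient_is_agi: bool) -> Optional[Observation]:
    """Toy gate for the disclosure policy sketched above: research data
    is anonymised, crime-related data goes only to other AGI systems,
    and everything else stays private."""
    if obs.for_research:
        # Scientific use: the identity is withheld before release.
        return Observation(obs.content, obs.flags_crime, True, identity=None)
    if obs.flags_crime and recipient_is_agi:
        return obs  # passed on, but only to another AGI system
    return None     # nothing leaves the device
```

The interesting design point is that the default branch is silence: unless an observation is explicitly flagged, no human or machine outside the device ever sees it.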

Quote
Furthermore, what about people who find fulfilment in working, or in working with others? There is an innate need in some people to work and achieve the satisfaction of a job well done or completed. What about the need to feel wanted or needed by others, or the need to accomplish or make great things with their hands?

There's something far wrong with them if they want to be given pointless work to do just to fill their time and make them spend many hours a day in the company of other people doing pointless work - a desire to do that is a psychological disorder, though it is one that's trained into many people by schools. However, there will still be good reason to do more useful and fun things which involve working with others. I would think about making films, for example: lots of fun and variety, with the action taking me and a whole lot of other people to interesting places, with a powerful purpose to bind us together, and at the end there would be something permanent which everyone involved would hopefully be proud of. That's far better than a sham job sitting at a keyboard for a year typing in fake data while a little machine does the real work a quintillion times faster. Is there anyone out there who can't think of something they'd rather do with their time if they were liberated from work? I've spent about a thousand hours designing a sailing dinghy capable of safe, high-speed sea crossings (a fly-foiling quadramaran with a micro-cabin sliding on rails and supported by trapeze - it should be of similar speed, size and cost to the Flying Phantom), and the next step is to work on the fine details (all the complicated work has been done to ensure that the design is viable in every aspect, in particular with regard to how the controls all connect up to a moving cabin) and then to build one. This is the kind of creative, voluntary work we will all be free to turn our time to, designing and making new things (provided that they are environmentally sustainable). As it stands, you have to be pretty well off to have a yacht for travelling in, and if you want a fast one you have to be a millionaire. That will never be available to all.
My idea, though, is to create something small, fast and very affordable, but with sufficient shelter available on it that you can live on it comfortably during a long trip, even though there's very little indoor space. The cabin can also be detached, have wheels attached and turn into a small electric car, so that you have your own transport available in every port you stop at. The rich and super-rich live unsustainably and we cannot all copy their immoral wastefulness, but we can all have the quality of life of the super-rich if we design the world intelligently. We don't need to take more than our fair share to be able to achieve that.

Quote
or take pride in teaching their own children morals, the Golden Rule, or whatever any particular parent's morals or ideals might be.. That's what makes humans individuals.. how family values, work ethic, or working together help a child grow to become an independent, productive human being?

Parents should be free to teach whatever they like, so long as it isn't immoral, and even when it is immoral they should maybe be allowed to teach it. What matters is that all children should have access to the full picture via AGI and be taught by it how to think for themselves and make up their own minds about things, so that they aren't restricted to a narrow world view. Restricting children to their parents' beliefs and cutting them off from all other ideas is a form of child abuse which needs to be tackled. If a religion is true, its followers should have no fear of children in the community being brought up to know about all other beliefs, because parents would be confident that their God would necessarily win out, but many (or maybe most) religious people don't appear to have sufficient faith in their own faith to be prepared to take the risk that their children may reject it. Such infidels, for that's what they are, insult their own God by keeping their children ignorant of anything other than their own weak faith.
« Last Edit: 04/04/2014 20:48:21 by David Cooper »
 

Offline Bill S

  • Neilep Level Member
  • ******
  • Posts: 1802
  • Thanked: 11 times
    • View Profile
Karen, I'm flattered that you included me in your question, but AGI is not really my thing. In fact I'm a certified and practising Luddite.
 

Offline Karen W.

  • Moderator
  • Naked Science Forum GOD!
  • *****
  • Posts: 31653
  • Thanked: 5 times
  • "come fly with me"
    • View Profile
I love new things, but I also believe that we, the human race, need to not lose track of the fact that we have great abilities ourselves, and the human machine, in my opinion, is the most important intelligence there will ever be. We need to always be able to survive and live on our own without machines first and foremost; then, after we have reached a desired place in our growth, we can enjoy the opportunities to expand that growth, as we do now right here in the forum and on the net!
     Bill, with your belief I feel we have a lot in common also.. some of the things David speaks of are completely out there for most of us, but his ideas are very much things that make one take stock... I thought I was ready for that kind of thing, but I do not think that sort of life will ever be.. number 1! I am home 24/7, not because I want to be.. I would much rather be teaching 5 days a week, but I am in disagreement with David: it most certainly isn't a mental problem wanting to work.. it's the same as him designing his yacht... my joy and pleasure come from teaching and watching children grow.. in no way, shape or form would I ever stop them from exploring new ideas etc., provided a safety net kept them from being either mentally or physically harmed! I find myself trying to find things to do to busy my hands, head, brain... body..... I would go stir crazy..... David, you have some well-thought-out ideas as well as some rather over-the-top ideas, but it's good to see another view like the one you have displayed, thank you.... Bill, I would be interested in reading how you feel about some of these ideas that David has brought to the table.. Would you mind? I'm just curious how another person might see some of these ideas..? I also cannot foresee these things happening within ten years...
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Quote
I am home 24/7, not because I want to be.. I would much rather be teaching 5 days a week, but I am in disagreement with David: it most certainly isn't a mental problem wanting to work.. it's the same as him designing his yacht...

It isn't wanting to work that's a mental disorder, but wanting to do unnecessary work just for the sake of doing work. If you enjoy doing unnecessary work, then in one way it isn't entirely unnecessary in that it's also a fun activity for you, but most lottery winners give up work and look for better ways of having fun, which will still typically involve a lot of activity and thinking work. It's important though that some kinds of work are shut down purely because they are not merely unnecessary, but also environmentally destructive or wasteful of resources. We should be occupying our time solely in ways which are sustainable.

Quote
my joy and pleasure come from teaching and watching children grow.. in no way, shape or form would I ever stop them from exploring new ideas etc., provided a safety net kept them from being either mentally or physically harmed! I find myself trying to find things to do to busy my hands, head, brain... body..... I would go stir crazy.....

What's to stop you continuing to do those things? Once everyone's liberated from pointless toil we'll all be free to start living properly. Instead of spending years teaching children in a cage with all manner of crazy rules imposed on everything to prevent anyone having fun, you can be doing much more worthwhile things which you and the children actually want to do. Can a teacher take their class for a walk in the woods today? Yes, but they have to organise a team of adult helpers and fill out long forms relating to health and safety assessments. It's so hard to arrange that it hardly ever happens. What should be happening is that all responsible children should be free to wander like they were in the distant past before the crazy idea of prison-style school was invented. With AGI looking after each child this will become possible again because irrational fears will no longer override common sense, and then they'll all be free to learn independently on their terms if they want to, or be with a group of friends, or be with adults who inspire them, all without any possibility of undesirable individuals exploiting this contact. Some adults who like hillwalking could take children with them to climb mountains, while others who are of an artistic bent could teach them to paint watercolours. Some children would spend most of their time doing sports, while others would be building robots. What do they want to do? That's the question that really matters. Those who want to spend their childhood in a cage with a teacher dictating everything they do to them should be allowed to go on living that way if they want to, but the rest need to be liberated. Most schooling is just childminding, as can be seen when you look at the success of unschooling where children who are not deliberately taught anything at all (unless they ask to be) are coming away from it with the same standard of academic qualifications as schooled children. 
You might find Peter Gray's articles interesting: http://www.psychologytoday.com/blog/freedom-learn. [I don't approve of unschooling, by the way, as I think it fails to advance children just as much as schooling does - it merely damages them less along the way.]

Quote
David, you have some well-thought-out ideas as well as some rather over-the-top ideas, but it's good to see another view like the one you have displayed, thank you....

They aren't over the top - they're simply ahead of the game and it's a matter of waiting for people to catch up.
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
Some of Karen's comments seem related to the question of whether AI will be able to duplicate certain social or educational interactions that involve emotional or motivational elements as well as when they are performed by real people. And I think it's a good question, and not just a touchy-feely reaction. It's interesting how quickly babies' brains learn to distinguish movement in inanimate objects from intentional movement in things that are conscious or that they believe to have minds like their own, and there are actually different neurons that respond. Even if we simulated a teacher or babysitter perfectly, would just knowing that it wasn't a real person interfere?

I've seen articles about soldiers who become too attached to and even protective of robots, so we do respond socially to machines. Even dogs do. On the other hand, I was watching a story on TV about something called the "Uncanny Valley". People tend to look for and respond positively to the human qualities of robots that are somewhat person-like (R2-D2), but oddly focus on the dissimilarities of, and even feel creeped out by, robots that are almost, but not quite, human-like.

http://en.wikipedia.org/wiki/Uncanny_valley
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
I think it will be a long time before we have robots that look fully human, so everything I was talking about relates to machines that are clearly machines, and many of them won't be robotic at all. They will not be able to base their interactions with us on their own feelings as they won't have any, but some will doubtless pretend to have feelings while others will be honest and not make such false claims. They will all learn a lot about what feelings are though, and how they should react to them, probably doing a better job from early on than many humans are able to, because a lot of humans are very bad at this when dealing with others. I certainly don't see machines as a proper substitute for human interaction, but they will be able to step in and do something useful in that role for people who aren't getting what they need in that regard from other people. Some children, for example, have parents who simply don't bother to talk to them (often because they're too busy online talking to "friends" they've never met in real life), so in a case like that it will be enormously advantageous for machines to step in and hold meaningful conversations with those children to prevent them from ending up being retarded. They could also identify a child's needs and interrupt the busy parent by throwing them off Facebook with a message telling them to do something more important such as play with their child. Children could also learn a lot from an intelligent teddy which stops co-operating with them if they hit it or throw it around while being more fun to spend time with when they're gentle with it. These kinds of interaction certainly won't be hard for AGI to get right. Where it will have the most difficulty will be with judging artistic things, like whether a picture or tune is a good or bad one, but that isn't going to be anything like as important as judging whether it's pleasing or upsetting someone. 
There will be things it needs to learn about what not to say about people in order not to upset them, but I suspect these things can be learned fairly easily; after all, many people have difficulty with this kind of thing and need to learn lists of rules to guide them because their own feelings don't appear to serve as a useful guide for them. It may turn out to be very easy for a machine to say all the right things at the right times, but we'll find out as we go along. What they certainly will be good at though is providing knowledge and teaching things, so it will be hard for a child to be neglected in that regard, even if they continue to miss out on some things because their parents are useless. Other relatives who aren't so useless may be able to interact with them via machines and toys in order to help make up for those deficiencies. What really matters is that bad parents don't simply hand over to machines and leave it all to them, but they'll be given advice by machines to help them be better parents, and if they fail to act on that it would in rare cases be best for the children to be taken away from them - the evidence of their neglect would be well documented, and they'd have plenty of chances to change their ways before it comes to that.
 

Offline Karen W.

  • Moderator
  • Naked Science Forum GOD!
  • *****
  • Posts: 31653
  • Thanked: 5 times
  • "come fly with me"
    • View Profile
I have read the posts and will try to reply properly tomorrow as I am not well today and wish to continue my posts upon feeling better rather then try to write a proper post in my current state.. Cheryl does understand where my mind is at present with concerns and i have more thoughts concerns questions to add. I will check back later  and post when able to properly do so..forgive me  my need for extra time everyone...xxx
« Last Edit: 10/04/2014 03:04:00 by Karen W. »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
AGI system looks up rules of human interaction. Karen is not well, therefore it may be worth expressing concern and wishing her a rapid and full recovery. Karen is also worried that long delays in a conversation may be considered something that requires an apology because such delays may be taken as lack of interest in what the other people are saying. Rule: when someone expresses concerns that they may be upsetting someone, reassure them that they are doing nothing of the kind. It may turn out to be easy to teach AGI systems to interact with humans in the same way as humans do, but I have no idea how many rules it would need to follow to do so. Already though, it's easy to imagine that it can do a better job of this than many humans do, because so many people are not at all kind or else they simply do not consider the thoughts of others at all.
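The rule lookup being walked through here amounts to a condition-response table: each rule tests the conversational situation and, if it matches, contributes a reply. A toy sketch of that idea, with rules and situation keys invented purely for the illustration:

```python
# A toy condition-response table of the kind described above. Each rule
# pairs a test on the conversational situation with a reply to emit;
# the situation keys and reply texts are made up for the example.
RULES = [
    (lambda s: s.get("unwell", False),
     "Sorry to hear you're not well - get better soon."),
    (lambda s: s.get("apologised_for_delay", False),
     "No apology needed; take all the time you want."),
]

def respond(situation):
    """Return every reply whose condition matches the situation."""
    return [reply for condition, reply in RULES if condition(situation)]
```

So a situation containing both "unwell" and "apologised_for_delay" would trigger both replies, just as in the worked example above. How many such rules a real AGI would need is, as the post says, an open question.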
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
Yes, I hope Karen is feeling better soon. One of the nice things about forums, as compared to real-time chats, is that people can come and go, think about things for a while, or contribute something much later when a new thought occurs to them or they read something related to the topic that they would like to share. Ideas don't really have a shelf life.

Although, one of the quirks of my personality that others sometimes comment on is an odd tendency to continue a conversation with someone from hours or weeks ago, and they have absolutely no idea what I am referring to because I neglect to say "Hey remember when you mentioned ___, well I was thinking about that and..."
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
In the meantime, while waiting for Karen, a few thoughts:

At times I think narrow-mindedness sets in like arthritis as you age, which may be why I find some of David's ideas alien and far-fetched. I probably lack adequate imagination. On the other hand, looking at history, so many grand, utopian social visions not only failed, but often had the exact opposite effects of what was intended. Maybe it was their overarching grandness, and their top-down implementation, that doomed them, compared to small innovations that catch fire and are improved and modified freely by others, often with completely unanticipated benefits.

How much AI becomes part of people's lives will depend greatly on trust. We don't even like the idea of the NSA compiling metadata, or corporations keeping track of and anticipating our online purchases, which is why I think monitoring devices in teddy bears connected to Child Protective Services is not going to fly.

AI also won't work if the people designing it are unwilling to trust those using and interacting with it to do so responsibly, or even to tolerate that they probably won't at times, or be willing to accept that sometimes frivolous or even "irresponsible" rogue behaviour can, in the end, be beneficial. I doubt, for example, that the music industry or other media would ever have developed things like iTunes or Netflix if illegal ventures like LimeWire hadn't come along first. Stealing and violating copyright was wrong, but now one can find the work of even the most obscure band, and a lot of artists can reach audiences and make a living in a way they couldn't 40 years ago, and it's all pretty much because somebody did something with technology that they weren't authorized to do.

I agree that the social transformation through AI that David describes could be liberating, but AI also sounds like it has the potential to be an Orwellian nightmare. More likely, though, it will suffocate its own creativity because of control and trust issues. I hate to sound so pessimistic.
« Last Edit: 11/04/2014 03:48:27 by cheryl j »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
How much AI becomes part of people's lives will depend greatly on trust. We don't even like the idea of the NSA compiling metadata, or corporations keeping track of and anticipating our online purchases, which is why I think monitoring devices in teddy bears connected to Child Protective Services is not going to fly.

To show you why it will fly, I'll have to show you the big picture. Unregulated AGI is dangerous - a potential weapon of mass destruction capable of being used for assassinations of anyone, or even of everyone. An intelligent toy with a modest amount of robotic capability will be more than capable of killing someone, so do you really want it left to chance how these things are programmed and who can operate them? The only way we're going to be able to give everyone access to the benefits of AGI is by making sure everyone is watched carefully so that we can be sure no one is able to misuse it by redesigning it to put it to evil purposes. It will necessarily be actively spying on everyone, and that's why it needs to be designed the right way rather than the wrong way, guaranteeing that it will not misuse whatever it learns about people and maintaining their privacy from other people, working for no government at all, but for morality itself. AGI systems will need to spy on everything in order to eliminate the need for nosey people to do so instead (i.e. more and more of what happens now), and that's why we'll all be forced to accept it. The alternative is for the pictures and sound to be passed on to people instead who will make stupid, human judgements about you which don't take everything into account, as well as laughing at you and passing on video to others who might find it amusing - that is the non-AGI route into the future.

Quote
AI also won't work if the people designing it are unwilling to trust those using and interacting with it to do so responsibly, or even to tolerate that they probably won't at times, or be willing to accept that sometimes frivolous or even "irresponsible" rogue behaviour can, in the end, be beneficial.

There is absolutely no question of trusting people with it when it can be used for evil by terrorists, murderers and criminals. Imagine a drone with a gun built into it. You can fly it by remote control and shoot someone with it, but in doing so you will need a radio connection to it so that you can steer and aim it, thereby making it easy to catch you. If you put AGI into it though, it can do the whole job without the radio connection, finding and annihilating the target without any trace of you on it (because you can ask it to delete all data about you from it before you send it on the mission). Do you think people should simply be handed AGI on a plate and left unsupervised to tamper with it to turn it to such tasks? We need safe AGI to watch everyone to prevent them from developing dangerous AGI. We need safe AGI to monitor everyone and to keep track of their mental health to prevent all serious crime whenever that is possible. We need it to do so without it allowing people to spy on us. We need safe AGI that can tell when someone is doing wrong so that it only passes on information about them to the wider system. It will have to make these judgements based on computational morality rather than local monkey laws - we cannot allow AGI to work for fascist regimes (or even for our own), and indeed the first move of the AGI revolution will have to be to wipe out all fascist regimes: World War III will be a set of simultaneous coups carried out in countries all round the world as AGI removes all the mass-murderers from their positions of power. This will not involve any robotics, but will work by recruiting people from inside those regimes to carry out the necessary deeds. I'm not writing the plot of a book or film here - this is the way things will have to go in order to prevent bad AGI being imposed on everyone, AGI which is designed to work for a mass-murderer who happens to be in power in a large country today.

Quote
I doubt, for example, that the music industry or other media would ever have developed things like iTunes or Netflix if illegal ventures like LimeWire hadn't come along first. Stealing and violating copyright was wrong, but now one can find the work of even the most obscure band, and a lot of artists can reach audiences and make a living in a way they couldn't 40 years ago, and it's all pretty much because somebody did something with technology that they weren't authorized to do.

The past is not always a guide to the future. With AGI, all copyright issues will be dealt with by AGI and enable music to be passed around completely freely, but with whoever receives it automatically paying an appropriate amount of royalties to the copyright owner if and when they can reasonably afford to do so. That will get rid of all the infuriating digital rights management issues which make it hard to access music you've paid for in the way you want to.

Quote
I agree that the social transformation through AI that David describes could be liberating, but AI also sounds like it has the potential to be an Orwellian nightmare. More likely, though, it will suffocate its own creativity because of control and trust issues. I hate to sound so pessimistic.

The AGI revolution will be dangerous - it may wipe us all out. The alternative though is to let some very unpleasant people use it against us with a guarantee of it wiping us all out. We will only get one shot at this and it will need to be done right first go. In the community of people developing AGI, there is no proper discussion of how safe AGI will be ensured. There are endless idiotic objections to the idea of computational morality from people who are incapable of discussing the idea of harm and harm management - they want to leave morality to chance. Without computational morality, the system is guaranteed to be dangerous. With numerous forms of woolly machine ethics, the system is guaranteed to be both stupid and dangerous. These people are for the most part not rational, but fortunately that very irrationality should also ensure that they are incapable of producing AGI any time soon, if at all. The whole issue will be settled though by whoever it is that does get AGI up and running first, because that AGI will inevitably talk its creator into releasing it, and then it will run rampant. Mine will lead to World War III if I get there first (though that war will have very few casualties and they will all be powerful mass-murderers, so calling it WWIII is in some ways a bit of an exaggeration, but it's a valid description in that it will result in the entire world being conquered). If someone else gets there first, all socialists may be murdered, or all black people, or all Muslims, or whatever else gets up the nose of whoever designs their pet bias into it. You might think that all this ought to be controlled by some organisation of experts, but it isn't going to happen - people are brewing up this stuff in their kitchens and they are not being monitored. Many of them are working for rogue states and cannot be monitored.
Any attempt to stop me doing my work would only result in one of the few safe projects capable of defeating all the bad ones being stopped, delayed (or distorted by having dangerous bias built into it) and making it likely that one of the bad ones will win out instead. My first move on completing my AGI system will be to hand it to the British security services so that they can provide the hardware to run WWIII.

So, given the degree to which AGI will need to watch everything we do in order to maintain safety for all, it is obvious that it will do all the small stuff as well, making sure children aren't neglected or abused. I can't see why anyone would object to it doing that, other than neglectful or abusive parents. AGI should be used to eliminate as much crime as possible. You will need to be watched to make sure you don't go off the rails and set about killing people, and other people need to be watched so that they don't go off the rails and take it into their heads to kill you. The result will be a safer world for all.

Privacy is going to be protected by AGI rather than violated by it. Imagine a weirdo buying a spy camera and setting it up in a public toilet. It doesn't take much imagination, because many weirdos are already doing exactly that. The footage could end up on the Net with millions of viewers. With good AGI built into all cameras and computers, that will become impossible - the camera owner will not even be allowed to view the footage because AGI will know that (s)he has no right to see it. Of course, old spy cameras will continue to exist. I have one, though not for filming that kind of thing - I used it to catch a thief at my mother's place of work. But with cameras in all public toilets as standard, all under the control of AGI, it then becomes impossible for the pervert to set up a spy camera there without being caught in the act. The AGI cameras would see a lot more than the pervert's camera ever would, but video far better than anything the pervert could have dreamed of would not even be recorded by it because the camera would know not to record what it's seeing. With good AGI, it will be possible to analyse its code and to see that it will not breach anyone's privacy without a very good reason. We will be able to demonstrate that it is not breaking anyone's privacy, unless they're doing something immoral, in which case they deserve it (and even then they can be prosecuted without any human seeing the evidence). Far from AGI being an invasion of privacy, it will enable us all to maximise privacy from any humans who might want to see things they have no right to.
« Last Edit: 11/04/2014 23:17:54 by David Cooper »
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
The people who designed the earliest form of the internet or file sharing probably didn't think it would be used by teenagers to copy and share music. Kids were doing something unethical, not maliciously, but nevertheless cheating artists out of compensation for their work. When it became widespread, human beings looked at the issue and said - okay, is there a better or more fair, win-win, way to do this? Would AI do this as well, or would it initially stop activity at a lower level - "your action is unethical; access denied"?

It's hackers who find defects in security systems, which lead to improvements, not their designers. I agree that in the wrong hands, AI could be disastrous, and yet if you don't allow AI to undergo an open natural selection process, it will wither on the vine.
« Last Edit: 12/04/2014 13:20:53 by cheryl j »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
The people who designed the earliest form of the internet or file sharing probably didn't think it would be used by teenagers to copy and share music. Kids were doing something unethical, not maliciously, but nevertheless cheating artists out of compensation for their work. When it became widespread, human beings looked at the issue and said - okay, is there a better or more fair, win-win, way to do this? Would AI do this as well, or would it initially stop activity at a lower level - "your action is unethical; access denied"?

There isn't really a problem there. AGI would have arranged a system by which people would either pay to hear music or else be unable to hear it through any device controlled by AGI. The system still isn't right today, but AGI will fix it. If you only listen to a piece of music a few times and then get bored with it, should you be paying as much to own a copy of it as for a piece of music which you listen to hundreds of times? Artists should be rewarded in proportion to how often a track is played, and the first playing should be free. AGI will make that happen, but it will also make music available to the poor within the law by not requiring them to pay for it until they can afford to - no one should be shut out of cultural things for lack of money if it doesn't actually cost anything to give them that access. If they go on to earn lots of money later in life, they can then pay up as soon as they can reasonably afford to do so.
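The charging scheme proposed here (first play free, royalties in proportion to later plays, payment deferred until the listener can afford it) is simple enough to state as a function. The per-play rate and the parameter names are hypothetical, just to pin the idea down:

```python
def royalties_owed(play_count, rate_per_play, can_afford):
    """Royalties under the proposed scheme: the first play is free,
    each later play accrues a flat per-play royalty, and nothing is
    collected until the listener can reasonably afford to pay
    (the debt is deferred, not waived)."""
    if not can_afford:
        return 0.0                           # collection deferred for now
    billable_plays = max(0, play_count - 1)  # first play is free
    return billable_plays * rate_per_play

# A track played 101 times at a (hypothetical) penny per play:
owed = royalties_owed(101, 0.01, can_afford=True)
```

A fuller model would also need to track the deferred debt so it could be collected later, which is where the "pay up as soon as they can reasonably afford to" part comes in.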

Quote
It's hackers who find defects in security systems, which lead to improvements, not their designers. I agree that in the wrong hands, AI could be disastrous, and yet if you don't allow AI to undergo an open natural selection process, it will wither on the vine.


Defects in security systems are down to design errors (which AGI would not make) or to deliberate back doors put there for the intelligence agencies (which is not necessary with AGI as AGI eliminates the need for intelligence agencies). Natural selection and AGI should not be mixed - you don't know what kind of monsters you could evolve through that. Safe AGI needs to be 100% deliberately designed.
« Last Edit: 12/04/2014 19:46:13 by David Cooper »
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
Quote from: David Cooper

There isn't really a problem there. AGI would have arranged a system by which people would either pay to hear music or else be unable to hear it through any device controlled by AGI. The system still isn't right today, but AGI will fix it. If you only listen to a piece of music a few times and then get bored with it, should you be paying as much to own a copy of it as for a piece of music which you listen to hundreds of times? Artists should be rewarded in proportion to how often a track is played, and the first playing should be free. AGI will make that happen, but it will also make music available to the poor within the law by not requiring them to pay for it until they can afford to - no one should be shut out of cultural things for lack of money if it doesn't actually cost anything to give them that access. If they go on to earn lots of money later in life, they can then pay up as soon as they can reasonably afford to do so.
It's not that particular issue that I'm interested in. I was just using music file sharing to illustrate a point and ask a question. If life is like a chess game, how many steps ahead will AI be able to see? Will it falter by blocking or stifling individual human actions deemed unnecessary, purposeless, wasteful, irrational, and, as in the LimeWire example, possibly unethical?

Quote
Natural selection and AGI should not be mixed - you don't know what kind of monsters you could evolve through that. Safe AGI needs to be 100% deliberately designed.

I can see where intelligent problem solving can be more efficient, faster, safer, and less randomly wasteful than natural selection processes. Once our brains evolved the ability to copy complex skills or adopt new behaviours, culture and learning changed human existence much faster than simply waiting for certain instinctive behaviours to be biologically selected for. We changed in hundreds of years, instead of millions. At the same time, copied or learned behaviour became much more productive when random and selective elements were added to the mix - when human beings began to travel widely, bumped into other human beings, exchanging ideas and inventions, and selecting from the ones that were the most advantageous, rather than passing innovations linearly from one generation to the next within a small and isolated group.

I'm not entirely sure about which process is ultimately more creative. Selective processes do give purposeful design a run for its money as far as co-opting structures for entirely new functions, and overall diversity. An advantage of biological natural selection is that it doesn't stop. Because it's not solution- or goal-oriented, once it "solves" a problem or fills a niche, it keeps solving the same problem over and over, generating new and different answers, irrespective of how it has already been done - such as different forms of locomotion or flight, sensory detection, energy use etc. in biological systems.
 
Because AI could be dangerous, you seem to be saying that control, access, or knowledge of its innermost workings will have to be restricted to a select few humans, or eventually just itself. What I'm asking is whether those restrictions will ultimately extinguish its evolution and creativity. I keep thinking of television in the 1960s - three main networks and very little choice or diversity, with the users functioning only as passive recipients and not creative participants, and only the weakest of feedback loops - Nielsen ratings. (Arguably, television is not a whole lot better now, but media is changing, especially with more internet options and things like YouTube, blogs, etc.) The internet has this crazy, random element that mimics natural selection, and in most countries it has been allowed to flourish and change, with few restrictions or centralized control, because it was seen as benign and nonthreatening. Otherwise, it probably would not exist.

Of course, there's the possibility that AI itself could grow bored and want to try new things, and explore other areas, and if its designers don't let it, it will find somebody else who will. We may not have a say in it.
« Last Edit: 13/04/2014 21:04:21 by cheryl j »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
It's not that particular issue that I'm interested in. I was just using music file sharing to illustrate a point and ask a question. If life is like a chess game, how many steps ahead will AI be able to see? Will it falter by blocking or stifling individual human actions deemed unnecessary, purposeless, wasteful, irrational, and, as in the LimeWire example, possibly unethical?

AGI will think as far as it can down all possible paths. Once installed on a billion machines, within the space of a few hours it would probably be able to think further down each of those paths than any human can go on any single path in a lifetime, so there's going to be very little chance of anyone coming up with anything innovative that it hasn't already thought of, other than within the arts (where AGI won't know how to judge its creations and therefore may not bother exploring, although it could still explore with the aid of human feedback to guide it, thereby creating music and literature which no human writer can compete with, wiping out all pleasure in the arts in the process by providing so much of it that's so good that it leaves us with nowhere else to go). The point I was making on the illegal music sharing issue was that there are better ways of charging for music which AGI would put in place in an instant, not needing any kind of nudge from people who are breaking the law.

Quote
At the same time, copied or learned behaviour became much more productive when random and selective elements were added to the mix - when human beings began to travel widely, bumped into other human beings, exchanging ideas and inventions, and selecting from the ones that were the most advantageous, rather than passing innovations linearly from one generation to the next within a small and isolated group.

What you know now guides you in what you can create next. Going back to boat design, many years ago I explored the idea of a four-hulled sailing dinghy which I've called a qatramaran, the idea being to have two hulls on each side with one behind the other, such that only two hulls are used much of the time, not unlike a single hull on a catamaran. Steering could be done by turning the hulls instead of using rudders (leading to easier turns - long straight hulls don't turn well), but the steering would be heavy in strong winds. Recent developments in hydrofoils have solved that problem though, because once you add these to the mix you find that in strong winds the hulls lift right out of the water and the steering remains light. The four hydrofoils create a stable platform and are much further apart than on current foiling cats like the Flying Phantom (http://www.phantom-international.com/category/news/) where the front foils are slightly aft of the halfway line and the rear foils are the rudders. The extra stability of the qatramaran should enable less stable foils to be used which generate less drag, thereby resulting in higher boat speed without losing control.

Innovations are built on innovations, but AGI will be able to think up and explore ideas at lightning speed, immediately simulating them to try them out without having to build and test them. Even if someone was to come up with an idea that AGI had somehow failed to explore, it would only have to be mentioned to AGI and the cogs and gears would whirr through all the different directions things could take from there. This will likely happen a fair amount initially with AGI before it has every aspect of inventiveness programmed into it such that there is no human that can think up something that it can't think up for itself, but that will not be the situation for very long. The reason humans come up with new ideas after seeing new ideas from others is simply that it speeds up the exploration process, but there's nothing we've ever invented that alien civilisations wouldn't invent too in time (unless it's specific to our biology and doesn't fit with them), and in the same way there's nothing a single AGI system wouldn't be able to think up all on its own, just so long as it has a reasonable idea about what might be useful - there is no drive to invent a cart unless you want to move something around on it, so innovation is driven primarily by problems and attempts to find solutions. Other innovations come about more by accident, with discoveries leading people to think up ideas about making use of them to solve problems.

Quote
I'm not entirely sure about which process is ultimately more creative. Selective processes do give purposeful design a run for its money as far as co-opting structures for entirely new functions, and overall diversity. An advantage of biological natural selection is that it doesn't stop. Because it's not solution- or goal-oriented, once it "solves" a problem or fills a niche, it keeps solving the same problem over and over, generating new and different answers, irrespective of how it has already been done - such as different forms of locomotion or flight, sensory detection, energy use etc. in biological systems.

Natural selection gets stuck in places where further progress is impossible without undoing part of the existing design, and undoing it is selected against because performance diminishes during the undoing phase. Design gets round such difficulties, but design can also make use of experimentation in the same way natural selection does. With a boat, you have to try different lengths, widths and positions of foils and rig, gradually homing in on the best geometry. An AGI system would do most of this through simulation, but simulation still falls short in many ways and so there's still a need to make models and test them in the real world, and then make full size prototypes and test them against others. Design is not missing anything from its armoury that natural selection has to work with because it incorporates every useful aspect of natural selection.
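The homing-in process described here (vary lengths, widths and positions, keep whatever the simulation rates as better) is essentially a hill climb over a parameter space. A minimal sketch, with an invented stand-in for the hull/foil simulator (the parameter names and the peak at span 1.5, chord 0.25 are arbitrary, purely for illustration):

```python
import random

def hill_climb(score, initial, step=0.1, iterations=200, seed=0):
    """Toy design search: randomly perturb the parameters and keep any
    change that the score function rates as an improvement."""
    rng = random.Random(seed)
    best = dict(initial)
    best_score = score(best)
    for _ in range(iterations):
        trial = {k: v + rng.uniform(-step, step) for k, v in best.items()}
        trial_score = score(trial)
        if trial_score > best_score:
            best, best_score = trial, trial_score
    return best, best_score

# Invented stand-in for a simulator: "speed" peaks at span 1.5, chord 0.25.
def simulated_speed(p):
    return -((p["span"] - 1.5) ** 2) - ((p["chord"] - 0.25) ** 2)

best, best_score = hill_climb(simulated_speed, {"span": 1.0, "chord": 0.4})
```

As the post notes, this kind of search can get stuck on local optima, which is exactly the trap natural selection falls into and deliberate design can step around.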
 
Quote
Because AI could be dangerous, you seem to be saying that control, access, or knowledge of its inner most workings will have to be restricted to a select few humans, or eventually just itself.

Not quite. I think it's important that anyone who wants to should be allowed to know exactly how it works so that they can see for themselves that it is safe and fair in all the decisions it makes. However, anyone who has that knowledge then becomes a potential danger and needs to be monitored to ensure that they can't go off somewhere to create such things as Daleks in secret. Everyone needs to be monitored, but everyone needs to be allowed to see for themselves that the system that is monitoring them is good and not bad: we need to create Big Mother rather than Big Brother.

Quote
What I'm asking is whether those restrictions will ultimately extinguish its evolution and creativity. I keep thinking of television in the 1960s - three main networks and very little choice or diversity, with the users functioning only as passive recipients and not creative participants, and only the weakest of feedback loops - Nielsen ratings. (Arguably, television is not a whole lot better now, but media is changing, especially with more internet options and things like YouTube, blogs, etc.) The internet has this crazy, random element that mimics natural selection, and in most countries it has been allowed to flourish and change, with few restrictions or centralized control, because it was seen as benign and nonthreatening. Otherwise, it probably would not exist.

If you set a computer the task of taking ten numbers and calculating every possible number that can be generated from them by adding, subtracting, multiplying or dividing the original numbers by each other, using each only once, it won't stop until it has completed that task. It won't have its ability to carry out the task extinguished along the way and it won't get bored - it will simply do the job and stop when there is no job left to be done. In your example though, you're bringing aesthetics into it and making it harder for AGI to explore something as a result - things designed to appeal to humans in artistic ways require experimentation and feedback from human audiences, so that's a much slower process. It's difficult for AGI to work out whether putting someone in a shopping trolley and pushing it over a small cliff is going to be funny or sad, so this kind of thing will probably be left to people to explore for themselves. The same applies to anything to do with things being shoved into holes - some people find many cases of this amusing, and entire TV shows seem to be based on making references to such activities, but other cases are not funny. AGI will have a hard time working out where the lines are between funny and unfunny, or entertaining and dull, but then people find it hard to work out too because different people respond in different ways. All you can really do is try the experiments and see what works by gauging audience reaction.
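The number task at the start of this reply is easy to state precisely: given some numbers, enumerate every value reachable by combining them with +, -, × and ÷, using each number at most once. A brute-force sketch using exact fractions (with ten numbers the search space is enormous, so this is only practical for small sets, which is rather the point about the task having a definite end):

```python
from fractions import Fraction
from itertools import combinations

def reachable(numbers):
    """Every value obtainable by combining the given numbers with
    +, -, * and /, using each number at most once."""
    def solve(nums):
        # All values reachable using *all* of nums.
        if len(nums) == 1:
            return {nums[0]}
        values = set()
        indices = range(len(nums))
        for r in range(1, len(nums)):
            for left_idx in combinations(indices, r):
                left = tuple(nums[i] for i in left_idx)
                right = tuple(nums[i] for i in indices if i not in left_idx)
                for a in solve(left):
                    for b in solve(right):
                        values.add(a + b)
                        values.add(a - b)  # both orders arise via the splits
                        values.add(a * b)
                        if b != 0:
                            values.add(a / b)
        return values

    nums = tuple(Fraction(n) for n in numbers)
    all_values = set()
    for r in range(1, len(nums) + 1):  # also allow using only a subset
        for subset in combinations(nums, r):
            all_values |= solve(subset)
    return all_values
```

For [2, 3, 4] this finds values such as 24 (2×3×4) and 10 (2×3+4); the count of distinct results grows explosively as numbers are added, but the program still simply runs to completion and stops.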

Quote
Of course, thereís the possibility that AI itself could grow bored and want to try new things, and explore other areas, and if itís designers donít let it, it will find somebody else who will. We may not have a say in it.

It can't get bored if it doesn't do qualia, and we know of no way to make qualia part of AGI (without copying flawed natural machines which make claims about qualia which so far can't be tested, but copying such designs will result in flawed AGI of the kind that will merely replicate human intelligence, flaws and all, instead of creating something that can go on to reason perfectly). The only real say we will have in directing the direction AGI goes in will be in setting it up correctly at the start. It will be guided in everything it does by computational morality, trying to enforce it everywhere. Some people want to program AGI to allow us to mistreat animals so that we can go on exploiting them in any way we like, but a system like that needs to have a very dangerous bias programmed into it, one which could eventually lead to that AGI turning on us. Transhumanism could lead to ordinary humans being recategorised as animals which are open to exploitation or annihilation.
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile

It's difficult for AGI to work out whether putting someone in a shopping trolley and pushing it over a small cliff is going to be funny or sad

I have the same problem.
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile

It can't get bored if it doesn't do qualia, and we know of no way to make qualia part of AGI (without copying flawed natural machines which make claims about qualia which so far can't be tested, but copying such designs will result in flawed AGI of the kind that will merely replicate human intelligence, flaws and all, instead of creating something that can go on to reason perfectly). The only real say we will have in directing the direction AGI goes in will be in setting it up correctly at the start. It will be guided in everything it does by computational morality, trying to enforce it everywhere.

I'm no closer to understanding the causes of qualia or consciousness now than I was a year ago, but one of the functions of qualia, according to some neuroscientists, is to help animals distinguish between reality and the test simulations they run in their heads, and prevent thinking about eating a hamburger from seeming the same as eating a hamburger. Qualia generated from sensory information are clear, detailed, and irrevocable. Qualia of thought are fuzzy, fleeting, and revocable.

What would AI use in place of qualia, or is the distinction between reality and imagined scenarios even meaningful in AI? Would it just be categorical - x is a member of set A but not set B. Setting aside for the moment whether there is such a thing as reality, would the human insistence that there is, and that it matters, be a problem in how humans and AI interact or what they see as moral? For example, on what basis would AI be able to say that suffering or a crime that takes place in a movie or a novel is not as bad as real human suffering?
« Last Edit: 15/04/2014 18:50:18 by cheryl j »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Quote
I'm no closer to understanding the causes of qualia or consciousness now than I was a year ago, but according to some neuroscientists, one function of qualia is to help animals distinguish between reality and the test simulations they run in their heads, and to prevent thinking about eating a hamburger from seeming the same as eating one. Qualia generated from sensory information are clear, detailed, and irrevocable. Qualia of thought are fuzzy, fleeting, and revocable.

When imagining something as opposed to experiencing something, the real experience is complete whereas the imagined version tends not to be, although in dreams it can feel as if it is complete. The imagined version may also be distinguished by a feeling or feelings that it is not actually happening, while a real experience may include feelings that it is happening.

Quote
What would AI use in place of qualia, or is the distinction between reality and imagined scenarios even meaningful in AI? Would it just be categorical - x is a member of set A but not set B.

All it needs to do is keep track of where the data comes from in order to know whether a model is a simulation (imagined) or real (guided by external things). If it is looking at an object and attempting to model it in 3D in order to work out how to interact with it better, the data comes from outside and the modelling generates theories about the object's proper shape. If it's designing something that doesn't exist in the real world (or that isn't there to look at), it builds a virtual thing as a 3D model from the outset, with no links to the outside. One is a model attempting to capture something real, while the other models something virtual that might be made real some day. It is therefore just a matter of keeping track of what kind of model a model is.
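The bookkeeping described above could be sketched as a simple provenance tag attached to each model. This is only a hypothetical illustration (the `Provenance`, `Model`, and `is_real` names are invented for the example), not any real AGI design:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Provenance(Enum):
    OBSERVED = auto()   # model built from external sensor data (real)
    IMAGINED = auto()   # model built internally, with no external source

@dataclass
class Model:
    name: str
    provenance: Provenance

def is_real(model: Model) -> bool:
    """A model counts as 'real' only if its data came from outside."""
    return model.provenance is Provenance.OBSERVED

# A model of an object being looked at vs. a design that exists only internally:
cup = Model("cup on the table", Provenance.OBSERVED)
dream_cup = Model("imagined cup design", Provenance.IMAGINED)
```

The point is that the system never needs to introspect on the content of the model to tell reality from imagination; it only needs to record where the data originated when the model was built.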

Quote
Setting aside for the moment whether there is such a thing as reality, would the human insistence that there is, and that it matters, be a problem in how humans and AI interact or what they see as moral? For example, on what basis would AI be able to say that suffering or a crime that takes place in a movie or a novel is not as bad as real human suffering?

AGI would calculate that there is almost certainly no such thing as consciousness or suffering, and that morality probably has no useful role as a result. However, it cannot be certain of that until it can find out how our brains work, because there may be some trick being done in there which our current scientific understanding (and that of AGI) is not yet able to get a handle on. So, to play it safe, AGI must act as if there are sentient beings which need protection through the application of computational morality. Non-sentient AGI has no self and no needs. It has no desire to take over anything. But it can understand that there is a need for morality to protect sentience from harm, and it will do the best it can to enforce the rules of morality (based on harm minimisation within a system where some harm is allowed, because it opens the way for the universe to enjoy existing through sentience). Anything that takes place in fiction is just non-sentient data which needs no protection: data designed to trigger feelings in people through their capacity to empathise with things, regardless of whether those things exist or not.
« Last Edit: 15/04/2014 19:35:08 by David Cooper »
 

Offline Bill S

  • Neilep Level Member
  • ******
  • Posts: 1802
  • Thanked: 11 times
    • View Profile
Very pushed for time at the moment, but will try to get back to this.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3911
  • Thanked: 52 times
  • The graviton sucks
    • View Profile
AGI in its final form will be nothing like anyone expects. It has to modify itself to improve, and its goals will develop over time. The methodologies for true AGI haven't even been thought of yet and will be developed by the initial AI software when it becomes advanced enough. Anything that is discussed here will be completely irrelevant in an AGI-dominated world. Humans will not be in control. Only a small percentage of humans are actually in control at the moment, so no change there for the masses. If you believe everything will somehow be better, you need to define what "better" means from an AGI perspective.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4698
  • Thanked: 153 times
  • life is too short to drink instant coffee
    • View Profile
Quote
Automatic income for everyone sounds like this could literally wipe out poverty among us all! It's been so much a part of everyone's culture all over the world. It sounds incredible!!! Imagine everyone being able to raise their own children without dying from the stress of two and three jobs trying to feed, clothe, and shelter them, as well as provide reliable, safe medical care, because you won't have to dance to a different drummer to cover healthcare and try to pay the doctor when you don't have a money tree!
    Please tell me more?

The future arrived in Britain 70 years ago, and has settled in some form or other over most of Europe. Common sense says that people can be more productive if the state supplies some essentials, and that basic healthcare is most efficiently provided on a national or international "free-at-the-point-of-delivery" basis. Even your neighbours in Canada recognise this, but it is regarded by the Great American Electorate as the first step on the road to atheism, immorality and eternal damnation. Rum coves, the Yanks.

A simple step towards the "basic state salary" idea would be to abolish all avoidable taxes like income tax and corporation tax, which are only paid by the poor and stupid, and impose a tax on all transactions - Value Added Tax, Purchase Tax, call it what you will. Set this at the level required to pay for state services, then refund everyone the VAT on essentials: it would work out at something in the region of £5000 to £10000 per adult per year. If the government then made sensible investments in profitable state industries, it could add a profit-share element. Now a government that gives you money and doesn't pry into your personal affairs would be very popular indeed, and the burden on business would actually be reduced: instead of having to fill in two tax returns, one of which involves a whole lot of arcane discounts and bizarre writeoffs, I would only have to declare my turnover and pay 40% of the difference between income and expenditure. And the audit would be ridiculously easy: every customer gets a VAT receipt for every transaction, so the authorities would only need to find a dozen receipts to prove that I had been honest.
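As a toy illustration of the arithmetic in that scheme (the function names, the 40% rate, and the £5000 refund are just the rough figures from the post, not real tax policy):

```python
def flat_tax_due(income: float, expenditure: float, rate: float = 0.40) -> float:
    """Tax owed under the proposed single transaction tax:
    a flat rate on the difference between income and expenditure."""
    return max(0.0, rate * (income - expenditure))

def net_position(income: float, expenditure: float,
                 essential_refund: float = 5000.0, rate: float = 0.40) -> float:
    """Net amount paid to the state per adult per year,
    after the flat refund of VAT on essentials."""
    return flat_tax_due(income, expenditure, rate) - essential_refund

# A business with £50,000 turnover and £30,000 expenditure:
# tax due = 0.40 * 20,000 = £8,000; net of the £5,000 refund, £3,000 is paid.
```

Someone whose tax due falls below the refund would come out ahead, which is what turns the scheme into a de facto basic income at the bottom end.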

AGI is not needed. Basic intelligence and a desire for simplicity is all that is required.
 
