Do you really believe that people will be turning their children over to machines and robots to be raised, nourished, cared for, etc.? Where is the LOVE and TENDERNESS in that? Where is the HUMAN COMPASSION and NATURE...
Furthermore, what about people who find fulfilment in working, or in working with others? There is an innate need in some people to work and to feel the satisfaction of a job well done or completed. What about the need to feel wanted and needed by others, or the need to accomplish or make great things with their hands...
or to take pride in teaching their own children morals, the golden rule, or whatever any particular parent's ideals might be. That's what makes humans individuals: family values, work ethic, and working together to help a child grow into an independent, productive human being.
I am home 24/7, not because I want to be; I would much rather be teaching 5 days a week. But I am in disagreement with David: wanting to work most certainly isn't a mental problem. It's the same as him designing his yacht...
My joy and pleasure comes from teaching and watching children grow. In no way, shape, or form would I ever stop them from exploring new ideas, provided there is a safety net to keep them from being mentally or physically harmed! I find myself trying to find things to do to busy my hands, head, brain... body... I would go stir crazy...
David, you have some well thought out ideas as well as some rather over the top ones, but it's good to see another view like the one you have displayed. Thank you...
How much AI becomes part of people's lives will depend greatly on trust. We don't even like the idea of the NSA compiling metadata, or of corporations keeping track of and anticipating our online purchases, which is why I think monitoring devices in teddy bears connected to Child Protective Services are not going to fly.
AI also won't work if the people designing it are unwilling to trust those using and interacting with it to do so responsibly, or even to tolerate that they probably won't at times. Or to accept that sometimes frivolous or even "irresponsible" rogue behaviour can, in the end, be beneficial.
I doubt, for example, that the music industry or other media would have ever developed things like iTunes or Netflix if illegal ventures like Limewire hadn't come along first. Stealing and violating copyright was wrong, but now one can find the work of even the most obscure band, and a lot of artists can reach audiences and make a living in a way they couldn't 40 years ago, and it's all pretty much because somebody did something with technology that they weren't authorized to do.
I agree that the social transformation through AI that David describes could be liberating, but AI also sounds like it has the potential to be an Orwellian nightmare. More likely, though, it will suffocate its own creativity because of control and trust issues. I hate to sound so pessimistic.
The people who designed the earliest forms of the internet or file sharing probably didn't think they would be used by teenagers to copy and share music. Kids were doing something unethical, not maliciously, but nevertheless cheating artists out of compensation for their work. When it became widespread, human beings looked at the issue and said: okay, is there a better, fairer, win-win way to do this? Would AI do this as well, or would it initially stop activity at a lower level - "your action is unethical; access denied"?
It's hackers who find the defects in security systems, which lead to improvements, not their designers. I agree that in the wrong hands, AI could be disastrous, and yet if you don't allow AI to undergo an open natural-selection process, it will wither on the vine.
There isn't really a problem there. AGI would have arranged a system by which people would either pay to hear music or else be unable to hear it through any device controlled by AGI. The system still isn't right today, but AGI will fix it. If you only listen to a piece of music a few times and then get bored with it, should you pay as much to own a copy of it as for a piece of music which you listen to hundreds of times? Artists should be rewarded in proportion to how often a track is played, and the first playing should be free. AGI will make that happen, but it will also make music available to the poor, within the law, by not requiring them to pay for it until they can afford to - no one should be shut out of cultural things for lack of money if it doesn't actually cost anything to give them that access. If they go on to earn lots of money later in life, they can then pay up as soon as they can reasonably afford to do so.
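The pay-per-play scheme described above is simple enough to sketch in code. This is only an illustration of the idea - the function names, the per-play rate, and the play-log format are all assumptions of mine, not anything from the post:

```python
def track_royalties(play_log, rate_per_play=0.01):
    """Bill each listener under the proposed scheme: the first play of
    a track is free, and every subsequent play accrues a fixed per-play
    royalty for the artist. Rate and data shapes are illustrative."""
    # Count plays per (listener, track) pair.
    play_counts = {}
    for listener, track in play_log:
        counts = play_counts.setdefault(listener, {})
        counts[track] = counts.get(track, 0) + 1
    # Charge for every play after the first of each track.
    bills = {}
    for listener, counts in play_counts.items():
        bills[listener] = sum(max(n - 1, 0) * rate_per_play
                              for n in counts.values())
    return bills

log = [("ann", "song_a"), ("ann", "song_a"), ("ann", "song_a"),
       ("bob", "song_a")]
print(track_royalties(log))  # ann pays for two extra plays; bob's single play is free
```

Deferred payment for those who can't yet afford it would just be a matter of accruing the bill rather than collecting it.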
Natural selection and AGI should not be mixed - you don't know what kind of monsters you could evolve through that. Safe AGI needs to be 100% deliberately designed.
It's not that particular issue that I'm interested in. I was just using music file sharing to illustrate a point and ask a question. If life is like a chess game, how many steps ahead will AI be able to see? Will it falter by blocking or stifling individual human actions deemed unnecessary, purposeless, wasteful, irrational, and, as in the Limewire example, possibly unethical?
At the same time, copied or learned behaviour became much more productive when random and selective elements were added to the mix - when human beings began to travel widely, bumping into other human beings, exchanging ideas and inventions, and selecting the ones that were most advantageous, rather than passing innovations linearly from one generation to the next within a small and isolated group.
I'm not entirely sure which process is ultimately more creative. Selective processes do give purposeful design a run for its money as far as co-opting structures for entirely new functions and overall diversity. An advantage of biological natural selection is that it doesn't stop. Because it's not solution- or goal-oriented, once it "solves" a problem or fills a niche, it keeps solving the same problem over and over, generating new and different answers, irrespective of how it has already been done - such as the different forms of locomotion, flight, sensory detection, energy use, etc. in biological systems.
Because AI could be dangerous, you seem to be saying that control, access, or knowledge of its innermost workings will have to be restricted to a select few humans, or eventually just to itself.
What I'm asking is whether those restrictions will ultimately extinguish its evolution and creativity. I keep thinking of television in the 1960s - three main networks and very little choice or diversity, with viewers functioning only as passive recipients, not creative participants, and only the weakest of feedback loops: Nielsen ratings. (Arguably, television is not a whole lot better now, but media is changing, especially with more internet options and things like YouTube, blogs, etc.) The internet has this crazy, random element that mimics natural selection, and in most countries it has been allowed to flourish and change with few restrictions or centralized control, because it was seen as benign and nonthreatening. Otherwise, it probably would not exist.
Of course, there's the possibility that AI itself could grow bored, want to try new things and explore other areas, and if its designers don't let it, it will find somebody else who will. We may not have a say in it.
It's difficult for AGI to work out whether putting someone in a shopping trolley and pushing it over a small cliff is going to be funny or sad.
It can't get bored if it doesn't do qualia, and we know of no way to make qualia part of AGI (short of copying flawed natural machines which make claims about qualia that so far can't be tested - but copying such designs would result in flawed AGI that merely replicates human intelligence, flaws and all, instead of something that can go on to reason perfectly). The only real say we will have in the direction AGI takes will be in setting it up correctly at the start. It will be guided in everything it does by computational morality, which it will try to enforce everywhere.
I'm no closer to understanding the causes of qualia or consciousness now than I was a year ago, but one of the functions of qualia, according to some neuroscientists, is to help animals distinguish between reality and the test simulations they run in their heads - to prevent thinking about eating a hamburger from seeming the same as actually eating one. Qualia generated from sensory information are clear, detailed, and irrevocable. Qualia of thought are fuzzy, fleeting, and revocable.
What would AI use in place of qualia, or is the distinction between reality and imagined scenarios even meaningful in AI? Would it just be categorical - x is a member of set A but not set B?
Setting aside for the moment whether there is such a thing as reality, would the human insistence that there is, and that it matters, be a problem in how humans and AI interact, or in what they see as moral? For example, on what basis would AI be able to say that suffering or a crime that takes place in a movie or a novel is not as bad as real human suffering?
Automatic income for everyone... it sounds like this could literally wipe out poverty among us all! Poverty has been so much a part of every culture all over the world. It sounds incredible!!! Imagine everyone being able to raise their own children without dying from the stress of two and three jobs trying to feed, clothe, and shelter them, as well as provide reliable, safe medical care - because you won't have to dance to a different drummer to cover healthcare and try to pay the doctor when you don't have a money tree! Please tell me more?
Quote from: Karen W. on 03/04/2014 01:26:39
Automatic income for everyone... Please tell me more?

The future arrived in Britain 70 years ago, and has settled in some form or other over most of Europe. Common sense says that people can be more productive if the state supplies some essentials, and that basic healthcare is most efficiently provided on a national or international "free-at-the-point-of-delivery" basis. Even your neighbours in Canada recognise this, but it is regarded by the Great American Electorate as the first step on the road to atheism, immorality and eternal damnation. Rum coves, the Yanks.

A simple step towards the "basic state salary" idea would be to abolish all avoidable taxes like income tax and corporation tax, which are only paid by the poor and stupid, and impose a tax on all transactions - Value Added Tax, Purchase Tax, call it what you will. Set this at the level required to pay for state services, then refund everyone the VAT on essentials: it would work out at something in the region of £5000 to $10000 per adult per year. If the government then made sensible investments in profitable state industries, it could add a profit-share element.
Now a government that gives you money and doesn't pry into your personal affairs would be very popular indeed, and the burden on business would actually be reduced: instead of having to fill in two tax returns, one of which involves a whole lot of arcane discounts and bizarre write-offs, I would only have to declare my turnover and pay 40% of the difference between income and expenditure. And the audit would be ridiculously easy: every customer gets a VAT receipt for every transaction, so the authorities would only need to find a dozen receipts to prove that I had been honest. AGI is not needed. Basic intelligence and a desire for simplicity is all that is required.
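The arithmetic of that single-return scheme can be sketched in a few lines. The 40% rate is the one given above; the function names and the flat-refund figure are illustrative assumptions, not part of the proposal:

```python
VAT_RATE = 0.40           # the rate suggested in the post
ESSENTIALS_REFUND = 5000  # flat per-adult refund; figure is illustrative

def business_tax(income, expenditure, rate=VAT_RATE):
    """The entire business return under the proposed scheme: declare
    turnover and pay the rate on the value added, i.e. income minus
    expenditure (floored at zero for a loss-making year)."""
    value_added = max(income - expenditure, 0.0)
    return rate * value_added

def net_vat_burden(vat_paid_on_all_purchases, refund=ESSENTIALS_REFUND):
    """An adult's net position once the state refunds the VAT on
    essentials as a flat annual payment."""
    return vat_paid_on_all_purchases - refund

print(business_tax(100_000, 60_000))  # 40% of the 40,000 value added
print(net_vat_burden(8_000))          # VAT paid minus the flat refund
```

The audit the post describes falls out naturally: a tax authority need only cross-check a sample of VAT receipts against the declared turnover, rather than untangle deductions and write-offs.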