Do you really believe that people will be turning their children over to machines and robots to be raised, nourished, cared for, etc.? Where is the LOVE and TENDERNESS in that? Where is the HUMAN COMPASSION and NATURE...
Furthermore, what about people who find fulfilment in working, or in working with others? There is an innate need in some people to work and to feel the satisfaction of a job well done or completed. What about the need to feel wanted and needed by others, or the need to accomplish or make great things with their hands...
...or to take pride in teaching their own children morals, the golden rule, or whatever any particular parent's ideals might be? That's what makes humans individuals: family values, work ethic, and working together to help a child grow into an independent, productive human being.
I am home 24/7, not because I want to be; I would much rather be teaching five days a week. But I disagree with David: wanting to work most certainly isn't a mental problem. It's the same as him designing his yacht...
My joy and pleasure come from teaching and watching children grow. In no way, shape, or form would I ever stop children from exploring new ideas, provided there was a safety net to keep them from being mentally or physically harmed! I find myself trying to find things to do to busy my hands, brain, and body... I would go stir crazy...
David, you have some well-thought-out ideas as well as some rather over-the-top ones, but it's good to see another view like the one you have displayed. Thank you...
How much AI becomes part of people's lives will depend greatly on trust. We don't even like the idea of the NSA compiling metadata, or of corporations tracking and anticipating our online purchases, which is why I think monitoring devices in teddy bears connected to Child Protective Services are not going to fly.
AI also won't work if the people designing it are unwilling to trust those using and interacting with it to do so responsibly, to tolerate that they probably won't at times, or to accept that sometimes frivolous or even "irresponsible" rogue behaviour can, in the end, be beneficial.
I doubt, for example, that the music industry or other media would ever have developed things like iTunes or Netflix if illegal ventures like LimeWire hadn't come along first. Stealing and violating copyright was wrong, but now one can find the work of even the most obscure band, and a lot of artists can reach audiences and make a living in a way they couldn't 40 years ago, and it's all pretty much because somebody did something with technology that they weren't authorized to do.
I agree that the social transformation through AI that David describes could be liberating, but AI also sounds like it has the potential to be an Orwellian nightmare. More likely, though, it will suffocate its own creativity because of control and trust issues. I hate to sound so pessimistic.
The people who designed the earliest forms of the internet or file sharing probably didn't think they would be used by teenagers to copy and share music. Kids were doing something unethical, not maliciously, but nevertheless cheating artists out of compensation for their work. When it became widespread, human beings looked at the issue and asked: okay, is there a better, fairer, win-win way to do this? Would AI do this as well, or would it initially stop the activity at a lower level: "your action is unethical; access denied"?
It's hackers, not their designers, who find the defects in security systems that lead to improvements. I agree that in the wrong hands AI could be disastrous, and yet if you don't allow AI to undergo an open natural-selection process, it will wither on the vine.
There isn't really a problem there. AGI would have arranged a system by which people would either pay to hear music or be unable to hear it through any device controlled by AGI. The system still isn't right today, but AGI will fix it. If you only listen to a piece of music a few times and then get bored with it, should you pay as much to own a copy of it as for a piece of music you listen to hundreds of times? Artists should be rewarded in proportion to how often a track is played, and the first playing should be free. AGI will make that happen, but it will also make music available to the poor, within the law, by not requiring them to pay until they can afford to. No one should be shut out of cultural things for lack of money if it doesn't actually cost anything to give them that access. If they go on to earn lots of money later in life, they can pay up as soon as they can reasonably afford to do so.
Natural selection and AGI should not be mixed - you don't know what kind of monsters you could evolve through that. Safe AGI needs to be 100% deliberately designed.
It’s not that particular issue that I’m interested in. I was just using music file sharing to illustrate a point and ask a question. If life is like a chess game, how many steps ahead will AI be able to see? Will it falter by blocking or stifling individual human actions deemed unnecessary, purposeless, wasteful, irrational, and, as in the Limewire example, possibly unethical?
At the same time, copied or learned behaviour became much more productive when random and selective elements were added to the mix: when human beings began to travel widely, bumped into other human beings, exchanged ideas and inventions, and selected the most advantageous ones, rather than passing innovations linearly from one generation to the next within a small and isolated group.
I'm not entirely sure which process is ultimately more creative. Selective processes do give purposeful design a run for its money when it comes to co-opting structures for entirely new functions, and in overall diversity. An advantage of biological natural selection is that it doesn't stop. Because it's not solution- or goal-oriented, once it "solves" a problem or fills a niche, it keeps solving the same problem over and over, generating new and different answers irrespective of how it has already been done, such as the different forms of locomotion, flight, sensory detection, energy use, etc., found in biological systems.
Because AI could be dangerous, you seem to be saying that control, access, or knowledge of its innermost workings will have to be restricted to a select few humans, or eventually just to itself.
What I'm asking is whether those restrictions will ultimately extinguish its evolution and creativity. I keep thinking of television in the 1960s: three main networks and very little choice or diversity, with the users functioning only as passive recipients rather than creative participants, and only the weakest of feedback loops, the Nielsen ratings. (Arguably, television is not a whole lot better now, but media is changing, especially with more internet options and things like YouTube, blogs, etc.) The internet has this crazy, random element that mimics natural selection, and in most countries it has been allowed to flourish and change with few restrictions or centralized control, because it was seen as benign and nonthreatening. Otherwise, it probably would not exist.
Of course, there's the possibility that AI itself could grow bored, want to try new things and explore other areas, and if its designers don't let it, it will find somebody else who will. We may not have a say in it.
It's difficult for AGI to work out whether putting someone in a shopping trolley and pushing it over a small cliff is going to be funny or sad.