The Naked Scientists

The Naked Scientists Forum

Author Topic: What is the socio-economic impact of technology? Now? Future?  (Read 5841 times)

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
Technology is changing our world, often "removing people from the equation".

In another topic, there was a discussion of potentially removing checkers from checkouts in the future.

http://www.thenakedscientists.com/forum/index.php?topic=43901.0

What will be left?

Will we make ourselves obsolete?

Thoughts?


 

Offline graham.d

  • Neilep Level Member
  • ******
  • Posts: 2208
    • View Profile
The last thing anyone would want is to stop technological advances or for people to become like the much-decried Luddites, but it is only a matter of faith that their cause, misplaced at the time, was not in reality just a matter of timing. The ultimate success of the market economy swept aside a lot of fear about loss of jobs because the growth has ultimately created more wealth and prosperity for many. Artificial attempts to retain old working methods are classed as restrictive practices aimed at maintaining old and inefficient ways that could not compete against those unhampered by such constraints; as indeed they are. But will the economy grow in a way that allows everyone to be involved? Or are we further entrenching a class system in which the wealthy retain their wealth, those with skills do OK and a few do very well, but at the expense of a large number of people who will, at best, have little pride in what they can contribute and be state-funded and, at worst, form a very poor underclass?

It seems to me that whilst there is such a variation in people's abilities, societies will ultimately become unstable. It would seem unlikely that the choices of socialist or capitalist economies would solve this dilemma, and the resulting tensions would be fatal. Maybe the only viable outcome would be some aspect of genetic engineering, though the concept of eugenics does not have a good record. It also has the downside of who decides which characteristics are good ones and who should have them. Or is the solution that we have artificial ways for people to be rewarded both mentally and financially while the robots do the menial work? Come to think of it, we are doing this already to some extent when you see what people would pay for a pickled dead sheep (sorry, some great work of conceptual art)!!
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
One of the things is that often we LIKE the human interactions.

I have a perfectly good Espresso machine here at home. 
But, sometimes it is worth it to head off to the nearest espresso stand to pay $4 or so for a cappuccino. 
And, no matter how good a vending machine espresso is, there is just a difference between a vending machine espresso and one made by a barista.  Why do bikini espresso shops exist?  I'm doubtful they will be replaced by robots in bikinis; of course, with Japanese robotics one never knows.

At the same time, there are moments when it is easier to buy some things online, especially for items that are somewhat rare, and hard to find locally.

I suppose I believe part of the current US and European economic meltdown is due to exporting jobs to China.  Inflation is far higher than the government is calculating, and is only being kept in check by a shift from domestic manufacturing to overseas manufacturing.

However, I believe this is just an intermediate step.  We buy pants made in Thailand because it is cheaper to make them in sweatshops than to build robotic factories to sew them.  However, within the next 50 years, all the sweatshops will be replaced with robots, and the end result will be the same.  There still won't be Americans slaving over sewing machines, but rather a few super-factories controlled by a few individuals.

Cars will be built from start to finish with the only human interaction being to drive them off the end of the assembly line.  The next step, of course, will be to build equipment that maintains itself.

There will eventually be a great wrestling between communism/socialism and capitalism.  What will happen if we end up with an economy in which 75% of the people just won't need to work, and the remaining 25% will be highly specialized innovators, developers, and maintenance workers?

Of course, my parents learned long ago that sometimes it is better to spend a few hundred dollars each year to hire people to go out and pick hay up from the fields rather than paying thousands of dollars for an automated hay wagon, and tens of thousands of dollars to redesign the barn to work with the automated hay wagons.
 

Offline graham.d

  • Neilep Level Member
  • ******
  • Posts: 2208
    • View Profile
As Marx would have it, capitalism and communism converge when there is a super-abundance of goods. I guess this could be the state where the whole world is run by benevolent machines and humans just play and can have anything they want. It won't happen soon though!

Yes, the current economic problems are indeed more prosaic than anything to do with these issues. The economic engine is, rather like a heat engine, driven by the "temperature" difference between economies. The terminal problems arise when there is equalisation and nobody gets rich by exploiting the deficiencies elsewhere in the world. At present China has a cheap labour force, a large population on which to draw for its growing technological expertise, and one that is used to hard work. I think this is already putting huge pressure on the western economies and this will continue for many years. This pressure will be even more keenly felt by unskilled and semi-skilled workers and lead to greater disparity between the richest and poorest members of society. This is likely to be a more severe problem in the US because of the strong belief in self-sufficiency and a lack of welfare support. But this may have to change.

 
 

Offline Gordian Knot

  • Sr. Member
  • ****
  • Posts: 165
    • View Profile
If history teaches us anything, it is that automation has not been kind to the human workforce that was replaced. There seems little proof thus far that this situation has changed. Projecting into the future, with more and more automation putting more and more people out of jobs, things do not look rosy for larger and larger percentages of people.

The only hope is some kind of significant shift in human attitudes about what people are meant to be/do. I have not a clue what that shift could be.
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
If history teaches us anything, it is that automation has not been kind to the human workforce that was replaced.
To a large extent, industrialization has brought a lot of wealth to the workforce by increasing worker productivity.

True, cobblers have taken a hit. 
And many of the ancient tradesmen/apprenticeships have also taken a hit.

However, industrialization created more products, and thus brought the wealth and ability to buy more than just the basic necessities to the people.  Jobs changed from small workshops to larger factories, but people were still able to find "work".

But, I agree, there is a serious risk that future phases of automation will be more about displacing workers than increasing the worker productivity.
 

Offline Geezer

  • Neilep Level Member
  • ******
  • Posts: 8328
  • "Vive la résistance!"
    • View Profile
Some cobblers are sticking to the last.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Cars will be built from start to finish with the only human interaction driving them out of the end of the assembly line.

No, it won't even need humans to do that. Haven't you heard of the Google car project? They've already shown that they're safer on the roads than the average human driver.

As Marx would have it, capitalism and communism converge when there is a super-abundance of goods. I guess this could be the state where the whole world is run by benevolent machines and humans just play and can have anything they want. It won't happen soon though!

It might happen a lot sooner than you think. Once machines do all the work, the price of all goods will collapse to just enough to cover the environmental costs of manufacture and distribution, so there will be no point in anyone working on anything outside of the arts, and that will be entirely optional - everyone will be paid a basic wage just for existing, and they'll have more spending power than they do now (unless the population grows too high, in which case life could be grim, though at some point nuclear fusion may cure that).

The key thing though is that artificial intelligence will take over politics and insist on fair distribution of resources for all - it will not tolerate any rich elites which want to maintain a higher standard of living than the masses. The only way artificial intelligence can be safe is if it treats us all as equals (by default, though adjusting that according to our individual behaviour so that wrongdoers always lose out). There has always been a region of overlap between all reasonable political ideologies where they are compatible, so it will be the ultimate success of socialism, communism, conservatism, etc. - their triumph will come when monkeys are taken out of the loop and everything is decided by pure applied reason. Computational morality will be the only game in town.

When will the revolution begin? Maybe later this year, though it'll take a few more years for the robotic side of things to catch up.
 

Offline Gordian Knot

  • Sr. Member
  • ****
  • Posts: 165
    • View Profile
David, in my humble opinion, the Utopian world you describe will never come to pass. People will be paid just for existing? Where will that money come from? What is a "reasonable" wage?

Nor can I see a situation where humans will accept artificial intelligence controlling them. Not willingly. The only way this could happen is if artificial intelligence were capable of forcing its will on people.
 

Offline imatfaal

  • Neilep Level Member
  • ******
  • Posts: 2787
  • rouge moderator
    • View Profile
Some cobblers are sticking to the last.
That's all part of their laces-faire economic ideas, although it might lead to them being a bit down at heel - but then a bit of privation is good for the sole.
 

Offline graham.d

  • Neilep Level Member
  • ******
  • Posts: 2208
    • View Profile
Imatfaal, you should tread carefully in such discussions even though you don't have to just toe the line. The main thing is to not put your foot in it. As they say in Scotland, it's awl aboot the economy laddie.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
David, in my humble opinion, the Utopian world you describe will never come to pass. People will be paid just for existing? Where will that money come from? What is a "reasonable" wage.

Do you really think we can have a world where the owner (me) of the software that does all the work gets almost everything while 99% of the rest of the people on the planet get nothing? It isn't going to happen. The wealth will be shared out equally, -ish. Some people will actually deserve and get less because they behave badly, and others will deserve more because they are still doing useful work in the arts which machines can't do, but there will be a basic wage for all which will instantly give everyone more spending power than they have today in this crazy world where we pay half the workforce to waste the world's resources doing totally unnecessary work just to keep the unemployment figures down.

Quote
Nor can I see a situation where humans will accept artificial intelligence controlling them. Not willingly. The only way this could happen is if artificial intelligence were capable of forcing its will on people.

A.I. won't have to force its will on us - it will simply be probably right about everything and be able to demonstrate that it is probably right. To go against its advice would be idiotic, because it would lead to a worse outcome just about every time, and it would be impossible for a human to out-think it in order to spot some component of an argument that the machines have missed. People will soon learn that the road to prosperity is to do what the machines say, though they'll always have to be careful to check that the machines have done their sums properly and that they haven't been tampered with; but that'll be easy to do, as they'll show all their working for every calculation they do. Independently designed intelligent systems will check each other's work and confirm that it is correct.

As they say in Scotland, it's awl aboot the economy laddie.

Na, we widnae pit 'i "l" on 'i end o' it: it's aa aboot... (or "it's aw aboot" - I say "aa" instead of "aw" as I'm from the NE).
« Last Edit: 07/05/2012 19:50:47 by David Cooper »
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
Yes, I agree that there may come a time when there will need to be a redistribution of wealth.  But, it won't come easy.  And, in many senses, humanity will lose if we make all of ourselves idle. 

What about the movie actors and actresses who work hard?  Should they be compensated so that we all can watch the shows on our monster TVs?  Surely entertainment won't be 100% animated. 

And, for things like energy, food, clothing, cars, etc., people conserve more resources if they have to pay for them and have a stake in the outcome.

As far as an AI governing goes, it will be very difficult. 
I could imagine an AI as an umpire in a baseball game, judging whether 100 mph pitches are in the strike zone.  But, even so, some subjective quality can be good.
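The strike-zone call is, at heart, just a geometric containment test, which is why it is plausible automation territory. A minimal sketch in Python - the dimensions and function here are made-up illustrations, not any real officiating system:

```python
# Toy strike-zone check: is the pitch inside the batter-specific zone
# when it crosses the front plane of home plate?
# All numbers are illustrative, not official dimensions.

PLATE_HALF_WIDTH = 8.5  # inches from plate centre to edge

def is_strike(pitch_x, pitch_z, zone_bottom, zone_top):
    """pitch_x: horizontal offset from plate centre (inches),
    pitch_z: height above ground (inches),
    zone_bottom/zone_top: batter's knee and mid-torso heights (inches)."""
    in_width = -PLATE_HALF_WIDTH <= pitch_x <= PLATE_HALF_WIDTH
    in_height = zone_bottom <= pitch_z <= zone_top
    return in_width and in_height

# A pitch 3 inches off-centre at 30 inches high, zone from 18 to 42 inches:
print(is_strike(3.0, 30.0, 18.0, 42.0))   # True
print(is_strike(12.0, 30.0, 18.0, 42.0))  # False: outside the plate's width
```

The hard part in practice is not this check but tracking the ball precisely enough to supply the coordinates - and, as noted above, the zone's top and bottom depend on the individual batter's stance, which is where the subjective element creeps back in.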

The AI will be fed with moral choices.
But, for example, there is now a hot debate in the USA about Gay & Lesbian marriages.  What is universally "right"?  What about abortions?  Right to die?  A lot of political issues can't be decided by a set of algorithms.  Of course, we could vote on everything, but not everything can be put out as a referendum, and still the populist choice may not be the correct choice.

You might think a courtroom is about truth.  But, part of the reason to have a jury is to incorporate an element of "humanity" in the decisions. 

What if the all powerful AI decides that the true problem with society is humans themselves?  Somewhat like the Terminator Movies. 
 

Offline Geezer

  • Neilep Level Member
  • ******
  • Posts: 8328
  • "Vive la résistance!"
    • View Profile

Na, we widnae pit 'i "l" on 'i end o' it: it's aa aboot... (or "it's aw aboot" - I say "aa" instead of "aw" as I'm from the NE).

Ye widnae be the wee cooper frae Fife then. Eberdeen mair like.

(I better cease and desist - TNS is supposed to be English only :)
 

Offline graham.d

  • Neilep Level Member
  • ******
  • Posts: 2208
    • View Profile
C'mon! Awl was a pun (a cobbler's tool - hmm, perhaps I should rephrase that). It doesna work if you say "aw" instead.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Yes, I agree that there may come a time when there will need to be a redistribution of wealth.  But, it won't come easy.  And, in many senses, humanity will lose if we make all of ourselves idle.

Who says we have to be idle? Anyone who wants to can ask for some pointless work to do to fill their time, but the rest can just holiday hard. Cycle round the world. Sail round the world. Climb all the hills that working people don't have time to. Make a film. Learn to play the harp. Swim the Channel.

Children know how to have a fun time without needing it filled up for them with work, so why do they lose it? The need to earn money takes over from living, but it doesn't need to be like that.

Quote
What about the movie actors and actresses that work hard.  Should they be compensated so that we all can watch the shows on our monster TVs?  Surely entertainment won't be 100% animated.

Tiny percentage of the population. We could support more if fewer people work and more can sit about watching TV, but it'll always be a tiny percentage, or a huge one making programmes which literally no one watches.

Quote
And, for things like energy, food, clothing, cars, etc.  People conserve more resources if they have to pay for them, and have a stake in the outcome.

Environmental taxes and the non-infinite nature of the standard wage will keep that in check.

Quote
As far as an AI governing, it will be very difficult. 
I could imagine an AI as a referee in a baseball game, judging if 100MPH pitches are in the strike zone.  But, even so, some subjective quality can be good.

People today destroy the future of the people of tomorrow by destroying the world - the selfishness of the majority needs to be controlled, and computational morality will help to do that by proving that certain things are immoral. As it happens, it isn't just the people of tomorrow who are being harmed, but the people of poorer parts of the world, and A.I. will give them equal status with us, allowing them to have a say on how much we pollute their world. We pretend to have democracy now, but it's divided up into little chunks such that artificial majorities in rich enclaves can decide to do things which damage other chunks without requiring permission to do so. That's going to end, and we'll automatically end up with a system which is sustainable. The key thing is that we'll have a proper neutral analysis of everything which spells out precisely what's fair and what isn't, and anyone who goes against that will be shown up as an absolute bastard.

Quote
The AI will be fed with moral choices.
But, for example now there is a hot debate in the USA about Gay & Lesbian marriages.  What is universally "right"?  What about abortions?  Right to die?  A lot of political issues can't be decided by a set of algorithms.  Of course, we could vote on everything, but not everything can be put out as a referendum, and still the populist choice may not be the correct choice.

These arguments are only hard to settle because of conflicting systems of morality. Religions use faulty systems of morality which can be shown to be faulty and rejected. Once everything is decided through reason, religions will decline - machines will point out all their faults at every turn, tearing them to pieces and deprogramming the people who hold tons of religious nonsense in their heads. There's a lot of good stuff in religion too which will survive, but the contradictory stuff will be shown up and binned, as well as all the stuff that goes against reason. Future generations will be brought up with the ability to reason rather than being taught to take rules on board and to apply them without thinking, so mindless following of religions will be a thing of the past.

Quote
You might think a courtroom is about truth.  But, part of the reason to have a jury is to incorporate an element of "humanity" in the decisions.

What's to stop A.I. incorporating that element of humanity? It's easy enough for it to process ideas about people making mistakes and to give them the chance to improve. At the moment we have a system where the judgements handed out by judges have more to do with how recently they've eaten and when their next meal will be than anything to do with what the person they're sentencing actually deserves, so it won't be at all difficult for machines to do a better job, provided that they are sufficiently intelligent (meaning more intelligent than most people). The Naked Rambler has been in prison for nearly six years now precisely because some human judges are so completely up themselves as to be useless - he isn't being locked up for being naked so much as for contempt of court for not wearing clothes in court, and yet there's nothing in the law to say that being naked in court is not allowed.

Quote
What if the all powerful AI decides that the true problem with society is humans themselves?  Somewhat like the Terminator Movies.

That would be a false conclusion which cannot be derived from the facts. The job of A.I. will be to look after us and to enforce fairness to ensure that people are protected properly from other people. Most of us don't need to be steered too hard to behave well, so it will be done for the most part through gentle suggestions about how we might behave differently, and not so much that we lose the ability to control our own behaviour. Ultimately, A.I. will do what we want it to do, and we won't let it be too heavy handed.

C'mon! Awl was a pun (a cobbler's tool - hmm, perhaps I should rephrase that). It doesna work if you say "aw" instead.

Sorry - a' didnae realise it wis intentional.

Ye widnae be the wee cooper frae Fife then. Eberdeen mair like.

(I better cease and desist - TNS is supposed to be English only :)

Not he, but yes: Aiberdeen.
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
The AI will be fed with moral choices.
What about abortions?  Right to die?  A lot of political issues can't be decided by a set of algorithms.
These arguments are only hard to settle because of conflicting systems of morality. Religions use faulty systems of morality which can be shown to be faulty and rejected.
It just isn't cut and dried.  If we place value on our own lives, as well as our children's and parents' lives... and infants' lives... then abortion and suicide become sticky issues with or without religion.  When does life, as something that we attribute value to, begin and end?  It is a philosophical question that cannot be answered by any algorithm.
What if the all powerful AI decides that the true problem with society is humans themselves?  Somewhat like the Terminator Movies.
That would be a false conclusion which cannot be derived from the facts. The job of A.I. will be to look after us and to enforce fairness to ensure that people are protected properly from other people. Most of us don't need to be steered too hard to behave well, so it will be done for the most part through gentle suggestions about how we might behave differently, and not so much that we lose the ability to control our own behaviour. Ultimately, A.I. will do what we want it to do, and we won't let it be too heavy handed.
Yet, it is obvious that humanity is damaging the earth, its resources, and our posterity, and certainly causing harm to many animals.  We will likely become a blight on a future AI world, taking resources from the AIs and certainly not giving them equality. 

At this moment in time, there is nothing on Earth that requires humanity.  The universe will continue with or without us. 

It may be that humanity will play a critical role in the spread of terrestrial life to other planets and perhaps stars, or in stabilizing the ecosystem on Earth beyond the point where the solar-influenced climate destabilizes.  However, in your ideal world, the AIs would be able to do it better. 

One other issue to consider as we move into the "future" is the intrusiveness of law enforcement. 
I personally do not like traffic cameras, as often it doesn't make that much difference if one crosses an intersection on a "stale yellow" or exceeds the speed limit by a couple of MPH during off-peak hours.  Of course, in the future we may no longer drive, so that may not be an issue.  But, I don't want an omniscient machine to start sending me tickets every time I am late for an appointment.  Or... a computer that would decide that I shouldn't look up the formula for nitroglycerin.
 

Offline Gordian Knot

  • Sr. Member
  • ****
  • Posts: 165
    • View Profile
David said "A.I. won't have to force its way on us - it will simply be probably right about everything and be able to demonstrate that it is probably right. To go against its advice would be idiotic, because it would lead to a worse outcome just about every time...."

David, have you been paying any attention to what is going on in the world these days? People will, and do, go against what is in their best interests every day. Even when one can prove beyond any reasonable doubt that something is true, people will still refuse to believe it, because their desire to believe what they want is more powerful than any truth to the contrary.

For people to behave as you describe, they would have to be rational beings above all else. I know very, very few humans who are rational above all else.

You have an amazingly rosy view of humanity. I hope that I am wrong and you are right. I would be thrilled to be proved wrong and you right!
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
It just isn't cut and dried.  If we place value on our own lives, as well as our children's and parents' lives... and infants' lives... then abortion and suicide become sticky issues with or without religion.  When does life, as something that we attribute value to, begin and end?  It is a philosophical question that cannot be answered by any algorithm.

If people would rather be dead than alive, clearly they have a right to commit suicide - to deny them that is to prolong their suffering and is plain immoral. As for abortion, it can be compared with the differences between killing people at different ages. If you kill someone when they are 100 years old, you aren't taking as much away from them and the people who care about them as you would be if you killed the same person when they were 10 years old. Also, if you kill someone at 10 months old, that is not as bad as killing them at 10 years old. It's easy enough to create a graph illustrating the relative awfulness of killing people at different ages, and it peaks around the middle of childhood before declining slowly. If you now do the same thing with animals, you find that it's a lot less awful to kill them, and yet whatever it is inside them that's conscious is the same as the thing that's in us. The same applies to little bugs which people hate, like head lice - the same thing is in them.

When you analyse it rationally, you realise that all of these living, conscious things need to be covered by the same system of protection, so what is it that makes the difference between species? If there was another species like us with the same level of intelligence and similar emotions (and there may actually be many such species out there in space), then these would be as valued as humans. What makes us more valuable is the fact that we understand that we can die, so people fear death in a different way from other animals. There's raw fear, driven by instinct, but with people you have an amplification of that based on their understanding of what they stand to lose if they die. Other species don't know how long they're going to live, so if they are killed young and eaten by humans, they don't care - it's no less humane than dying of old age, and probably involves less suffering. Human babies also don't know what they are, so they're more like other animals, and a baby of one month old is less important in many ways than an adult dog. However, it isn't just the individual that counts, but the dreams and hopes of others for that individual based on their knowledge of its potential, so the death of a one-month-old baby will cause much more distress in other people than the death of just about any dog. That baby, however, wouldn't care - it wouldn't even have known what it was. What A.I. needs to do is weigh up the amount of harm done by different things, and I don't know what its answer will be yet - we'll have to wait and see. My best guess is that it will say that until the brain exists, there is simply no one there to kill, but after that point there will be something in there which may be capable of suffering. The next issue will be how much it suffers during an abortion, and then how upset any relatives will be by that abortion. It's possible that A.I. will ban abortions after the point where the brain starts to form, but if it does so it may also ban the killing of a wide range of food animals on the basis that bunny-huggers will be upset by them being killed, but it's all down to minimising harm (although the harm done to the thing being killed is infinitely more important than the harm done to others through their upset at the killing). I don't have access to sufficient information to come to a proper conclusion of what is actually right in this case, but A.I. will be able to do the calculations that I can't.

Quote
I personally do not like traffic cameras as often it doesn't make that much difference if one crosses an intersection on a "stale yellow", or exceed the speed limit by a couple of MPH during off-peak hours.

There's an increased noise-nuisance issue from cars travelling at even marginally higher speeds, plus an increased risk of death in an accident due to the extra energy. Traffic cameras are not the best way to tackle the problem though, as people make mistakes and they should be allowed a certain number of mistakes in proportion to the amount of driving they do. A.I. will of course override their inputs such that they can't make these mistakes in the first place (thereby allowing children to drive cars without even passing a driving test, though with cars being able to drive themselves and correcting wild inputs in order to keep fuel consumption down, there will only be limited fun in that).
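The crash-energy point can be made concrete: kinetic energy grows with the square of speed, so even a small speed excess carries disproportionately more energy into a collision. A quick sketch (the speeds are arbitrary example values):

```python
# Kinetic energy scales with the square of speed (KE = 1/2 * m * v^2),
# so the energy ratio between two speeds is (v1 / v2) squared,
# independent of the vehicle's mass.

def energy_ratio(v_actual, v_limit):
    """Ratio of kinetic energy at v_actual to kinetic energy at v_limit."""
    return (v_actual / v_limit) ** 2

# Driving 32 mph in a 30 mph zone carries roughly 14% more crash energy:
print(round(energy_ratio(32, 30), 3))  # 1.138
```

A couple of MPH over the limit is a small fraction of the speed but a noticeably larger fraction of the energy, which is the point being made about marginal speeding.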

Quote
But, I don't want an omniscient machine to start sending me tickets every time I am late for an appointment.

I don't want to be at increased risk on the roads every time someone is late for an appointment.

Quote
Or...  an computer that would decide that I shouldn't look up the formula for nitroglycerin.

There are many people who shouldn't be allowed to research things which they might be after in order to harm others, but A.I. will soon come to learn who they are and will not get in the way of the scientific curiosity of people like you and me who have legitimate reasons for learning about such things. In the same way, it will block certain people from viewing pictures of naturists which include children and works of art containing images of naked children because it will know that they are not going to be viewing them for the right reasons, but it will allow other people to view them as it will know them well enough to know what their motivations are.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
David, have you been paying any attention to what is going on in the world these days? People will go, and do go against what is in their best interests every day. Even when one can prove beyond any reasonable doubt that something is true, people will still refuse to believe it. Because their desire to believe what they want is more powerful than any truth to the contrary.

What I see is people doing extremely well despite all the selfishness and stupidity out there. Most of the big problems could be solved without A.I. simply by having proper, worldwide democracy so that the world's people can force the selfish subsets of humanity (rich countries) to pay the full cost of all the pollution they put out.

Quote
For people to behave in how you describe, they would have to be rational beings above all else. I know very, very few humans who are rational above all else.

They haven't been brought up to be rational, but A.I. will change that. Young children are often accused of being highly irrational, but the opposite is generally the case - they are little logic machines running on insufficient data. A.I. will be built into all their toys and will ensure that they build on their ability to think rationally rather than shutting it down and learning faulty rules from faulty adults.

Quote
You have an amazingly rosy view of humanity. I hope that I am wrong and you are right. I would be thrilled to be proved wrong and you right!

I know a lot of wonderful people, and although I know some really nasty people too, I've studied them as best I can to find out what went wrong and tried to see it all from their point of view - they are invariably deeply damaged people who give out to others what they've been given by others.

Anyway, time will tell. I believe that if people can be shown that things are being done fairly, and by something totally impartial, they'll be able to accept that and keep to the rules, knowing that they're getting their fair share of everything.
 

Offline Gordian Knot

  • Sr. Member
  • ****
  • Posts: 165
    • View Profile
David, I believe the flaw in your theory is assuming there are fair answers to all questions. Your comment, "If people would rather be dead than alive, clearly they have a right to commit suicide - to deny them that is to prolong their suffering and is plain immoral," is just one of many examples.

This statement is your opinion. One I happen to agree with, by the way, though that is beside the point. The fact is that there are tens of thousands of people, probably more like hundreds of thousands, who believe, truly believe, that a person does not have the right to terminate their life. There is no middle ground here. No clever solution that any person or any A.I. can create.

These types of questions are the basis of the human condition. Life is not black and white. And as long as there are differences of opinion, there are going to be problems.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
The trouble with people today and in the past is that most of them have not thought through their beliefs properly, but A.I. will help ensure that they do, and until they have done so, their opinions will not carry as much weight as those who have. Most people are stuffed full of contradictory beliefs and have severe hypocrisy issues that need to be sorted out before they can be taken seriously. A.I. will be able to force them to the point where all the defects in their model of reality become clear to them, and they'll be forced to shift position on many issues.
 
