Naked Science Forum

General Science => General Science => Topic started by: tweener on 06/05/2004 20:59:10

Title: How would a human made artificial being be treated
Post by: tweener on 06/05/2004 20:59:10
If humans could build an artificial being, whether silicon, biological, or some combination, that had the intelligence to be self-aware, how should it be treated?

Presumably it is a machine made by humans, so can it be made to perform any task, even very dangerous ones? Is it OK to simply destroy the machine when its task is complete? Is it OK to send it into a situation that is certain to result in its destruction?

How much intelligence would it have to have to be considered "over the line" into sentience?  Human level?  Chimp? Dog? Snail?

----
John - The Eternal Pessimist.
Title: Re: How would a human made artificial being be treated
Post by: Dan B on 06/05/2004 21:15:25
There is a Star Trek TNG episode called "The Measure of a Man" that deals with this... (as do most Asimov books [:D] )

It basically comes down to: how can you prove that something is sentient?

As for sending them to certain death: humans do that to one another all the time anyway.
Title: Re: How would a human made artificial being be treated
Post by: gsmollin on 06/05/2004 21:55:11
We sure do, but is it moral? That's the whole question, and it all hinges on one's definition of moral. Some believe it is moral to kill animals for food, others do not. Some believe it is moral to kill humans of a different religion, unless they convert first, of course. Others do not believe it is moral to kill humans. Etcetera.

I think we will find that it is moral to destroy machines regardless of their AI. At least, we will find it is easy and profitable, and that will be enough for most.
Title: Re: How would a human made artificial being be treated
Post by: Dan B on 06/05/2004 22:03:54
As you point out, morality is subjective. i.e. there is no correct answer [:D]
Title: Re: How would a human made artificial being be treated
Post by: neilep on 06/05/2004 22:23:01
Hi John...I think for me the actual form/shape of the artificial being might have a bearing. Hopefully, once it has completed its task, it could either be reprogrammed or recycled rather than destroyed....unless of course performing its task has, as a consequence, made it unfit to be in close proximity to humans, animals, the environment etc etc...ie: it had become toxic one way or another.

If the machine (albeit self-aware) is programmed to accept destruction, then that might ease the moral burden, I suppose. Personally I would find it difficult to say goodbye. Depends on how much anthropomorphism has taken place.

As far as how much intelligence would be considered 'over the line'......phew !!..that's a toughie.......
One would presume that only the necessary amount of intelligence would be instilled into the AI for it to perform its specific task.

I hate waste....I would hope that destruction would be an absolute last resort.....

One other point is: could it be proved that the 'bot' was self-aware, to the extent that its sentience was true, honest, and entirely its own?

I cannot imagine giving a 'bot' more intelligence than is needed just to perform the task...else self-doubt may set in, potentially turning the required task into a hazard.

However, if it were proved that the 'bot' did have an intelligence worthy of sentience, then I would expect it to have the same level of rights as a being of equal sentience.

And that's what I have to say about that.


Oh boy !!..I'm just thinking of pleasure droids !!!

Dan B.....yep..that's a great Episode of TNG.


Neil (totally artificial and with no intelligence)


'Men are the same as women...just inside out !'
Title: Re: How would a human made artificial being be treated
Post by: Reconnect on 06/05/2004 22:27:52
Everything learns what is necessary from interaction with its surroundings (everything has to exist with/alongside what is around it).

If technology hangs out with warmongers/manufacturers, it's likely to get arms.




Two rights don't make a left, but three do.
Title: Re: How would a human made artificial being be treated
Post by: neilep on 06/05/2004 22:35:52
..only if there is available space (in its brain) to learn from its interaction, to digest, process, and store for use the knowledge it has learned....if the capacity is not there, it cannot learn beyond its boundary.

Put a rat in a maze and it will soon find its way out (eventually)....stick it on a piano and it's not going to play Mozart !!

Title: Re: How would a human made artificial being be treated
Post by: Donnah on 07/05/2004 03:04:02
I agree with gsmollin that "we will find that it is moral to destroy machines regardless of their AI. At least, we will find it is easy and profitable, and that will be enough for most."

If the intelligence was at a high enough level, the droid would be capable of making its own choice. Otherwise it would be like sending people off to war for the first time; most have no idea of the reality they are about to face until it's too late. Has anyone been in the Vietnam (or American, depending on your POV) war? Was it anything like you imagined?
Title: Re: How would a human made artificial being be treated
Post by: tweener on 07/05/2004 04:30:32
I haven't seen the Star Trek episode, so I can't comment.

Some people find it easy and profitable (and thus maybe moral?) to destroy other humans.  Governments usually try to make it unprofitable or at least risky by imposing penalties.

Most science fiction seems to assume that intelligent machines will be something akin to slaves that can be turned on/off, bought, sold, destroyed, and abused at will.  Maybe that is how it should be or maybe it is not.  I would not feel comfortable dealing with a sentient machine for any amount of time and then just destroying it, knowing that it has feelings and does not want to be destroyed.

Would "reprogramming" be the same thing? I take that to include wiping out all the memories and learned responses from the prior programming, thus destroying the entity in all but the mechanical sense.

Title: Re: How would a human made artificial being be treated
Post by: Reconnect on 07/05/2004 10:22:03
It is interesting to see interpretations of AI. It is often thought of in terms of 'droids' or 'machines'. I think this goes beyond the concept of 'intelligence' and into the realm of 'mechanical humans'.

We can already create droids or machines that 'appear' human. Maybe we misunderstand what intelligence is. Maybe intelligence is a by-product of other activity (physical/biological) that 'appears' to us to be 'thinking'.

Title: Re: How would a human made artificial being be treated
Post by: neilep on 07/05/2004 12:41:31
....an interpretation is always a personal opinion, isn't it ?....there's so much subjectivity involved.

John....with regard to the reprogramming...yes, I could agree with reprogramming only if it was like reprogramming an AIBO or something, or an industrial 'bot'.....not anything with sentience.

I actually thought Spielberg's film 'AI' was pretty good, if a bit too long.



Title: Re: How would a human made artificial being be treated
Post by: Reconnect on 07/05/2004 17:45:29
There is always something in common between different viewpoints: they are viewing the same subject from different points.

Intelligence is a strategy; prioritisation of subjectivity is a dynamic result based on the previous and current activity of that strategy.

Remembered hierarchies contextualise ongoing experience based on interpretive models created from an ongoing prioritisation process, giving an intelligence a viewpoint.

Re-programming can be done at any point in a lifecycle; to do it quickly, just change the way memories are used/interpreted.

Otherwise, overwrite the previous experience/memory with a more powerful experience/memory that is more likely to influence the intelligent strategy in the desired way.

Maybe there is no such thing as programming.

Maybe there is only reprogramming, full stop.

Title: Re: How would a human made artificial being be treated
Post by: neilep on 07/05/2004 19:58:06
What the ?.......sorry Reconnect....but as a layperson I find it tricky to get to grips with what you say.......That does not mean I think it's nonsense....I simply do not understand it well enough to form an opinion.....but if you bring things down a peg or eight to my level, I would greatly appreciate your assistance in helping me understand you.

If your book is going to be at this level, then I think you'll be restricting your audience to a select few. Of course, I realise it's a specialist subject anyway, but plainer English would help, well...it would help me anyway....Thanks

The only thing I can see in common between differing viewpoints is that they are different !!

... and I don't understand this at all: "Intelligence is a strategy, prioritisation of subjectivity is a dynamic result based on the previous and current activity of that strategy.

Remembered hierarchies contextualise ongoing experience based on interpretive models created from an ongoing prioritisation process giving an intelligence a viewpoint"....
sorry..thanks for the help.

Title: Re: How would a human made artificial being be treated
Post by: Titanscape on 11/05/2004 17:23:35
Well, I think that if a computer or robot in some distant future time had human-level emotions and intellect with self-awareness, then it would be illogical to send it to war. Or illogical to program it to feel for itself or for the enemy. If it were self-programmed, it would need to have some kind of will and some kind of conscience, don't you think? That defeats the purpose of a tool. Robots are high 'tools' or 'technology'.

To me, its value would be in its use. As now, hardware and software issues. If its programs can be sent to another model's memory, of identical or better make, where is it? Is it a program or hardware?

Pleasure robots, mmm, do they vibrate, neilep? Kinky science; be honest, for no one knows who you really are. Do you dream of an Electra Giga 9000F series? The jingle thus goes, "You will never want to switch her off, for she can make you peak and trough".


Titanscape
Title: Re: How would a human made artificial being be treated
Post by: neilep on 11/05/2004 19:07:41
Bren (call me Neil)  LOL....yes...bring em on !!!...and that's why I want a holodeck too !!!!

I totally agree, Bren....if you are going to program a being/bot/droid/doll/thing/whatever....... to the point where its cognitive abilities are close to our own, to a degree that it is aware of its own emotions, thoughts, and feelings and can see them in others too, then I would think and hope that it would not be for a purpose which would then lead to its destruction..........furthermore..I would expect it to have just the same level of rights as we enjoy (I suppose that'd be debatable depending on where you live).

And then the big debate (notice how I side-stepped saying Mass Debate ?.....DOH !!!!) would be: how do you come to an agreement on the difference between sentience and programming, or when does programming become sentience ? etc etc ....

After all....are we not machines as well ?...just bio-chemical in nature as opposed to mechanical.......and that's what I have to say about that.

Title: Re: How would a human made artificial being be treated
Post by: Titanscape on 11/05/2004 19:38:31
As you may remember, I draw a distinction between biological and rigid mechanisms and life. I also see a difference between life and feelings, which is the source of my sense of ethics and morality and their ability to discern. "Cherish life", that is a Catholic motto. I agree to love life, not just feelings. A machine can never go so far as to be alive.

However, a biological device or a GM organism derived from a presently existing one is different.

There are laws against killing dolphins, because of their emotions, their sense of loss, their intelligence... I almost dispute it. I thought of farming them. I like them, though. Still, farmers need to teach their children to harden up.

I write like this because our laws come from a study of the Old Testament; however, there you find that they made fine sandals out of dolphin hide! "Kill and eat", "do not lie with a beast"... (Although in the NT Satan brings an idol to life).


Titanscape
Title: Re: How would a human made artificial being be treated
Post by: tweener on 11/05/2004 20:17:29
Given the advances in biotechnology, it may well be that future beings created by humans will be biochemical in nature, possibly with a lot of mechanical parts. So I don't think the debate can focus on the mechanical aspect of the entity in question.

Self-organizing systems are not really programmed. They use simple rules, and their behavior emerges from the massive interaction between simple units. Thus they can display intelligence to some degree without being programmed at all.
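A minimal sketch of that idea (my own illustrative example, not something discussed earlier in the thread) is Conway's Game of Life: every cell follows the same two local rules, yet oscillators, gliders, and other structured behaviour emerge that nobody explicitly programmed in.

```python
# Conway's Game of Life: each cell obeys the same two local rules
# (a live cell with 2 or 3 live neighbours survives; a dead cell
# with exactly 3 live neighbours is born), yet complex patterns
# emerge from the interaction of many simple units.
from collections import Counter

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) tuples."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "blinker": three cells in a row oscillate between horizontal
# and vertical forever -- behaviour no one wrote down explicitly.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker  # period-2 oscillation
```

The `step` function encodes only the local rules; the oscillation, like all higher-level Life patterns, is emergent rather than programmed.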
