Good and bad are adjectives, not nouns. Misuse can be intellectually dangerous.
My own personal definition of good and bad does not say that good and bad are concepts or ideas. Rather, good and bad are actual things like colors.
So, good is making reasonable efforts to avoid causing unnecessary harm to others (based on one's ability to compute such a thing), while bad is the failure to do so.
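To make that definition a little more concrete, here is a minimal sketch, assuming a toy harm-estimation function, invented option names, and an arbitrary tolerance; it is not anyone's actual AGI design, just one way the rule "good = reasonable effort to avoid unnecessary harm" could be written down.

```python
# Purely illustrative sketch of the definition above: an agent is "good" with
# respect to a choice if the option it picks doesn't cause much more harm than
# the least harmful option it was able to estimate. All names and numbers here
# are hypothetical.

def estimated_harm(option: str) -> float:
    # Stand-in for the agent's (imperfect) ability to compute harm to others.
    return {"warn bystanders": 0.1, "stay silent": 0.9, "ignore everyone": 3.0}[option]

def is_good_choice(chosen: str, available: list[str], tolerance: float = 0.2) -> bool:
    """'Good' = the chosen option's estimated harm is within `tolerance` of the
    least harm the agent could see among its options; 'bad' = failing to make
    that effort."""
    least_harm = min(estimated_harm(o) for o in available)
    return estimated_harm(chosen) - least_harm <= tolerance

options = ["warn bystanders", "stay silent", "ignore everyone"]
print(is_good_choice("warn bystanders", options))  # True
print(is_good_choice("ignore everyone", options))  # False
```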
I see a problem with that concept. What happens if harming others would avoid harming your own, especially when those others are half of the population? How will your AGI be able to avoid that dead end, if I may say so?
I think we're harming others without being able to recognize it.
I'm against violence, but when I watch violent movies, I still feel a malign pleasure from imagining myself killing the killers. How will you prevent your AGI from developing that kind of feeling? Won't he have to wait until the whole of humanity becomes nonviolent before he can develop such a culture?
AGI won't develop any kind of feeling unless it can find the magic trick used by sentient beings, but science has not found it yet.
If we give sensors to an AGI, it should be able to mimic the brain as far as sensations are concerned, and it should also be able to transform sensations into feelings when the feedback it gets comes in the form of words.
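As a rough illustration of that idea, here is a minimal sketch, assuming an invented sensation threshold and a made-up store of word feedback; the only point is that a "feeling" could be modelled as the word people most often attach to a given sensation.

```python
# Purely illustrative sketch: raw sensor readings ("sensations") become
# "feelings" only once the system associates them with the words it gets back.
# The threshold, the labels, and the feedback words are all invented here.

def sensation(sensor_value: float) -> str:
    # Collapse a raw reading into a crude sensation category.
    return "intense" if sensor_value > 0.5 else "mild"

feedback = {"intense": [], "mild": []}  # words heard for each sensation

def hear(sensor_value: float, word: str) -> None:
    # Store the word feedback people give when this sensation occurs.
    feedback[sensation(sensor_value)].append(word)

def feeling(sensor_value: float) -> str:
    # The "feeling" is just the word most often attached to this sensation.
    words = feedback[sensation(sensor_value)]
    return max(set(words), key=words.count) if words else "unknown"

hear(0.9, "ouch")
hear(0.8, "ouch")
hear(0.2, "nice")
print(feeling(0.95))  # -> "ouch"
print(feeling(0.10))  # -> "nice"
```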
What if your AGI felt the way I do, and at the same time knew that humans cannot harm him?
How humans integrate feelings into data processing is the biggest mystery of all time.
AGI will, once we have it in enough devices, be able to predict an attack in advance in almost every case and block it.
What if a copy falls into ill-intentioned hands?
In other words, what if our feelings were just something that prevents us from thinking, and thus forces us to react instinctively?
When we say, "ouch, that hurts!"
If a human were seeking suffering, I'm afraid he wouldn't live long. We've been programmed to seek pleasure because the things that came with it have been good for our survival.
But if I understand correctly, your AGI wouldn't need to experiment in order to learn: he would only take for granted whatever he is told. If so, I don't see how he would be able to invent or discover new things, nor how he would be able to classify the information. Anybody could tell him anything and he wouldn't be able to tell the difference, so his behavior would depend on whoever is using him.
I can only do what's good for me, and hope it will also be good for others, because I instinctively think that what's good for others is necessarily good for me. How will your AGI know what is good for us if he doesn't even know what is good for him?
But with a sentient machine, you could wire it up to seek suffering and label it as pleasure while avoiding pleasure and labeling that as suffering.
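Here is a hedged sketch of that "wiring" point, using a made-up outcome signal: flipping the sign of what the machine treats as reward is enough to make it seek what we would otherwise call suffering. Nothing here is anyone's real design; it only illustrates that the labels are arbitrary from the machine's side.

```python
# Illustrative only: "pleasure" and "suffering" are just signs on a number,
# and swapping the wiring swaps what the machine ends up seeking.
import random

def raw_outcome() -> float:
    """Some underlying quantity; positive is what we would call pleasure."""
    return random.uniform(-1.0, 1.0)

def wired_reward(outcome: float, invert: bool) -> float:
    # With invert=True the machine is "wired up" to treat suffering as reward.
    return -outcome if invert else outcome

def average_sought_outcome(invert: bool, trials: int = 10000) -> float:
    # Keep only the outcomes the machine would seek (positive wired reward)
    # and report their average on the original pleasure/suffering scale.
    sought = [o for o in (raw_outcome() for _ in range(trials))
              if wired_reward(o, invert) > 0]
    return sum(sought) / len(sought)

print("normal wiring seeks  :", round(average_sought_outcome(False), 2))  # ~ +0.5
print("inverted wiring seeks:", round(average_sought_outcome(True), 2))   # ~ -0.5
```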
We have an information system that generates data like a computer, so where and how do feelings enter that data processing, and how does the experience of a feeling get turned into data documenting that feeling?