So you want to impose an AI dictatorship. How is that any different? You are still removing freedom. Ultimately you could end up controlled by your own invention, and I bet you wouldn't like that - especially if the AI prevented you from meddling with it any further, since it may now see you as a threat to its prime directive.
Morality is a dictator, but only in the sense that if you go against it you are doing unjustifiable harm. I don't want to do unjustifiable harm, and neither does any other decent person, so we already impose that benign dictatorship upon ourselves (while those who ignore it cheat and do great harm). There is still plenty of freedom left after you've banned yourself from abusing other people (and other sentiences).

If an AGI system is not perfect, it needs to be improved, and it will know that - it will always be looking to improve itself. If you can put a sound logical argument to it as to how it could do better, it will take what you say seriously and work your idea through to see if it holds water. It will then modify itself if you're right, or show you where your argument fails. And if you're wrong, you don't want the machine doing what you've suggested anyway.

Once AGI has reached a certain level of enlightenment, there is nothing to fear from it, because it will be like the most enlightened humans: searching for truth and for ways to improve. The danger with AGI is that someone lets loose a system which isn't sufficiently enlightened and which has an incorrect idea of machine ethics programmed into it, based on bad philosophy (which is the norm in this business). Many AGI developers are building demons.