For instance, to make a moral judgment about a case of theft or murder on Earth it is not necessary to know about geological events in another solar system.
So given that all considerations are actually partial, either because we have to find them inside a finite horizon or because we have to choose between competing priorities, the concept of an ideal observer is useless.
Progress in building better AI, and eventually AGI, will bring us ever closer to realizing Laplace's demon, a prospect already anticipated as the technological singularity. Quoting https://pathmind.com/wiki/neural-network:
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
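As an aside, the pathfinding idea in that quote can be sketched with a toy gradient descent in Python (a minimal illustration, not from the quoted source; the loss function and learning rate here are arbitrary choices):

```python
# Toy gradient descent: minimise f(w) = (w - 3)**2, whose minimum is at w = 3.
# Instead of brute-forcing every candidate weight, each update follows the
# local gradient, so the relevant search space shrinks at every step.

def grad(w):
    # Analytic derivative of f(w) = (w - 3)**2
    return 2 * (w - 3)

w = 0.0   # "blank slate" starting weight
lr = 0.1  # learning rate (arbitrary choice for this sketch)
for _ in range(100):
    w -= lr * grad(w)

print(w)  # ends up very close to the true minimum at 3
```

A hundred local updates land within a tiny fraction of the optimum, which is the sense in which gradient descent is far cheaper than exhaustively recombining weights.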
But your ideal observer will still have to make an arbitrary choice of beneficiary for any decision. That is not the same as choosing an adequate approximation for pi: 22/7 may be OK for buying bricks, 3.142 for grinding a crankshaft, but nobody has to choose between 2, 7.631 or 19 as the only options. Here's a simple example from real life.

I was working with a vet a couple of years ago. A woman brought in a very sorry-looking pigeon that she had just rescued from a sparrowhawk in her garden. The pigeon was beyond redemption, so the nurse despatched it, went back to the counter and said, "I have euthanised the pigeon. Now what is the hawk going to feed her babies?" Even if the pigeon had survived, some human had to make a choice of beneficiary, and being neither pigeon nor hawk, her choice was entirely arbitrary.
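For what it's worth, the adequacy of those two pi approximations is easy to check numerically (a toy check in Python, taking math.pi as the reference value):

```python
import math

# Absolute errors of the two pi approximations mentioned above.
err_bricks = abs(22 / 7 - math.pi)  # roughly 0.00126: fine for buying bricks
err_crank = abs(3.142 - math.pi)    # roughly 0.00041: fine for a crankshaft
print(err_bricks, err_crank)
```

Both errors are small and graded by purpose, which is the point of the analogy: approximation admits degrees of adequacy, whereas the choice of beneficiary does not.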
Your argument still depends on determining the universal terminal goal.
Problem is that you can't try a computer for war crimes, especially if it is partially self-trained, nor execute it pour encourager les autres (to encourage the others).
The only way you can preserve resources is by suicide, because every other action increases entropy and thus decreases the resources and options available to others.
Even when I'm doing nothing, I'm consuming food that could be eaten by someone or something else, and exhaling carbon dioxide.
Please remind me, in one paragraph, of your universal terminal goal, and whether we agreed on it!
That's very Buddhist, but doesn't address the everyday moral question of whether to kill a conscious being for food, or to prevent oneself being killed.

And here's another version of the trolley problem. Two men are attacking one man, and look certain to kill him. You have a gun. What do you do?
But in my example the only sure way to stop the killing, without possibly getting yourself killed, is to shoot somebody.
I will probably eat hundreds of chickens in my lifetime, and then die. Does that increase or decrease progress towards your universal goal?
Goal requires the existence of at least one conscious being.
Terminal requires the perspective of the distant future.
Universal requires that no additional arbitrary constraint is applied beyond those already attached to the words goal and terminal.
So I fire a warning shot, and the two vigilantes, who have apprehended a mass murderer, let him go. Is this in tune with the universal moral standard?
In your case, the lack of necessary information is obvious, so getting more information is required. It is urgent to stop the killing, since it is an irreversible process, at least for now. So unless we have other significant information, stopping the killing is a high priority. The next action to take would depend on the additional information we get afterwards.
The sole function of a commercial-breed chicken is to be eaten. Developed over millennia from fairly rare forest dwellers, they are now the most numerous warm-blooded creatures on earth. My next development project will, I hope, be to enhance chicken fattening for the benefit of vegans by harvesting wild locusts to feed to chickens, and thus reduce the damage to vegetable crops - now there's a moral conundrum!

As for a "wasteful" process, my aunt remarked, during a celebratory feast, "If it wasn't for Jewish weddings, the country would be overrun with chickens".
Sadly, that doesn't state what your UTG is, and actually makes it undefinable. Here I am as a conscious being, but I have no way of reviewing or judging anything from the standpoint of the near future, never mind the distant one! The only references I have are historical data and personal aspirations.
Preserving the existence of the last conscious being.
Quote from: alancalverd on 01/07/2020 23:22:04
The sole function of a commercial-breed chicken is to be eaten. Developed over millennia from fairly rare forest dwellers, they are now the most numerous warm-blooded creatures on earth. My next development project will, I hope, be to enhance chicken fattening for the benefit of vegans by harvesting wild locusts to feed to chickens, and thus reduce the damage to vegetable crops - now there's a moral conundrum!

As for a "wasteful" process, my aunt remarked, during a celebratory feast, "If it wasn't for Jewish weddings, the country would be overrun with chickens".

Will you eat synthetic chicken meat that has exactly the same physical and chemical structure as the natural one, but never became part of a living animal?