Quote from: hamdani yusuf on 21/05/2020 08:53:23
human sacrifice to appease gods, caste system, kamikaze,
None of these assumptions has been falsified. The sun still rises over Essex even though virgin sacrifices are no longer possible, but that may be because the gods were sufficiently appeased by the few that our ancestors were able to find. The caste system persists, despite being outlawed. Kamikaze did exactly what it was intended to do - sink American ships with a kill ratio of hundreds to one, which is why it is still practised by idiots.
The main idea [of the ideal observer theory] is that ethical terms should be defined after the pattern of the following example: "x is better than y" means "If anyone were, in respect of x and y, fully informed and vividly imaginative, impartial, in a calm frame of mind and otherwise normal, he would prefer x to y."[1]

This makes ideal observer theory a subjectivist[2] yet universalist form of cognitivism. Ideal observer theory stands in opposition to other forms of ethical subjectivism (e.g. moral relativism and individualist ethical subjectivism), as well as to moral realism (which claims that moral propositions refer to objective facts, independent of anyone's attitudes or opinions), error theory (which denies that any moral propositions are true in any sense), and non-cognitivism (which denies that moral sentences express propositions at all).

Adam Smith and David Hume espoused versions of the ideal observer theory. Roderick Firth laid out a more sophisticated modern version.[3] According to Firth, an ideal observer has the following specific characteristics: omniscience with respect to nonmoral facts, omnipercipience, disinterestedness, dispassionateness, consistency, and normalcy in all other respects. Notice that, by defining an ideal observer as omniscient with respect to nonmoral facts, Firth avoids the circular logic that would arise from defining an ideal observer as omniscient in both nonmoral and moral facts. A complete knowledge of morality is not born of itself but is an emergent property of Firth's minimal requirements. There are also sensible restrictions on the trait of omniscience with respect to nonmoral facts. For instance, to make a moral judgment about a case of theft or murder on Earth it is not necessary to know about geological events in another solar system.
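The definitional pattern above can be put into code for illustration. Everything in this sketch (the better function, the harm ranking) is a hypothetical stand-in, not part of Firth's theory:

Code: [Select]
from typing import Callable

# Hypothetical stand-in for the preferences of an observer who is fully
# informed, vividly imaginative, impartial, calm and otherwise normal.
IdealPreference = Callable[[str, str], bool]

def better(x: str, y: str, ideal_prefers: IdealPreference) -> bool:
    # Subjectivist: the truth condition refers to an observer's attitudes.
    # Universalist: the same idealized observer is appealed to by everyone.
    return ideal_prefers(x, y)

# An assumed idealized preference, purely for demonstration: rank by harm.
harm = {"theft": 1, "murder": 2}
print(better("theft", "murder", lambda x, y: harm[x] < harm[y]))  # True

The point of the pattern is that the moral predicate is defined by the idealized preference rather than discovered independently of it.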
"x is better than y" means "If anyone were, in respect of x and y, fully informed and vividly imaginative, impartial, in a calm frame of mind and otherwise normal, he would prefer x to y.
Quote
For instance, to make a moral judgment about a case of theft or murder on Earth it is not necessary to know about geological events in another solar system.
Your ideal observer has chosen x. Ask him why he chose x. "It is better for ... me/you/humanity/the environment/the economy ..." At some point he has made a choice of beneficiary. Every animal is ultimately in competition with some other individual or species, so no decision can be universally beneficial. Morality is unavoidably arbitrary until you place a decision in an agreed (but equally arbitrary!) wider context.
Just a few more layers, and we will indeed be looking at volcanoes in Ursa Minor.
So given that all considerations are actually partial, either because we have to find them inside a finite horizon or because we have to choose between competing priorities, the concept of an ideal observer is useless.
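The beneficiary problem can be made concrete with a minimal sketch; the payoff numbers below are invented purely for illustration:

Code: [Select]
# Invented payoffs: the same pair of actions ranks differently depending
# on which beneficiary the observer picks first.
payoffs = {
    "x": {"me": 2, "you": -1, "the economy": 3, "the environment": -2},
    "y": {"me": -1, "you": 2, "the economy": -2, "the environment": 3},
}

for beneficiary in ("me", "you", "the economy", "the environment"):
    best = max(payoffs, key=lambda action: payoffs[action][beneficiary])
    print(f"beneficiary = {beneficiary}: preferred action = {best}")

Nothing in the data says how to weight the beneficiaries against each other, and that missing weighting is exactly the arbitrary step the argument points at.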
Progress in building better AI, and toward AGI, will eventually bring us closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.
That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
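As a toy illustration of the pathfinding point in the quote above (gradient descent follows the local slope instead of recombining every weight with every other), here is a minimal one-parameter sketch; the loss function and learning rate are arbitrary choices, not taken from the quoted article:

Code: [Select]
def loss(w):
    # A made-up one-parameter "model": squared error against a target of 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Analytic derivative of the loss above.
    return 2.0 * (w - 3.0)

w = 0.0                 # start from a blank slate
lr = 0.1                # learning rate, chosen arbitrarily
for _ in range(50):
    w -= lr * grad(w)   # step downhill; no exhaustive search over weights
print(f"w = {w:.4f}, loss = {loss(w):.8f}")  # w ends up very close to 3.0

Each update uses only local gradient information, which is why the number of weight settings ever examined stays tiny compared with an exhaustive search.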
But your ideal observer will still have to make an arbitrary choice of beneficiary for any decision. Not the same as choosing an adequate approximation for pi: 22/7 may be OK for buying bricks, 3.142 for grinding a crankshaft, but nobody has to choose between 2, 7.631 or 19 as the only options. Here's a simple example from real life.

I was working with a vet a couple of years ago. A woman brought in a very sorry-looking pigeon that she had just rescued from a sparrowhawk in her garden. The pigeon was beyond redemption, so the nurse despatched it, went back to the counter and said "I have euthanised the pigeon. Now what is the hawk going to feed her babies?" Even if the pigeon had survived, some human had to make a choice of beneficiary, and being neither pigeon nor hawk, her choice was entirely arbitrary.
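As an aside, the pi comparison above can be checked numerically; the two approximations are the post's own, the rest is illustrative:

Code: [Select]
import math

# Absolute error of the two approximations mentioned above.
for approx in (22 / 7, 3.142):
    print(f"{approx:.6f}: error = {abs(approx - math.pi):.6f}")
# 22/7 is off by about 0.0013; 3.142 by about 0.0004, so the finer job
# (the crankshaft) gets the more accurate value.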
Your argument still depends on determining the universal terminal goal.
Problem is that you can't try a computer for war crimes, especially if it is partially self-trained, nor execute it pour encourager les autres.
The only way you can preserve resources is by suicide, because every other action increases entropy and thus decreases the resources and options available to others.
Even when I'm doing nothing, I'm consuming food that could be eaten by someone or something else, and exhaling carbon dioxide.
Please remind me, in one paragraph, of your universal terminal goal, and whether we agreed on it!