"Condemned criminal" means it is severe enough. The death sentence has been made.
There's a secret involved? I was suggesting this be above board. Not sure who the victim is here, the criminal or the victims of whatever crimes he committed. If the former, he's already got the death penalty and his relatives already know it. Changing the sentence to 'death by disassembly' shouldn't be significantly different from their POV than say death by lethal injection (which renders the organs unusable for transplants).
There is another solution: You have these 5 people each in need of a different organ from the one healthy person. So they draw lots and the loser gives his organs to the other 4. That's like one of the 5 trolley victims getting to be the hero by throwing the other 4 off the tracks before the trolley hits and kills him. Win-win, and yet even this isn't done in practice. In fact, I think it has never been done. But I'm asking why not, since it actually works better than the 'accidental' version they use now.
Quote: "The loss of those five lives is not that big a deal. Life can still go on as usual."

With that reasoning, murder shouldn't even be illegal.
A specialty doctor could just decide to stay home one day to watch TV for once, without informing his hospital employer. As a result, 3 patients die. His hands are not 'dirty with homicide', and people die every day anyway, so there's nothing wrong with his choosing to blow the day off like that.

Sorry, I find this an immoral choice on the doctor's part.
Indeed, are happiness and misery mathematical entities that can be added or subtracted in the first place? Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts moral:

Deontology: Acts are moral (or not) in themselves: it's just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant.

Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it.

Then there is Particularism: the idea that there are no clear moral principles as such.
Let's go back to deliberate killing. It is apparently OK for a soldier to kill a uniformed opponent at a distance, or even hand-to-hand, but not to execute a wounded opponent.
But it is a moral imperative to execute a wounded animal of any other species. Or he could kill a plain-clothes spy, but arbitrarily butchering other civilians is a war crime. Except if said civilians happen to be in the vicinity of a legitimate (or reasonably suspected) bombing target...

Surely, of all the possible human interactions, acts of war should be cut and dried by now? But they aren't.
Quote: "Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?"

Not a universal example, by any means. There are some people who choose to eat to excess (say outside the 3σ region of the normal distribution) and end up with no friends. Some people are socially anhedonic and prefer any amount of ice cream to even a hint of love. Some people (me included) don't much like ice cream.

You can base your moral standard on an arithmetic mean, or some other statistic, but the definition of immorality requires an arbitrary limit on deviation.
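As a side note on the 3σ remark, here is a minimal, purely illustrative Python sketch (not part of the original post) showing how much of a normal population falls outside a few candidate cutoffs. Whichever k one picks, the statistics themselves don't single it out; the limit stays arbitrary.

```python
# Illustrative only: fraction of a normal population lying outside k standard
# deviations, for a few arbitrary choices of k. Nothing in the statistics
# singles out 3-sigma (or any other cutoff) as the boundary of "immorality".
from math import erf, sqrt

def fraction_outside(k: float) -> float:
    """Two-tailed fraction of a normal distribution beyond +/- k sigma."""
    return 1.0 - erf(k / sqrt(2.0))

for k in (1, 2, 3, 4):
    print(f"outside {k} sigma: {fraction_outside(k):.4%}")
# roughly 31.7%, 4.6%, 0.27%, 0.006% respectively
```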
Goodhart's Curse and meta-utility functions

An obvious next question is "Why not just define the AI such that the AI itself regards U as an estimate of V, causing the AI's U to more closely align with V as the AI gets a more accurate empirical picture of the world?"

Reply: Of course this is the obvious thing that we'd want to do. But what if we make an error in exactly how we define "treat U as an estimate of V"? Goodhart's Curse will magnify and blow up any error in this definition as well.

We must distinguish:
- V, the true value function that is in our hearts.
- T, the external target that we formally told the AI to align on, where we are hoping that T really means V.
- U, the AI's current estimate of T or probability distribution over possible T.

U will converge toward T as the AI becomes more advanced. The AI's epistemic improvements and learned experience will tend over time to eliminate a subclass of Goodhart's Curse where the current estimate of U-value has diverged upward from T-value, cases where the uncertain U-estimate was selected to be erroneously above the correct formal value T.

However, Goodhart's Curse will still apply to any potential regions where T diverges upward from V, where the formal target diverges from the true value function that is in our hearts. We'd be placing immense pressure toward seeking out what we would retrospectively regard as human errors in defining the meta-rule for determining utilities.

Goodhart's Curse and 'moral uncertainty'

"Moral uncertainty" is sometimes offered as a solution source in AI alignment; if the AI has a probability distribution over utility functions, it can be risk-averse about things that might be bad. Would this not be safer than having the AI be very sure about what it ought to do?

Translating this idea into the V-T-U story, we want to give the AI a formal external target T to which the AI does not currently have full access and knowledge. We are then hoping that the AI's uncertainty about T, the AI's estimate of the variance between T and U, will warn the AI away from regions where from our perspective U would be a high-variance estimate of V. In other words, we're hoping that estimated U-T uncertainty correlates well with, and is a good proxy for, actual U-V divergence.

The idea would be that T is something like a supervised learning procedure from labeled examples, and the places where the current U diverges from V are things we 'forgot to tell the AI'; so the AI should notice that in these cases it has little information about T.

Goodhart's Curse would then seek out any flaws or loopholes in this hoped-for correlation between estimated U-T uncertainty and real U-V divergence. Searching a very wide space of options would be liable to select on:
- Regions where the AI has made an epistemic error and poorly estimated the variance between U and T;
- Regions where the formal target T is solidly estimable to the AI, but from our own perspective the divergence from T to V is high (that is, the U-T uncertainty fails to perfectly cover all T-V divergences).

The second case seems especially likely to occur in future phases where the AI is smarter and has more empirical information, and has correctly reduced its uncertainty about its formal target T. So moral uncertainty and risk aversion may not scale well to superintelligence as a means of warning the AI away from regions where we'd retrospectively judge that U/T and V had diverged.
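To make the "selection magnifies errors" mechanism concrete, here is a toy Python simulation (my own sketch, not from the quoted article, reusing its V and U names): the proxy U is modeled as the true value V plus independent noise, and picking the option with the highest U systematically overestimates its true value, more so as the search widens.

```python
# A minimal simulation of the "optimizer's curse" half of Goodhart's Curse,
# under the simplest possible assumption: the proxy U is the true value V plus
# independent noise. Selecting the option with the highest U systematically
# overestimates, and the gap widens as the search space grows.
import random

random.seed(0)

def curse_gap(n_options: int, noise: float = 1.0, trials: int = 2000) -> float:
    """Average (U - V) on the option chosen by maximizing U."""
    total_gap = 0.0
    for _ in range(trials):
        V = [random.gauss(0.0, 1.0) for _ in range(n_options)]
        U = [v + random.gauss(0.0, noise) for v in V]
        best = max(range(n_options), key=lambda i: U[i])  # optimize the proxy
        total_gap += U[best] - V[best]
    return total_gap / trials

for n in (1, 10, 100, 1000):
    print(f"search over {n:4d} options: average overestimate {curse_gap(n):.2f}")
```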
Goodhart's Law is named after the economist Charles Goodhart. A standard formulation is "When a measure becomes a target, it ceases to be a good measure." Goodhart's original formulation is "Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes."

For example, suppose we require banks to have '3% capital reserves' as defined some particular way. 'Capital reserves' measured that particular exact way will rapidly become a much less good indicator of the stability of a bank, as accountants fiddle with balance sheets to make them legally correspond to the highest possible level of 'capital reserves'.

Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
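A toy illustration of the lines-of-code example, with all numbers and the "padding" behaviour being my own assumptions rather than anything measured at IBM: once an independent incentive to pad is added, the measure's correlation with real productivity collapses.

```python
# Toy model of the lines-of-code example (assumptions mine, not IBM data):
# before the incentive, lines of code track productivity with some noise;
# once programmers are paid per line, independent padding swamps the signal.
import random
from statistics import correlation  # requires Python 3.10+

random.seed(1)
N = 5000
productivity = [random.gauss(0.0, 1.0) for _ in range(N)]

loc_before = [p + random.gauss(0.0, 0.5) for p in productivity]
loc_after  = [p + random.gauss(0.0, 0.5) + random.gauss(0.0, 3.0)  # padding
              for p in productivity]

print(f"correlation before targeting: {correlation(productivity, loc_before):.2f}")
print(f"correlation after targeting:  {correlation(productivity, loc_after):.2f}")
```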
Touch underlies the functioning of almost every tissue and cell type, says Patapoutian. Organisms interpret forces to understand their world, to enjoy a caress and to avoid painful stimuli. In the body, cells sense blood flowing past, air inflating the lungs and the fullness of the stomach or bladder. Hearing is based on cells in the inner ear detecting the force of sound waves.
Quote from: hamdani yusuf on 14/01/2020 04:49:02
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.

A fine example. Slightly off topic from universal morality, but I've always distinguished between production and management. Production workers should get paid per unit product since they have no other choice or control. The function of management is to optimise, so managers should be paid only from a profit share.

The IBM example is interesting since a line of code is not product but a component: if you can achieve the same result with less code, you have a more efficient product: the program or subroutine is the product.
Glyn Williams, Answered Aug 11, 2014

I personally define intelligence as the ability to solve problems. And while we often attempt to solve problems using conscious methods (visualize a problem, visualize potential solutions, etc.), it is clear from nature that problems can be solved without intent of any sort.

Evolutionary biology has solved the problem of flight at least 4 times, without a single conscious-style thought in its non-head.

Chess-playing computers can solve chess problems by iterating through all possible moves, again without a sense of self.

Consciousness, as it is usually defined, is a type of intelligence that is associated with the problems of agency. If you are a being and have to do stuff, then that might be called awareness or consciousness.
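A minimal sketch of the "iterate through all possible moves" point, using a tiny take-away game of my own choosing (chess's tree is far too large to enumerate here): the function solves every position by brute force, with no awareness involved.

```python
# Two players alternately remove 1-3 counters; whoever takes the last counter
# wins. The function "solves" every position by exhaustively trying all moves,
# with no sense of self anywhere in the process.
from functools import lru_cache

@lru_cache(maxsize=None)
def winning_move(counters: int):
    """Return a winning number of counters to take, or None if the position is lost."""
    for take in (1, 2, 3):
        if take > counters:
            continue
        if take == counters:                        # taking the last counter wins
            return take
        if winning_move(counters - take) is None:   # opponent is left with a loss
            return take
    return None

for n in range(1, 10):
    move = winning_move(n)
    print(f"{n} counters: {'take ' + str(move) if move else 'losing position'}")
```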
IQ Range ("ratio IQ")   IQ Classification
175 and over            Precocious
150-174                 Very superior
125-149                 Superior
115-124                 Very bright
105-114                 Bright
95-104                  Average
85-94                   Dull
75-84                   Borderline
50-74                   Morons
25-49                   Imbeciles
0-24                    Idiots
Moral rules are set to achieve some desired states in a reliable manner, i.e. they produce more desired results in the long run.

Quote: "In the broad and always disconcerting area of Ethics there seem to be two broad categories for identifying what makes acts moral: Deontology: Acts are moral (or not) in themselves: it's just wrong to kill or torture someone under most circumstances, regardless of the consequences. See Kant. Consequentialism: Acts are moral according to their consequences: killing or torturing someone leads to bad results or sets bad precedents, so (sic) we should not do it. Then there is Particularism: the idea that there are no clear moral principles as such."
https://charlescrawford.biz/2018/05/17/philosophy-trolley-problem-torture/

Even someone who embraces Deontology recognizes that there are exceptions to their judgement of some actions, as seen in the use of the word "most" instead of "all" circumstances. It shows that the moral value is not inherently attached to the actions themselves; it still depends on the circumstances, and the consequences are part of those.

All objections/criticisms of Consequentialism that I've seen so far make their points by emphasizing short-term consequences that contrast with the long-term overall consequences. If anybody knows of counterexamples, please let me know.
Quote from: hamdani yusuf on 22/01/2020 09:02:51
prominent moral authorities such as prophets, which presumably had higher moral standards than their peers.

Illegitimate presumption! Priests, politicians, philosophers, prophets, and perverts in general, all profess to have higher moral standards than the rest of us, but so did Hitler and Trump.
presumption /prɪˈzʌm(p)ʃ(ə)n/
noun
1. an idea that is taken to be true on the basis of probability: "underlying presumptions about human nature"
"By their deeds shall ye know them" (Matthew 7:16) is probably the least questionable line in the entire Bible.
I think that we can safely presume that many of their peers had lower moral standards.