Many years ago, when I started reading about Hinduism I came across a comment to the effect that it was much like the Church of England in that you could believe almost anything and be Hindu, or C of E. I have had to wait all these years for confirmation.
The only mention of "god" in a science forum should be in relation to mental disorders or psychotropic drugs.
Blaming science for the misuse of atomic energy is very much like blaming alcohol for drunkenness, or blaming God for the bigotry and hatred perpetrated in the name of religion.
Do science and religion have any common ground?
Quote: "Do science and religion have any common ground?"

Yes. Anthropology. Perhaps History (when considering the history being written and interpreted to the benefit of the religion).
If you read carefully you will find that I was not "blaming science".
I would maintain that science can only provide a basis and a methodology for understanding the physical world around us. I would maintain that there are important truths and issues that are outside the realm that science can explore -- not just outside its present incomplete scope, but beyond the capability of science to address....As far as I am concerned both morality and aesthetics are important aspects of life, and ones that by their very nature cannot be addressed by science. I am proud to be a scientist who is a believing Christian, and a member of an Anglican Church, of liberal Anglo-Catholic practice.
Morality is little more than the minimisation of harm, so it's something a computer could make pronouncements on which would be demonstrably superior to anything that could be derived from religion.
Aesthetics is also controlled by rules, but we don't yet know what those rules are or how they vary from person to person or how much they can be modified in a person over time. We do know, however, that the golden ratio has a substantial importance in visual art, and that comes about because of the Fibonacci sequence which is written through many living things and which is an indicator of healthy growth. We see the golden ratio many times in a beautiful face, and we also see it in the arrangement of components of beautiful images. In time, everything aesthetic will be accounted for in full by science, but the barrier to that is untangling it all from the complexity of how our brains function.
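The one checkable mathematical claim in that post - that the Fibonacci sequence is linked to the golden ratio - is easy to verify numerically. A minimal sketch (it verifies only the arithmetic, not any of the aesthetic claims built on it):

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio phi.
a, b = 1, 1                      # first two Fibonacci numbers
for _ in range(30):
    a, b = b, a + b              # advance the sequence

phi = (1 + 5 ** 0.5) / 2         # golden ratio, ~1.6180339887
print(b / a, phi)                # both print as ~1.6180339887
```

The convergence is rapid: after thirty steps the ratio agrees with phi to about twelve decimal places.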
Quote from: David Cooper on 28/03/2012 19:45:45
"Morality is little more than the minimisation of harm, so it's something a computer could make pronouncements on which would be demonstrably superior to anything that could be derived from religion."

Here you are stating your religious faith. The statement is made entirely without evidence or justification. If you want to take this position, you cannot then retreat into the "Science is morally neutral. Any evil that arises from science is just because humans misuse its results." position, as adherents of scientism are wont to do.
In fact, there is a large ongoing debate in the academic philosophy literature about whether or not "morality is little more than minimization of harm" (a position described as "utilitarianism"). There have been some very effective arguments put against this position.
Quote from: David Cooper on 28/03/2012 19:45:45
"Aesthetics is also controlled by rules, but we don't yet know what those rules are or how they vary from person to person or how much they can be modified in a person over time. We do know, however, that the golden ratio has a substantial importance in visual art, and that comes about because of the Fibonacci sequence which is written through many living things and which is an indicator of healthy growth. We see the golden ratio many times in a beautiful face, and we also see it in the arrangement of components of beautiful images. In time, everything aesthetic will be accounted for in full by science, but the barrier to that is untangling it all from the complexity of how our brains function."

This again is a statement of faith, and an almost mystical devotion to the "golden ratio". Yes, the Fibonacci sequence does arise in some simple models of healthy growth, and it is certainly seen directly in areas like sunflower seed patterns. But it is rather a long stretch from there to imagining that the golden ratio underpins a large chunk of aesthetics.
...I was setting out my position to invite further discussion, hence the lack of evidence and justification. I work in artificial intelligence and am building a system which will soon be using a morality law based almost entirely on minimising harm to calculate from scratch the rights and wrongs of all things. On paper it should work better than any other system of determining what is moral: taking "morality" from a holy book would inevitably result in the machine determining that you should be stoned to death for something trivial such as wearing clothes made from more than one kind of fibre. We aren't completely stupid, of course, so we don't generally follow religious laws religiously, for the same reason - they are imperfect to the point that they kill innocent people at the drop of a hat while protecting evil people, so it's obvious that we reject the stupid ones. But how are we making our judgements about which religious rules are sensible and which are plain barking? Well, we simply apply the real moral rule to each case and try our best to minimize harm....

Quote:
"In fact, there is a large ongoing debate in the academic philosophy literature about whether or not "morality is little more than minimization of harm" (a position described as "utilitarianism"). There have been some very effective arguments put against this position."

I'm too busy building the software system which will tidy up this god-awful world and do not have time to hunt through mountains of **** to find the stuff on the subject that may actually be worth reading (most philosophers being completely thick, writing screeds of stuff in fancy words which in reality say nothing), but I'm sure there must be some good ones out there who have been hidden by the sheer mass of morons.
If you can help point me towards these effective arguments, I will be very grateful to you as I would very much like to explore them - up until now I've found it virtually impossible to find any intelligent life on this planet capable of discussing the implications of the computational morality which will soon be unleashed on an unsuspecting world.
For now, I am only concerned with the issue of powerful arguments against utilitarianism.
It is rather pointless and disrespectful to be challenging the rest of your religious faith.
Philosophers have tended to base their arguments around utilitarianism on situations of moral dilemma: issues like do you rush to switch the points for a runaway train so that it will certainly mow down a single railway worker in your field of vision when you "know" that if you do not it will probably collide with another train on the main line and kill dozens?
Henry John McCloskey -- John Stuart Mill: A Critical Study and God and Evil
Bernard Williams and JJC Smart -- Utilitarianism: For and Against
Here, in summary, are some of the arguments:

-- minimizing the harm and maximizing the good do not always amount to the same thing.
-- it is not possible to minimize or maximize anything unless it can be quantified, and there is no uniquely privileged quantification of harm or good.
-- all judgements of harm or good are probably culturally tainted.
-- are we to see harm or good in terms of (1) our family? (the 'selfish gene' concept) (2) our 'tribe' or sub-culture? (3) our nation? (4) the whole of humanity? (5) the whole of the animal kingdom? or (6) the whole planet?
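The quantification point in that list can be made concrete with a toy sketch. The options, metrics, and numbers below are invented purely for illustration; the point is that two equally plausible aggregations of "harm" can rank the same pair of options in opposite orders, so there is no single privileged thing to minimize:

```python
# Toy illustration: two plausible quantifications of "harm" disagree.
# Each option distributes harm (arbitrary units) across three people.
option_a = [5, 5, 5]     # moderate harm, evenly spread
option_b = [0, 0, 12]    # less total harm, but concentrated on one person

def total_harm(harms):
    """Utilitarian aggregation: minimize the sum of harm."""
    return sum(harms)

def worst_case_harm(harms):
    """Maximin-style aggregation: minimize the worst-off person's harm."""
    return max(harms)

# The two metrics pick opposite options:
print(min([option_a, option_b], key=total_harm))       # option_b (12 < 15)
print(min([option_a, option_b], key=worst_case_harm))  # option_a (5 < 12)
```

Whichever metric the system's designer hard-codes is itself a moral choice that the minimization cannot justify.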
Here are two real-life examples of dilemmas which I believe quite clearly highlight the immorality and downright evil that can be associated with a utilitarian approach:

(1) A certain doctor is assigned to a concentration camp, where he knows that the inmates are all destined for the gas chamber. He decides that some of them should be thrown into ice water pools instead so that he can obtain reliable data about the onset and characteristics of human hypothermia. This remains the best available data, and it is still used by doctors and scientists today.
(2) A certain nation (i.e. government) has a large number of people condemned to execution. Note that I am not here discussing the morality or otherwise of capital punishment. Nor am I entering into politics as such -- I have a great admiration of this government for many of its other achievements. It adopts a policy of keeping these people alive on death row, and timing executions so that fresh body parts can be farmed at times convenient to meet the demands of transplant operations.
Quote from: damocles on 30/03/2012 05:16:02
"For now, I am only concerned with the issue of powerful arguments against utilitarianism."

That's good, because that's the bit that most interests me.

Quote:
"It is rather pointless and disrespectful to be challenging the rest of your religious faith."

It needn't be pointless - I can certainly cure you of your religion if you are rational.
Quote:
"Philosophers have tended to base their arguments around utilitarianism on situations of moral dilemma: issues like do you rush to switch the points for a runaway train so that it will certainly mow down a single railway worker in your field of vision when you "know" that if you do not it will probably collide with another train on the main line and kill dozens?"

Do religions offer you any guidance for such situations? I expect there will be something somewhere that can be twisted to fit, and something else that can be twisted to fit which will lead to the opposite action.
In reality, all we can do is calculate based on minimising harm. In the example above it is clear that the people in the train are not to blame (unlike other examples of this kind of thing where a large group of people are stupidly standing on the line and the question is whether you should switch the points and kill someone tied to the line who's being filmed for a movie on a piece of line which shouldn't have trains on it), whereas the railway worker is a representative of the system which has failed, so like a captain of a ship he might be seen as having a duty to take the hit if it comes to that. We then have to think, might he have a family? Perhaps all the people on the two trains are neo-Nazis, but that's unlikely. We have to guess based on what we know of railway workers and passengers in general, and the odds are overwhelmingly in the direction of making it better to kill the railway worker. If on the other hand we knew that the trains were indeed full of neo-Nazis, it might well be worth saving the railway worker, even at the expense of the two train drivers. That's a tough one to calculate, but the calculations could certainly be done if you put sufficient time into it. There wouldn't be enough time for humans to calculate it properly if this scenario were suddenly sprung on them, but artificial intelligence will certainly be able to do it.
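The kind of calculation described in that post amounts to an expected-harm comparison. A minimal sketch follows; every probability and harm weight in it is an invented placeholder, not a value the poster supplied, and the hard part the surrounding discussion argues over is precisely where such numbers would come from:

```python
# Minimal expected-harm sketch of the runaway-train decision.
# All probabilities and harm weights are invented for illustration only.

def expected_harm(outcomes):
    """Sum of probability * harm over the possible outcomes of an action."""
    return sum(p * harm for p, harm in outcomes)

# Action 1: switch the points -> the lone railway worker is certainly killed.
switch = [(1.0, 1)]                  # one death with certainty

# Action 2: do nothing -> probable collision killing dozens.
do_nothing = [(0.8, 40), (0.2, 0)]   # assumed 80% chance of 40 deaths

actions = {"switch": switch, "do nothing": do_nothing}
best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best, {a: expected_harm(o) for a, o in actions.items()})
# -> "switch" is chosen (expected harm 1.0 vs 32.0)
```

The arithmetic is trivial; everything contentious lives in the chosen probabilities and in the decision to weight every death equally.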
That's false, the problem here being ambiguity. There are two possible meanings of "good" which can be applied here, and neither of them can make the above argument valid. What makes it appear valid to some people is that they are using the ambiguity between the two meanings to confuse themselves, and that leads to an incorrect conclusion. Good (the pure meaning) is at zero on the scale: it is simply the absence of bad. ... (snip) ...

Quote:
"-- it is not possible to minimize or maximize anything unless it can be quantified, and there is no uniquely privileged quantification of harm or good."

This is why probabilities come into it. You can try to do the same harm to two people, and one may be much more damaged by it than the other, while it may be impossible for anyone to tell which was more damaged. All we can do is attempt to do a statistical analysis ...(snip)...
Quote:
"-- all judgements of harm or good are probably culturally tainted."

Only if you allow yourself to be misled by cultural values rather than being fully impartial. Machines calculating morality will not be open to any such bias, and some people are pretty good at eliminating bias from their thinking too, though it's hard for any humans to be sure that they've managed to eliminate it completely.
Quote:
"-- are we to see harm or good in terms of (1) our family? (the 'selfish gene' concept) (2) our 'tribe' or sub-culture? (3) our nation? (4) the whole of humanity? (5) the whole of the animal kingdom? or (6) the whole planet?"

(1) The selfish gene idea is an explanation of natural selection and is not intended to be misused as any kind of morality - evolution is vicious. (2 & 3) These are primarily extensions of family, although it's complicated by migration. (4) This would allow all animal cruelty. (5) Yes, but we have to allow for anything else that could be harmed, such as a sentient machine or plant, and we should also consider the possibility that pain could be generated in a chemistry experiment. (6) The planet is probably not capable of being harmed, but if we discovered that all its material down to a depth of several metres was in pain because of Radio 1 being broadcast on a particular frequency, we would have to move that signal or shut the station down.
Quote:
"Here are two real life examples of dilemmas which I believe quite clearly highlight the immorality and downright evil that can be associated with a utilitarian approach:

(1) A certain doctor is assigned to a concentration camp, where he knows that the inmates are all destined for the gas chamber. He decides that some of them should be thrown into ice water pools instead so that he can obtain reliable data about the onset and characteristics of human hypothermia. This is the best available data that is still used by doctors and scientists today."

Which dilemma? Should we use the data? Since it exists, yes - the victims themselves would want us to if it helps to save others. Should it have been collected, though? If it was a less awful way for them to die and that doctor couldn't directly save them from death, then yes. If there was a chance that they'd survive many such experiments and might live long enough to survive the war, then yes again, even more emphatically.
Quote:
"(2) A certain nation (i.e. government) has a large number of people condemned to execution. Note that I am not here discussing the morality or otherwise of capital punishment. Nor am I entering into politics as such -- I have a great admiration of this government for many of its other achievements. It adopts a policy of keeping these people alive on death row, and timing executions so that fresh body parts can be farmed at times convenient to meet the demands of transplant operations."

Again there is no actual dilemma here - if they are to be killed, they might as well be used to save others. The problem there is that many of them have done nothing wrong and are being killed for political reasons (if you're talking about China), and indeed it's possible that the system is so corrupt that people are being sentenced to death precisely because their organs will be compatible with a rich person who needs them.
Quote from: David Cooper on 30/03/2012 20:56:01
"It needn't be pointless - I can certainly cure you of your religion if you are rational."

Now that is brash and disrespectful, and far from certain. But I suspect that your definition of 'rational' amounts at bottom to 'sharing my views', so it is a fairly safe statement.
Quote:
Quote: "Philosophers have tended to base their arguments around utilitarianism on situations of moral dilemma: issues like do you rush to switch the points for a runaway train so that it will certainly mow down a single railway worker in your field of vision when you "know" that if you do not it will probably collide with another train on the main line and kill dozens?"
"Do religions offer you any guidance for such situations? I expect there will be something somewhere that can be twisted to fit, and something else that can be twisted to fit which will lead to the opposite action."

My flavour of religion does indeed offer good guidance, which would not be based around 'twisting' or any interpretation or misinterpretation of a scriptural text.

...There is an alternative. In reality, there is a wise and good God overseeing this whole situation. I would avoid committing the murder of the man that I can see, and I have recourse to prayer that the main line be clear. If it is not, and a tragedy ensues, then I must mourn with the victims' families, and wear any blame they would choose to heap on me.
All of this is predicated on a one-dimensional conception of good and harm. Most philosophers do not see the issues in one-dimensional terms, and that is probably why you are seeing them as spouting nonsense or worse. How are you proposing to place the values of freedom, health, material comfort, etc. on a single numerical scale, especially when they conflict at times? Are you suggesting that statistics of how many people choose slavery with comfort as against freedom with hardship might be a sort of means of placing these disparate goods on a single numerical scale?
There is no yardstick available to measure what constitutes 'fully impartial'. The very notion of impartiality is loaded with cultural bias, and means very different things to different people. Machines calculating morality will certainly be open to cultural bias, unless you are expecting some sort of miraculous breakthrough in the AI field.
At bottom, machines inevitably have to be provided with heuristic guidelines by human designers, either in terms of values to put on different goods and harms, or judgement criteria for calculating such values, or, with learning machines, heuristics for which outcomes to enhance and which to diminish. These heuristics cannot be totally impartial -- they necessarily have embedded in them cultural bias, whether conscious or unconscious.
I would be interested to be enlightened about recent developments in AI if this is not the case. Isaac Asimov -- writing at a very primitive time in the development of AI -- was up front about these issues when he devised his three laws of robotics. I cannot remember the detail, but I seem to recall that one of his short stories was about malfunction of a robot when placed in a situation where two of these laws were in conflict. Perhaps another reader of this can help out with a reference.
I think that you have rather missed the point here. Perhaps it is more easily understood in terms of: does harm to a human have a constant factor on your numerical scale, or is one family member worth two outsiders, or 3 foreigners, or perhaps even four infidels? I suspect that you would say that all should be equal -- and that is expressing a Western liberal bias.
Most human beings come from cultural backgrounds that might not agree with this, and while nearly all in our prosperous Western democracies would pay lip service to it, the attitude is often very different when it comes down to practicalities.
And the "all humans should have equal consideration" notion owes nothing to rationality; it might owe something to Christianity.
And how do we place relative values on the different stages of life? Are there different harm factors for neonates? children? young adults? the very elderly?
Most people, and the American courts who tried them in particular, judged the German doctors involved in this sort of thing as war criminals and monsters, and I believe rightly so.
You are right in your conclusion about what the utilitarian position would be. I believe that position to be quite evil.
Using the data is quite another matter -- if it exists, and is judged useful, then it is only rational and scientific that it should be used. Science needs to draw on all available information. A suggestion has been made, which I agree with, that if such data is used, it should be accompanied by text expressing revulsion when it is cited in a scientific article. See http://www.garfield.library.upenn.edu/essays/v14p328y1991.pdf, for example.
It is interesting that you see "nothing wrong" in the actions of political prisoners.
You have been attached to a society where freedom of speech is seen as a right. That is a culturally biased position.
It is at least arguable, from a utilitarian point of view, that some of these political prisoners have attempted to undermine the rather fragile cohesion of a society where hardship and starvation are never far away, and that speaking out publicly is an action that might well result in huge disruption and hardship and great harm to the society and the poorer people in it. Once again, I see your position, which I believe accurately reflects utilitarian principles, as both immoral and evil.
Quote:
"It is at least arguable, from a utilitarian point of view, that some of these political prisoners have attempted to undermine the rather fragile cohesion of a society where hardship and starvation are never far away, and that speaking out publicly is an action that might well result in huge disruption and hardship and great harm to the society and the poorer people in it. Once again, I see your position, which I believe accurately reflects utilitarian principles, as both immoral and evil."

Whereas most Chinese people will, like me, see your position as being both immoral and evil. The idea that the mass-murdering autocracy that runs China provides the best available governance for that country is ridiculous. It's corrupt from top to bottom, it loads a substantial minority with wealth while neglecting the majority, and it's taking China down the wrong path by copying the West's failed model of development, which is further painting us all into a corner. It isn't quite that simple, of course, because they are also doing a lot of good things, and switching to democracy could result in very bad governance too. In many ways, their system may be superior to ours in that only highly educated people can become members of the party in power, with the result that they run an economy better than we do. If they could add proper morality into the required qualifications for party members, they might even end up with a system that's better than our imperfect democracy. But it will all be academic soon - artificial intelligence will soon out-think us in every way and leave us with no role in politics other than to agree to what the machines suggest once we've checked that their analysis is correct.
Quote from: David Cooper:
"Whereas most Chinese people will, like me, see your position as being both immoral and evil. The idea that the mass-murdering autocracy that runs China provides the best available governance for that country is ridiculous. ..."

I do want to correct the notion that you are referring to "my position".
The passage you have quoted was being put forward as a possible argument from a utilitarian point of view, and could be posited about political prisoners under any hypothetical Government that was squeaky clean trying to run a poor and socially fragile country. The assertion that the present Chinese government does not fit this criterion is quite irrelevant to the underlying philosophical point, even if it is true (and I suspect that it may be).
The other thing I want to say, especially in view of Geezer's last posting, is that it was not my intention to describe David Cooper personally as "immoral" or "evil", but to apply those labels to some of the outcomes of the extreme utilitarian approach he was describing.
Apart from that, I think that as far as exchanges between me and David are concerned, this debate has gone about as far as it can.