Is there a universal moral standard?


Offline Halc

Re: Is there a universal moral standard?
« Reply #240 on: 08/10/2019 14:44:14 »
Quote from: David Cooper on 07/10/2019 22:59:22
Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
If you don't have output from the sentience, it has no role in the system.
With that I agree, but you are not consistent with this model.

Quote
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.
OK, this is different. If it is part of the physical system, why can't it play a role in the system?  What prevents it from having an output?
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'.  I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
Logged
 



Offline David Cooper

Re: Is there a universal moral standard?
« Reply #241 on: 08/10/2019 22:12:00 »
Quote from: hamdani yusuf on 08/10/2019 09:50:41
Your calculation of harm:benefit here has nothing to do with feelings.

It's entirely about feelings. The benefits are all feelings, and the harm is all feelings. If you prevent some of the benefits by removing the people who would have received them, you have worse harm:benefit figures as a result of their loss. Killing them humanely doesn't prevent that loss. Let's say that with our thousand people on the island, the average one of them gets 90 units of pleasure and 10 units of suffering in their life, so we have 10,000 units of suffering and 90,000 units of pleasure. By using a ":" I accidentally made it look like a ratio, but that isn't the right way to crunch the numbers. What you have to do is subtract the 10,000 from the 90,000 to get the score of 80,000. That is what the lives of those 1000 people are worth. If you kill 999 people humanely, you reduce the harm figure for each of those 999 people from 10 to 0, and you reduce the pleasure figure from 90 to 0 for each of them too. The one survivor is happier, and he may now have a suffering value of 1 and a pleasure value of 500, which we can turn into a score of 499. The quality of life for the new population is 499, but it used to be 80,000. That's a very immoral change indeed.

Note that if we could get that one surviving individual's pleasure up to 100,000, that would change the situation, but it would be hard to achieve such figures in any real scenario unless we're dealing with one human versus 999 mosquitoes.
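A rough sketch of the sum being described, in Python (it uses only the figures from this example; the life_score function and its name are illustrative, not a standard utilitarian formula):

def life_score(pleasure, suffering):
    # Net worth of one life: pleasure minus suffering (a difference, not a ratio).
    return pleasure - suffering

# 1000 islanders, each averaging 90 units of pleasure and 10 of suffering.
island = [life_score(90, 10) for _ in range(1000)]
print(sum(island))    # 80000

# Kill 999 of them humanely: their pleasure and suffering both drop to 0,
# while the lone survivor ends up with 500 pleasure and 1 suffering.
after = [life_score(0, 0) for _ in range(999)] + [life_score(500, 1)]
print(sum(after))     # 499 -- far below 80000, hence the immoral change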

Quote
Moral rules based on pleasure and suffering as their ultimate goals are vulnerable to reward hacking (such as drugs) and exploitation by utility monsters.

People don't want to be put on drugs by force. Those who wish to do that to others have a moral obligation to do it to themselves instead and allow others to make the same choice. As it happens, a lot of people do make that choice for themselves, and it doesn't appear to give them better lives. And utility monsters are not doing the sums correctly.

Quote
We know that killing a random person is immoral, even if we can make sure that the person doesn't feel any pain while dying. There must be a more fundamental reason to reach that conclusion, other than minimising suffering, because no suffering is involved here.

You've focused on one half of the sum and ignored the other. That killer is not taking into account the loss of pleasure that results from his actions.
Logged
 

Offline David Cooper

Re: Is there a universal moral standard?
« Reply #242 on: 08/10/2019 22:15:55 »
Quote from: evan_au on 08/10/2019 10:30:56
If you grew up with Scottish winters, standing in the rain is likely to give you hypothermia.
If you grew up in Darwin (Australia), standing in the rain cools you down a bit, and the water will evaporate fairly soon anyway.

That is taken into account before the inputs go into the black box. The feelings relate to the total and not to the individual components. The black box remains superfluous.
Logged
 

Offline David Cooper

Re: Is there a universal moral standard?
« Reply #243 on: 08/10/2019 22:32:57 »
Quote from: Halc on 08/10/2019 14:44:14
Quote from: David Cooper on 07/10/2019 22:59:22
Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
If you don't have output from the sentience, it has no role in the system.
With that I agree, but you are not consistent with this model.

I've been over this stuff a thousand times in conversations like this and I'm being consistent throughout. The models keep changing to illustrate different points and what's said about them has to change to match. I don't know what you think isn't consistent, but if you want to chase it down you'll find that it isn't there.

Quote
Quote
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.
OK, this is different. If it is part of the physical system, why can't it play a role in the system?  What prevents it from having an output?

Nothing prevents it from having an output. The problem is that the output is dictated by the input in such a way that the experience of feelings is superfluous. That doesn't mean an experience of feelings isn't part of the chain of causation, but the system would work just as well without it. And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there. If you attempt to put the information system inside the black box, we can again isolate the part where the feelings are experienced and put that into another black box within the first one, and again we see that the information system that generates the claims about feelings does so without seeing any evidence for them existing.

Quote
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'.  I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.

I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent, unless there's something spectacular going on in the physics which science has not yet uncovered. There is no way to integrate sentience into computers in any way that enables the information system to know what is being experienced by anything that might be feeling feelings. It simply can't be done. If feelings are real in humans, something truly weird is going on.
Logged
 

Offline hamdani yusuf (OP)

Re: Is there a universal moral standard?
« Reply #244 on: 09/10/2019 02:01:07 »
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts to approximate a more fundamental moral rule. The real fundamental moral rule must apply universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.
Logged
Unexpected results come from false assumptions.
 



Offline Halc

Re: Is there a universal moral standard?
« Reply #245 on: 09/10/2019 02:47:21 »
Quote from: David Cooper on 08/10/2019 22:32:57
And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there.
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings. Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.

On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.

Quote
Quote
I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,
Cannot competent?  That seems a typo, but I cannot guess as to what you meant there.
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection.  Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.

Quote
unless there's something spectacular going on in the physics which science has not yet uncovered.
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.  In reality, there's more than one, but a serial line would do in a pinch.  Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.

Quote from: hamdani yusuf on 09/10/2019 02:01:07
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game.
The set of all possible chess states does not represent a game being played. It wouldn't be an eternal structure if it did.
If the states have any sort of property of being better or worse than a different state, then there are exactly 3 states: Ones where white can force a win, ones where black can, and the remaining states.  The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.
So no, there are no values on the pieces or rules of thumb in the set I described. Those 3 states at best, and not even those if the concept of 'win' is not part of the various final positions (the ones that don't have a subsequent position).
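To make the three-class point concrete, here is a minimal sketch in Python using a toy game (tic-tac-toe) that is small enough to enumerate; real chess is not, which is the only reason the classification stays hidden from us. The solver labels every position as a forced win for X, a forced win for O, or a draw.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is completed, otherwise None.
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, to_move):
    # Classify a position: 'X' or 'O' if that side can force a win, else 'draw'.
    w = winner(board)
    if w:
        return w
    empties = [i for i, cell in enumerate(board) if cell == '.']
    if not empties:
        return 'draw'
    other = 'O' if to_move == 'X' else 'X'
    outcomes = {solve(board[:i] + to_move + board[i + 1:], other) for i in empties}
    if to_move in outcomes:   # some move forces a win for the side to move
        return to_move
    if 'draw' in outcomes:    # otherwise hold the draw if one is available
        return 'draw'
    return other              # every move loses

print(solve('.' * 9, 'X'))    # prints 'draw': with full lookahead the game is trivial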
« Last Edit: 21/03/2024 19:38:03 by Halc »
Logged
 

Offline hamdani yusuf (OP)

Re: Is there a universal moral standard?
« Reply #246 on: 09/10/2019 07:10:55 »
Quote from: Halc on 09/10/2019 02:47:21
The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.
In some cases we can, especially when the possible moves ahead are limited. That's why in high-level games, grandmasters often resign while they still have several moves left before inevitably falling into a checkmate position.
Logged
Unexpected results come from false assumptions.
 

Offline David Cooper

Re: Is there a universal moral standard?
« Reply #247 on: 09/10/2019 20:49:18 »
Quote from: hamdani yusuf on 09/10/2019 02:01:07
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts to approximate a more fundamental moral rule. The real fundamental moral rule must apply universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.

The more fundamental rule is that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king, and he will be just as happy as if he had annihilated the other side without losing a piece of his own.

If you think my method for calculating morality doesn't work, show me an example of it failing.
Logged
 

Offline David Cooper

Re: Is there a universal moral standard?
« Reply #248 on: 09/10/2019 21:25:16 »
Quote from: Halc on 09/10/2019 01:42:40
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings.

Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.

Quote
Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.

My stance is that if feelings are real, something's going on in the brain which is radically different from anything science knows about when it comes to computation, because feelings are incompatible with what computers do.

Quote
On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.

Not at all. The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.

Computers don't have a "read qualia" instruction, but if they did, it would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box. That is the big disconnect. http://magicschoolbook.com/consciousness - this illustrates the problem, and I've been trying to find an error in this for many years.

For what it's worth, I think sentience (and consciousness as a whole) is the most fundamental thing that we're dealing with here, not least because it's the one thing about our universe that can't be a simulation. I think that the way we do processing in computers is not the only way that computation can be done and that there must be some alternative method in which sentience is at its core. It is necessarily part of physics, but it has not yet been identified. Tracing back the claims that we generate to see what evidence they're based on is the way to explore this, but it may be hard to do that without destroying the thing whose workings we're trying to study. If everything else is a simulation, the mechanism may be very carefully hidden, but it has to show up somewhere because it is part of the chain of causation.

Quote
Quote
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,
Cannot competent?  That seems a typo, but I cannot guess as to what you meant there.

"cannot be competent" - a word went missing somehow.

Quote
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection.  Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.

We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience? Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that? If we run that information on a Chinese Room processor, we find that there's no place for feelings in it. Where is it reading the feelings and how is it generating data to document that experience of feelings? We can stick a black box into it in which the feelings are felt, and then we can go into that black box to find another information system with a black box in it where the feelings are felt, and then when we go into that box we find another black box... It's black boxes all the way down forever.

You're one of the most rational people I've encountered in my search for intelligent life on this planet, so maybe you'll be able to get your head round the problem and then be able to start looking for solutions in the right place. We're looking for a model that makes sense of sentience by showing its role and by showing how data is generated to document the experience of feelings. With computation as we know it, there is no way to make such a model. We're missing something big.

Quote
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.

How do you know what the output from the box means? How does the data system attribute meaning to that signal? If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that. The information in the file was not created by anything in the black box, and whatever it was that created that data has no way of knowing anything about feelings, so the claims generated by mapping data in that file to input from that port are nothing more than fiction.
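A minimal sketch of that lookup model, in Python (the table and names below are invented for illustration, not anyone's actual architecture). The point is that the claim is produced from a number plus a pre-written mapping, with nothing in the chain ever having access to the feeling itself:

# The table stands in for the "file": it was written in advance by whoever
# built the system, not by anything inside the black box.
PORT_MEANINGS = {
    0: "no feeling",
    1: "pain",
    2: "pleasure",
}

def report_feeling(port_value):
    # The claim depends only on the number and the table; nothing here has
    # any access to whatever is (or isn't) actually being felt.
    label = PORT_MEANINGS.get(port_value, "unknown signal")
    return "I am experiencing " + label + "."

print(report_feeling(1))   # "I am experiencing pain." -- the same sentence is
                           # produced whether or not any pain exists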

Quote
In reality, there's more than one, but a serial line would do in a pinch.

Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language? There's an information processing system in the black box, and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them to it and creates an assertion about something which it cannot know anything about.

Quote
Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.

Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all? How does it construct the data that documents this experience of feeling, and where does it ever see the evidence that the feeling is in any way real?
Logged
 



Offline hamdani yusuf (OP)

Re: Is there a universal moral standard?
« Reply #249 on: 10/10/2019 08:02:30 »
Quote from: David Cooper on 09/10/2019 20:49:18
The more fundamental rule is that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king, and he will be just as happy as if he had annihilated the other side without losing a piece of his own.
Yes. It's written in the rules of the game. People tend to be more emotional when they are dealing with anthropomorphized objects, such as chess pieces. I don't see something like that in other games like Go, where the pieces are not anthropomorphized.

Quote from: David Cooper on 09/10/2019 20:49:18
If you think my method for calculating morality doesn't work, show me an example of it failing.

Quote
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.

https://en.wikipedia.org/wiki/Utilitarianism#Criticisms



Quote
The thought experiment
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.[2]

This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]

The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.

For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.

It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]

History
Robert Nozick, a twentieth century American philosopher, coined the term "utility monster" in response to Jeremy Bentham's philosophy of utilitarianism. Nozick proposed that accepting the theory of utilitarianism causes the necessary acceptance of the condition that some people would use this to justify exploitation of others. An individual (or specific group) would claim their entitlement to more "happy units" than they claim others deserve, and the others would consequently be left to receive fewer "happy units".

Nozick deems these exploiters "utility monsters" (and for ease of understanding, they might also be thought of as happiness hogs). Nozick poses utility monsters justify their greediness with the notion that, compared to others, they experience greater inequality or sadness in the world, and deserve more happy units to bridge this gap. People not part of the utility monster group (or not the utility monster individual themselves) are left with less happy units to be split among the members. Utility monsters state that the others are happier in the world to begin with, so they would not need those extra happy units to which they lay claim anyway.
https://en.wikipedia.org/wiki/Utility_monster
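A minimal sketch of the two aggregation rules contrasted in the quoted passage, in Python with made-up utility numbers (the function names are illustrative): total utilitarianism sums everyone's utility, Rawls' maximin scores a group by its worst-off member, and each rule has a monster tailored to it.

def total_utility(utilities):
    return sum(utilities)   # classic total utilitarianism

def maximin(utilities):
    return min(utilities)   # Rawls: the group scores as its worst-off member

ordinary_people = [10, 10, 10]

# The "happy" monster converts resources into huge utility, so the sum is
# maximised by giving it everything; maximin ignores it once it is no longer
# the worst off.
with_happy_monster = ordinary_people + [100]
print(total_utility(with_happy_monster))   # 130
print(maximin(with_happy_monster))         # 10

# Maximin's own monster: a being so badly off that it stays the minimum no
# matter how many resources are poured into it.
with_unhappy_monster = ordinary_people + [-1000]
print(maximin(with_unhappy_monster))       # -1000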
Logged
Unexpected results come from false assumptions.
 

Offline David Cooper

Re: Is there a universal moral standard?
« Reply #250 on: 10/10/2019 20:29:41 »
Quote from: hamdani yusuf on 10/10/2019 08:02:30
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.

I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.
 
Quote
The thought experiment
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.

No, it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think: we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
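As a rough sketch of why the fear term can flip the comparison (Python, with numbers invented purely to show the structure of the argument, not to measure anything):

humans = 1000
monsters = 1000
human_net = 80        # net pleasure-minus-suffering per human life
monster_net = 8000    # 100 times more, as in the thought experiment

status_quo = humans * human_net              # 80,000
naive_replacement = monsters * monster_net   # 8,000,000 if fear is ignored

# A population installed by wiping out its predecessors knows the same can be
# done to it; suppose that fear costs each monster most of its advantage.
fear_penalty = 7990                          # invented figure
with_fear = monsters * (monster_net - fear_penalty)   # 10,000 -- below the status quo

print(status_quo, naive_replacement, with_fear)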

Quote
[1] Nozick writes:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.[2]

No. Utilitarian theory applied correctly does not allow that because it actually results in a hellish life of fear for the utility monsters.

Quote
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]

When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.

Quote
The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.

That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy because most philosophers are shoddy thinkers who fail to take all factors into account).

Quote
For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.

I don't know what that is, but it isn't utilitarianism because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.

Quote
It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]

If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
Logged
 

Offline Halc

Re: Is there a universal moral standard?
« Reply #251 on: 11/10/2019 05:34:37 »
Quote from: David Cooper on 09/10/2019 21:25:16
Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of the ability to imitate human intelligence rather than feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
Secondly, no living thing's mind works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.

- - -

Concerning the way we've been using the term 'black box'.  You are describing a white box since you are placing the feelings of the sentience in the box.  A black box has no description of what is in the box, only a description of inputs and outputs.  A black box with no outputs can be implemented with an empty box.

Quote
The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
If the inputs and outputs are identical, the box can be implemented as a pass-through box, which is indeed superfluous unless bypass is not an option.  The phone lines in my street work that way, propagating signals from here to there with the output being ideally the same as the input.
Those lines are not superfluous because my phone would not work if you took them away.  You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs.  If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.

Quote
it would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box.
The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.

Quote
http://magicschoolbook.com/consciousness - this illustrates the problem, and I've been trying to find an error in this for many years.
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples of many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.

Quote
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot [be] competent,
OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims?  I don't think you meant to say that either, but that's how it comes out now.  The claims (the posts on this site) are output by the information system, right?  What else produces them? Maybe you actually mean it.

Quote
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?
Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.

Quote
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?
You don't know where the whole thing is?
Neurologists say that most of the basic emotions we feel (pleasure, fear and such) are processed in the limbic system, so most creatures don't feel them. Various kinds of qualia are handled in different places.  Pain in particular seems not specific to any subsystem, so 'whole thing' (not just brain) is a pretty good description. If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.

Quote
If we run that information on a Chinese Room processor, we find that there's no place for feelings in it.
The Chinese room models a text-only I/O.  A real human is not confined to a text-only stream of input.  It makes no attempt to model a human.  If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.

Quote
With computation as we know it, there is no way to make such a model. We're missing something big.
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.

Quote
Quote
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.
How do you know what the output from the box means?
I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and v-v. It violates the rules of the data system, leaving the person blind and deaf.

Quote
How does the data system attribute meaning to that signal?
Same way my computer attributes meaning to the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.

Quote
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.
Look up a file? My, you sure know a lot more about how it works than I do.

Quote
Quote
In reality, there's more than one, but a serial line would do in a pinch.
Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?
You tell me. You're the one that compartmentalizes it into an isolated box like that. Not my model at all.

Quote
There's an information processing system in the black box
Then it isn't a black box.
Quote
and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them too it and creates an assertion about something which it cannot know anything about.
Again, your model, not mine. I have no separation of information system and the not-information-system.

Quote
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?
There's no reading of something outside the information system. My model only has the system, which does its own feeling.
Quote
How does it construct the data that documents this experience of feeling
Sounds like you're asking how memory works. I don't know. Not a neurologist.
Quote
where does it ever see the evidence that the feeling is in any way real?
I (the information system) have subjective evidence of my feelings.
Logged
 

Offline hamdani yusuf (OP)

Re: Is there a universal moral standard?
« Reply #252 on: 11/10/2019 10:13:54 »
Quote from: David Cooper on 10/10/2019 20:29:41
No, it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think: we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
I think we need to be clear about our definitions of the terms we use in this discussion, since subtle differences may lead to frustrating disagreements. I want to avoid implicit assumptions and taking for granted that our understanding of a term is the same as the other participants'.
Who do you mean by anyone? Humans? What about animals and plants?
Why is pleasure good while pain is bad? What about an inability or reduced ability to feel pain or pleasure?
How many fewer children is considered acceptable?
Logged
Unexpected results come from false assumptions.
 



Offline David Cooper

Re: Is there a universal moral standard?
« Reply #253 on: 11/10/2019 22:25:26 »
Quote from: Halc on 11/10/2019 05:34:37
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of the ability to imitate human intelligence rather than feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.
Secondly, no living thing's mind works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.

In principle, a system with no feelings could pretend to have feelings sufficiently well to pass the Turing Test. It would gradually learn what it has to claim to be feeling in any situation so as not to be caught out.

We do actually process streams of instructions in our head when doing careful processing, like maths, or when making food by following a recipe. The algorithms that we apply there are directly replicable on computers.

Quote
Concerning the way we've been using the term 'black box'.  You are describing a white box since you are placing the feelings of the sentience in the box.  A black box has no description of what is in the box, only a description of inputs and outputs.  A black box with no outputs can be implemented with an empty box.

Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.

Quote
Those lines are not superfluous because my phone would not work if you took them away.  You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs.  If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.

First, the outputs are not the same as the inputs: there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced. Then there's the bit about your phone functioning. What evidence do we have that sentience is functioning in the box? Is there information coming out of the box that says so? If so, how is that data constructed? Do we have an information system inside the box creating it, or do we just have a signal coming out of the box whose meaning is asserted for it by an information system on the outside which cannot know if its claims are true? If the latter, the data is incompetent. If the former, then there's an information system inside the box (now turning white) and we're adding a new black box on the inside to hold the part of the system which we can't model.

Quote
The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.

The whole point of the black box is to draw your attention to the problem. If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience. To make a proper model of sentience we have to eliminate the black box, but no one has ever managed to do so because they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".

Quote
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples of many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.

Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.

Quote
OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims?  I don't think you meant to say that either, but that's how it comes out now.  The claims (the posts on this site) are output by the information system, right?  What else produces them? Maybe you actually mean it.

That is predicated on the idea that the brain works like a computer, processing data in ways that science understands. If the claims coming out of my head about feelings are competent, some other kind of system is putting that data together in a way that science has yet to account for.

Quote
Quote
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?
Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.

That isn't good enough. The whole point is that the only way to interpret that output is to map baseless assertions to it, unless the output is already coming in the form of data that the external data system can understand, but if that's the case, we need to see the information system that constructed that data and to model how it knew what the output from the sentient thing means.

Quote
Quote
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?
You don't know where the whole thing is?

I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it. It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.

Quote
If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.

You've only found it once you've found the interface and seen how the data system knows that the data it's generating is true.

Quote
The Chinese room models a text-only I/O.  A real human is not confined to a text-only stream of input.  It makes no attempt to model a human.  If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.

A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete. It cannot handle actual feelings, but can handle data that represents feelings. A piece of paper with a symbol on it can represent a feeling, but nothing there is feeling that feeling. We need to model actual feelings, and that's something that science cannot yet do in any way that enables them to be detected.

Quote
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.

We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience? Waving at something complex isn't good enough. You have no model of sentience, but we do have models of neural nets which are equivalent to running algorithms on conventional computers.

Quote
Quote
How do you know what the output from the box means?
I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and v-v. It violates the rules of the data system, leaving the person blind and deaf.

If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.

Quote
Quote
How does the data system attribute meaning to that signal?
Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.

The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse. If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it it has to use a language. If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.

Quote
Quote
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.
Look up a file? My, you sure know a lot more about how it works than I do.

That's exactly the point. You see room for something impossible in the places where you don't know what's going on. I understand what's going on throughout the whole system, apart from the place where the magic is needed to complete the model.

Quote
Quote
Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?
You tell me.  You're the one that compartmentalizes it into an isolated box like that. Not my model at all.

Your model works on magic. I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.

Quote
Quote
There's an information processing system in the black box
Then it isn't a black box.

We want to explain the magic component, so we break it open and it becomes a white box, but it then contains a black box where the magic component resides. We can go on opening an infinite chain of black boxes and watch them turn white, but there will always be another black box containing the magic component which you can't model.

Quote
Again, your model, not mine. I have no separation of information system and the not-information-system.

And that's how you fool yourself into thinking you have a working model, but it runs on magic. The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it? It can't. That's where science is hopelessly lost. Our current understanding of computation is not compatible with sentience.

Quote
Quote
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?
There's no reading of something outside the information system. My model only has the system, which does its own feeling.

And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true? This is where there's a gap in your knowledge a mile wide, and you need to fill that.

Quote
Quote
How does it construct the data that documents this experience of feeling
Sounds like you're asking how memory works. I don't know. Not a neurologist.

I'm asking for a theoretical model. Science doesn't have one for this.

Quote
Quote
where does it ever see the evidence that the feeling is in any way real?
I (the information system) have subjective evidence of my feelings.

Show me the model.
Logged
 

Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2876
  • Activity:
    0%
  • Thanked: 38 times
Re: Is there a universal moral standard?
« Reply #254 on: 11/10/2019 22:52:52 »
Quote from: hamdani yusuf on 11/10/2019 10:13:54
Who do you mean with anyone? human? what about animals and plants?

If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).

Quote
Why pleasure is good while pain is bad?

They are just what they are. One is horrible and we try to avoid it, while the other is nice and we seek it out, with the result that most people are now overweight due to their desire to eat delicious things.

Quote
what about inability/reduced ability to feel pain or pleasure?

What about it? Each individual must be protected by morality from whatever kinds of suffering can be inflicted on it, and that varies between different people as well as between different species.

Quote
How much fewer children is considered acceptable?

Imagine that you have to live all the lives of all the people and utility monsters. They are all you. With that understanding in your head, you decide that you prefer being utility monsters, so you want to phase out people and replace them. You also have to live the lives of those people, so you need to work out how not to upset them, and the best way to do that is to let the transition take a long time so that the difference is too small to register with them. For a sustainable human population, each person who has children might have 1.2 children. That could be reduced to 1.1 and the population would gradually disappear while the utility monsters gradually increase in number. Some of those humans will realise that they're envious of the utility monsters and would rather be them, so they may be open to the idea of bringing up utility monsters instead of children, and that may be all you need to drive the transition. It might also make the humans feel a lot happier about things if they know that a small population of humans will be allowed to go on existing forever - that could result in better happiness numbers overall than having them totally replaced by utility monsters.
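
As a rough illustration of how gently such a transition could run, here's a toy calculation (Python; the starting populations and per-generation ratios are invented purely for illustration, not claims about real demographics):

Code: [Select]
# Toy sketch of a gradual phase-out: one population shrinking slightly each
# generation while another grows. All numbers are invented for illustration.
humans = 1_000_000
monsters = 1_000
human_ratio = 0.95    # hypothetical per-generation multiplier, slightly below replacement
monster_ratio = 1.10  # hypothetical per-generation multiplier for the utility monsters

generation = 0
while humans > 1_000 and generation < 500:
    humans = int(humans * human_ratio)
    monsters = int(monsters * monster_ratio)
    generation += 1

print(f"after {generation} generations: about {humans} humans and {monsters} utility monsters")

The point isn't the numbers, which are made up, but that a slightly sub-replacement rate takes a very long time to run its course, which is what keeps the change too small to register with any one generation.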
Logged
 

Offline Halc

  • Global Moderator
  • Naked Science Forum King!
  • ********
  • 2404
  • Activity:
    6%
  • Thanked: 1014 times
Re: Is there a universal moral standard?
« Reply #255 on: 12/10/2019 23:01:36 »
Quote from: David Cooper on 11/10/2019 22:25:26
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.
This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.

Quote
First, the outputs are not the same as the inputs
Didn't you say otherwise?
Quote from: David
the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
OK, this statement says the inputs can be fed into the outputs, but not that they necessarily are.  It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.

Quote
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.
This contradicts your prior statement.
1) How do you know about these lines? The answer seems awfully like something you just now made up.
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?  Duplicate outputs are usually there for redundancy so the system still works if one of them fails. That's part of the reason you have two eyes and such. The presence of a duplicate data stream does not indicate feelings if the one 'main' stream does not indicate the feelings.  There is no additional information in the 2nd line.
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.

Where does the output of your black box go?  To what is it connected?  This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all.  If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.

Quote
The whole point of the black box is to draw your attention to the problem.
More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.
Quote
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.
So you're admitting you don't have a proper white box model?  Does anybody claim they have one?
Quote
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".
I'm unaware of this wording.  There are no 'routines' for one thing. They very much do have evidence as to mapping where much of this functionality goes on, but that isn't a model of how it works.  It is a pretty good way to say which creatures 'feel' the various sorts of this to which humans can relate.
I don't think there can be an objective model of a subjective experience.  We might create an artificial sentience, and yet even knowing how it was created, we'd not be able to say how it works.  Researchers are already way past the point of knowing how some of the real AI systems work.  Fake AI, maybe, but not real AI.  A self-driving car is fake AI.

Quote
Quote
The site lists 19 premises.  Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples of many of the premises. Others are begging a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.
Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.
Some small nits.  The information system processes only data (1).  3 says the non-data must first be converted to data before being given to the information system (IS), but 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data.  As I said, that's just a nit.
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.

A counterexample comes up with 10, which says that data which is not covered by the rules of the IS cannot be considered by the IS.  Not sure what they mean by 'considered', but take a digital signal processor (DSP) or just a simple amplifier.  It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream.  This is similar to the guy in the Chinese room.  He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.
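
To put the amplifier point in code form (a toy sketch; the samples and the gain are arbitrary): the system below processes the stream perfectly competently while holding no representation at all of what the stream means.

Code: [Select]
def amplify(samples, gain=2.0, limit=127):
    # Scale each sample and clip it to a range. Nothing here knows whether
    # the stream is audio, Chinese text, or noise; it gets processed anyway.
    out = []
    for s in samples:
        v = s * gain
        out.append(max(-limit, min(limit, v)))
    return out

print(amplify([3, -10, 55, 120, -90]))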

My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

Quote
That is predicated on the idea that the brain works like a computer, processing data in ways that science understands.
Science does not posit the brain to operate like a computer.  There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions.  Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.

Quote
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.
Yes, it's the whole thing.  It isn't a special piece of material or anything.

Quote
It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.
Doesn't work that way. Eyes arguably 'make data', yet aren't devices that 'know' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this alternatively formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.

Quote
Quote
All the [Chinese room] experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.
A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete.
Didn't say otherwise, but all it does is run code.  The processor doesn't know Chinese.  But the system (the whole thing) does.  There is no black box where the Chinese part is.  There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.

Quote
We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience?
This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.
Anyway, a neural net would not accurately simulate a human since a human is more than a network. A human is part of a larger network, which would also need to be simulated. Not saying it cannot be done, and I don't think it need be done at a deeper level than electro-bio-chemical.  Going to the molecular level for instance seems unnecessary.

Quote
Waving at something complex isn't good enough. You have no model of sentience.
Pretty much how you're presenting your views, yes.  My model is pretty simple actually.  I don't claim to know how it works.  Neither do you; you add more details than I do, but you still hide your complex part in a black box, as if you had an understanding of how the data-processing part worked.

Quote
but we do have models of neural nets which are equivalent to running algorithms on conventional computers.
That we do.

Quote
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.
This makes no sense to me since I don't model the sentience as a separate thing. There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.

Quote
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.
The first guess is closer. Somebody put out a standard interface and both computer and mouse adhere to that interface. Sensory organs and brains don't work that way, being evolved rather than designed. Turns out the sensory organ pretty much defines the data format, and the IS is really good at extracting meaning from any data. So we could in theory affix a 6th sense to detect vibrations of passing creatures, like the lateral line in fish. Run some nerves from that up the spine and the IS would quickly have a new sense to add to its qualia. Some people see a 4th color, and some only 2.

Quote
If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it, it has to use a language.
And even then, the computer only knows about the claim, not the feelings. You don't seem to be inclined to believe a computer mouse if it told you it had feelings.

Quote
If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.
This again assumes feelings separate from the thing that reads it.  Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.

Quote
I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.
The part of the system putting that data together experiences the subjective feelings directly since it's the same system. No magic is needed for a system to have access to itself.  The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings.  You seem to ask how the hands know about the feelings.  They don't.  They do what they're told via the one puppet language they understand: Move thus. They have no idea that they're documenting feelings, and such documentation can be produced by anything (like a copy machine), so it's hardly proof of a particular documented claim.

Quote
And that's how you fool yourself into thinking you have a working model, but it runs on magic.
I'm only fooling myself if I'm wrong, and that hasn't been demonstrated. My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.

Quote
The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it?
Your model, not mine.  You need magic because you're trying to squeeze your model into mine.  Your statement above mixes layers of understanding and is thus word salad, like describing a system using classical and quantum physics intermixed.

Quote
Quote
There's no reading of something outside the information system. My model only has the system, which does its own feeling.
And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?
For one, it already is data, so no conversion. I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.

Quote
I'm asking for a theoretical model. Science doesn't have one for this.
A model of how memory works?  I think they have some, but I'm not personally aware of them. It's just not my field. I mean, I'm a computer guy, and yet I'd have to look it up if I were to provide an answer as to how exactly the various kinds of computer memory work. For my purposes, I just assume it does.

Quote
Quote
I (the information system) have subjective evidence of my feelings.
Show me the model.
That is the model.  One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.
Logged
 

Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2876
  • Activity:
    0%
  • Thanked: 38 times
Re: Is there a universal moral standard?
« Reply #256 on: 14/10/2019 01:19:12 »
Quote from: Halc on 12/10/2019 23:01:36
Quote from: David Cooper on 11/10/2019 22:25:26
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.
This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.

The point of the black box is to put sentience in the model by hiding the missing part where the model depends on magic. If we open the black box, we then have to try to show the functionality of the magic, and science doesn't have that bit of the model. We cannot represent the magic other than by writing "the magic bit happens here", and that's what the black box does already. The opened black box merely reveals the bit saying "the magic happens here". The box is thus the same whether it's closed and black or open and white.

Quote
Quote
First, the outputs are not the same as the inputs
Didn't you say otherwise?

They can be the same if you want them to be, but there has to be an extra output if you want it to signal the existence of a feeling being experienced, and that output will be the same as the other output (so it doesn't actually provide any extra information). You can turn one wire into two without the box as well, and that's why the box doesn't add any useful functionality and the outputs tell you precisely nothing about sentience.

Quote
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.

That's the whole point: there is no evidence of the sentience. There is no way for a data system to acquire such evidence, so its claims about the existence of sentience are incompetent.

Quote
Quote
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.
This contradicts your prior statement.

It doesn't. The extra output is proposed as a way to make an additional signal to indicate the existence of a feeling in the box. You could do this with a hundred extra outputs if you like, and they can be inversions of the main output signal too. It doesn't matter what they are, because the data system on the outside cannot know what they actually mean and can only look up an interpretation file to find out what something outside of the box asserts that they mean; something that doesn't actually know if feelings exist in the box at all.

Quote
1) How do you know about these lines? The answer seems awfully like something you just now made up.

Of course it's something I made up: it's an attempt to build a model of sentience, and it's an attempt that fails. You're free to attempt to build one to your own design, and it will fail too. No one has built a model of sentience that doesn't fail, and it looks impossible to do otherwise.

Quote
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?

That's exactly the point: it doesn't carry additional information. What you're supposed to be doing here is asking yourself how data documenting the existence of feelings can be generated, and how it can relate to the actual experience of that feeling in such a way that the data is true rather than a mere assertion with no connection to the experience. When I type a key and the word "ouch" appears on the screen, that data isn't put together by anything that read a feeling in something sentient: it's just generating a baseless assertion. The challenge is to build a model where the claims aren't mere assertions but where they can be shown to be true.

Quote
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.

The black box can be replaced by wires which simply connect an input to two outputs. One of those outputs can then be made to trigger the generation of a false claim. That affects the behaviour of the machine: cut the wire and the claim is no longer triggered, but it isn't sentience that's driving that change in behaviour.

Quote
Where does the output of your black box go?  To what is it connected?  This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all.  If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.

With the computer saying "ouch" when a key is typed, a signal comes in through a port, a routine picks up a value there and looks up a "file" (or set of variables) to see what string that value should be mapped to, and then it prints that string to the screen. Where might we have the feeling experienced? In the port? The port becomes a black box. The feeling is felt in the port and then the value is read, mapped to a string, and the string is sent to the screen. The experience of the feeling makes no difference to the end result, and the "ouch" is no more evidence of the existence of sentience than it was before. Science can follow everything that's going on there except for the part where the feeling is being experienced, and nothing that determines what data is put together to appear on the screen can detect the experiencing of the feeling. It is undetectable and superfluous.
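
Written out as code (a stand-in only; the port value and the lookup table are invented), the whole chain looks like this, and the string that reaches the screen is fixed by the table, not by any feeling anywhere along the way:

Code: [Select]
# Stand-in for the keypress-to-"ouch" chain. The claim that gets printed is
# determined entirely by this mapping, so it would be produced whether or not
# anything anywhere experienced a feeling.
STRING_TABLE = {
    17: "ouch",
    18: "Oooh, I like that!",
}

def read_port():
    # Pretend a value arrived on the port; here it is simply hard-coded.
    return 17

value = read_port()                    # pick up the value from the port
text = STRING_TABLE.get(value, "?")    # look up the string it maps to
print(text)                            # print the string to the screen

Swap the two table entries and the machine reports pleasure instead of pain without anything upstream changing, which is the same swap I make further down with "Oooh, I like that!".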

Quote
Quote
The whole point of the black box is to draw your attention to the problem.
More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.

I'm not hiding anything: this is all about what's going on in the box. We open it up and we see that it contains magic. We don't like that, but that's what's in there. Either that or there is no sentience involved.

Quote
Quote
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.
So you're admitting you don't have a proper white box model?  Does anybody claim they have one?

I said from the start that it looks impossible. I'm showing you a broken model in order to show you the problem, and I said at the start that it is a broken model. Anyone who claims that sentience is compatible with computation as we understand it needs to prove the point by demonstrating a working model of it. I've stated that sentience is incompatible with computation as science understands it and that a model of it cannot be built unless we find some radically new way of doing computation which is beyond current scientific knowledge. Computers as we understand them today cannot read feelings in anything: all they can do is map assertions to inputs from something which might be sentient but which might equally not be. The asserted claims thus generated are completely incompetent.

Quote
Quote
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".
I'm unaware of this wording.  There are no 'routines' for one thing. They very much do have evidence as to mapping where much of this functionality goes on, but that isn't a model of how it works.  It is a pretty good way to say which creatures 'feel' the various sorts of this to which humans can relate.

There are routines. Once you're dealing with neural nets, you may not be able to work out how they do what they do, but they are running functionality in one way or another. That lack of understanding leaves room for people to point at the mess and say "sentience is in there", but that's not doing science. We need to see the mechanism and we need to identify the thing that is sentient. Neural nets can be simulated and we can then look at how they behave in terms of cause and effect. If they are producing data, we should be able to look to see how they built that data and what caused them to do so. If they are producing data, they must be doing something systematic based on running some kind of algorithm. The data either maps to something that's really happening (a sentient experience) or it doesn't. If there is a sentient experience in there, how is the neural net reading it and how does it make sure that the data it generates to document that experience is true? Those are the important questions to focus on. If there's some mechanism by which it can detect the sentient experience and know that the data it's generating about it is true, that would be the most important scientific discovery of all time. But it looks impossible. The process generating the data cannot ensure that the data is true.

Quote
Some small nits.  The information system processes only data (1).  3 says the non-data must first be converted to data before being given to the information system (IS), but 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data.  As I said, that's just a nit.

You're right - it is possible to process something meaningless before it is proper data. If we have data coming in from a port, it's just a number. The system has to decide what it represents, and it's only then that the number maps to a meaning. Processing of it when it has no meaning merely converts it from something meaningless to something else that's meaningless. If it's an 8-bit value, it can be converted into a 32-bit value, for example. It's still a meaningless value until a meaning is mapped to it. The system might decide that it represents a feeling, and it then maps the idea of a feeling to that value, but it didn't get that idea from the value itself or the port that it came in through, and the point is that there is a disconnect. Whatever the value meant to the sentience on the other side of the port, that meaning is not passed across with the value. For the meaning to be passed too, we must have an information system on the other side of the port, and if we have that, we need to look at how it's reading the sentience. It in turn is going to be reading a value from a port to measure the sentience, so the problem recurs there.
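
For instance (a sketch with an invented value and an invented meaning), widening the number does nothing to attach a meaning to it; a meaning only appears when the receiving system asserts one:

Code: [Select]
raw = 0x2A               # an 8-bit value picked up from a port: just a number
wide = raw & 0xFFFFFFFF  # "converted" to a 32-bit value: still just a number

# The meaning is mapped on this side of the port by the receiving system;
# it was never carried across with the value itself.
ASSERTED_MEANING = {0x2A: "a feeling was experienced"}

print(f"raw={raw:#04x}  wide={wide:#010x}  asserted: {ASSERTED_MEANING.get(raw)}")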

Quote
13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.

Variables are data, but they are not ideas. They can represent ideas, and the ideas are more complex than the symbols used to represent them. The previous sentence can be represented by z, but z is not the idea: you have to look back at the previous sentence to find the idea which z represents.

Quote
A counterexample comes up with 10, which says that data which is not covered by the rules of the IS cannot be considered by the IS.  Not sure what they mean by 'considered' ...

It's simply that it can't know anything about them. There is no way for the idea of sentience to be passed across, so the receiver of the data has to map that idea onto it by itself. It has no way of knowing whether that idea was ever attached to the data by the sender.

Quote
... but take a digital signal processor (DSP) or just a simple amplifier.  It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream.  This is similar to the guy in the Chinese room.  He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.

Yes, but the information systems on both sides were designed by people to handle the data correctly. The problem with sentience is that it cannot build a data system, and any data system that is built cannot know about sentience, so a data system which makes claims about sentience must fabricate them.

Quote
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

[Break due to character limit being exceeded...]
Logged
 



Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2876
  • Activity:
    0%
  • Thanked: 38 times
Re: Is there a universal moral standard?
« Reply #257 on: 14/10/2019 01:20:19 »
[Continuation...]

Quote
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.

If sentience is a form of data, what does that sentience look like in the Chinese Room? It's just symbols on pieces of paper and simple processes being applied where new symbols are produced on pieces of paper. If a piece of paper has "ouch" written on it, is that an experience of pain?

Quote
Science does not posit the brain to operate like a computer.  There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions.  Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.

But science has an understanding of computation, and the issue here is about whether sentience can interface with computation. With computation as we understand it, it can't.

Quote
Quote
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.
Yes, it's the whole thing.  It isn't a special piece of material or anything.

If a multi-component thing feels a feeling without any of the components feeling anything, that's magic. And you still have to have something in that multi-component thing reading the level of feeling in it before it can generate data to document that level of feeling being experienced. You can't just have the whole thing magically generate data to document a feeling without a mechanism to go from experience of feeling to an information system creating the data to document it.

Quote
Doesn't work that way. Eyes arguably 'make data', yet aren't devices that 'know' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this alternatively formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.

There is no sentience tied up in that. The data that comes in can be seen to match up to the external reality by the success of the algorithms used to interpret it. Machines can match it all, but there are no feelings involved.

Quote
The processor doesn't know Chinese.  But the system (the whole thing) does.  There is no black box where the Chinese part is.  There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.

Again, that's easy because sentience isn't involved. Computation (of the kinds known to science) has no trouble accounting for vision or communication in Chinese. The problem is with sentience (and any other aspect of consciousness that might be distinct from sentience).

Quote
This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.

We don't have any model for sentience being part of the system and we don't have any model for how a feeling can be measured.

Quote
Quote
Waving at something complex isn't good enough. You have no model of sentience.
Pretty much how you're presenting your views, yes.  My model is pretty simple actually.  I don't claim to know how it works.  Neither do you; you add more details than I do, but you still hide your complex part in a black box, as if you had an understanding of how the data-processing part worked.

Your model is simple because it has magic hidden in the complexity which you can simply wave at without showing how it works. My model is honest in that it isolates the magic part and labels it as such by putting it in a black (magic) box.

Quote
Quote
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.
This makes no sense to me since I don't model the sentience as a separate thing.

The point is that an unpleasant feeling could be used to drive someone to increase that feeling rather than decrease it. We can take the same system that printed "ouch" to the screen and replace the "ouch" with "Oooh, I like that!" and there is no change to any feeling that might be imagined to be being experienced. If it was painful before, it's still painful, and if it's pleasant now, it was pleasant before.

Quote
There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.

The claims that come out about feelings are assertions. They are either true or baseless. If the damage inputs are handled correctly, the pleasure will be suppressed in an attempt to minimise damage. And if an unpleasant feeling is generated when an animal eats delicious food, it will be designed (by evolution) to go on eating it. The information system in the animal will in each case generate data about those experiences which claim the opposite of what they actually felt like.

Quote
Quote
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.
The first guess is closer.

I doubt that. The mouse is designed first, then the computer is told how to interpret its squeaks. That's why the computer keeps having to be taught new languages to speak to new designs of mouse.

Quote
And even then, the computer only knows about the claim, not the feelings.

Exactly.

Quote
You don't seem to be inclined to believe a computer mouse if it told you it had feelings.

How's it going to do that without fabricating the data? We can look at how the mouse works and see that such data is fake. With real mice and humans, we don't have sufficient resolution to do that without chopping them up, and if we chop them up, the functionality is disrupted and it's hard to study it.

Quote
This again assumes feelings separate from the thing that reads it.  Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.

Trying to integrate the two things into one is fine, but at some point you need to have something measure the feeling, and it also has to have some way to know that what it's measuring is a feeling.

Quote
The part of the system putting that data together experiences the subjective feelings directly since it's the same system.

The Chinese Room can't measure feelings, so what's different in the brain that makes the impossible possible?

Quote
No magic is needed for a system to have access to itself.

Magic is needed to measure a feeling and know that it is a feeling.

Quote
The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings.

The system documenting the feelings is in the brain. But can it be trusted to be telling the truth?

Quote
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.

It's measuring a feeling and magically knowing that it's a feeling that it's measuring rather than just a signal of any normal kind.

Quote
Your model, not mine.  You need magic because you're trying to squeeze your model into mine.  Your statement above mixes layers of understanding and is thus word salad, like describing a system using classical and quantum physics intermixed.

Your model needs the same magic. You just hide that from yourself by flinging it into complexity so that you don't need to understand it. But all of that complexity is running on simpler rules which science claims to understand, leaving no room for sentience unless there's something going on which science has missed.

Quote
Quote
And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?
For one, it already is data, so no conversion.

Magical sentient data. Symbols on paper experiencing feelings according to the meanings represented.

Quote
I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.

For the symbols on paper, there is no experience tied to the meaning of the data. Any feelings there are not part of the process and do not convert into data representing the experience of them.

Quote
A model of how memory works?

I was asking for a theoretical model of sentience: referring to the words of mine that you quoted rather than your reply to it (in which you brought in memory).

Quote
That is the model.  One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.

It is a magic model, like any other model that's been attempted. The feelings must be measured by something, and the thing doing the measuring cannot know that they are feelings.
Logged
 

Offline hamdani yusuf (OP)

  • Naked Science Forum GOD!
  • *******
  • 11799
  • Activity:
    92.5%
  • Thanked: 285 times
Re: Is there a universal moral standard?
« Reply #258 on: 14/10/2019 05:41:08 »
Quote from: David Cooper on 10/10/2019 20:29:41
I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.

No it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards. Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think, we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is that upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.

No. Utilitarian theory applied correctly does not allow that because it actually results in a hellish life of fear for the utility monsters.

When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.

That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy because most philosophers are shoddy thinkers who fail to take all factors into account).

I don't know what that is, but it isn't utilitarianism because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.

If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
I think what you are doing here is building a moral system based on a simple version of utilitarianism, and then applying patches to cover specific criticisms that discover loopholes in it. Discovering those loopholes is what philosophers do.
Rawls's version is widely recognized as one form of utilitarianism.

The ability to fly or breathe underwater can be useful, but it doesn't have to be permanent or expressed genetically. Ancient humans could survive freezing weather simply by using other mammals' fur.

At least we can agree that moral rules should consider long term consequences.
« Last Edit: 14/10/2019 07:16:30 by hamdani yusuf »
Logged
Unexpected results come from false assumptions.
 

Offline hamdani yusuf (OP)

  • Naked Science Forum GOD!
  • *******
  • 11799
  • Activity:
    92.5%
  • Thanked: 285 times
Re: Is there a universal moral standard?
« Reply #259 on: 14/10/2019 06:17:10 »
Quote from: David Cooper on 11/10/2019 22:52:52
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).
You need to draw a line between sentient and non-sentient, or assign numbers that allow us to measure and describe sentience, including partial sentience. The next step would be some method of using those numbers to decide which options to take in morally conflicting situations.
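
For example (just a sketch; the species, the scores and the weighting rule are all made up), the kind of calculation I have in mind would look something like this:

Code: [Select]
# Hypothetical sentience scores on a 0-1 scale, and a naive rule that weights
# each harm by the score of the being that suffers it.
SENTIENCE_SCORE = {"human": 1.0, "dog": 0.6, "insect": 0.05, "plant": 0.0}

def weighted_harm(harms):
    # harms: list of (being, raw_harm) pairs for one option
    return sum(SENTIENCE_SCORE[being] * harm for being, harm in harms)

options = {
    "option A": [("human", 1), ("insect", 100)],
    "option B": [("dog", 5), ("plant", 1000)],
}
scores = {name: weighted_harm(harms) for name, harms in options.items()}
print(scores, "-> choose", min(scores, key=scores.get))

The hard part, of course, is justifying the scores themselves, which is exactly the line-drawing problem.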
Logged
Unexpected results come from false assumptions.
 


