Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.

If you don't have output from the sentience, it has no role in the system.
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.
Your calculation of harm:benefit here has nothing to do with feelings.
Moral rules based on pleasure and suffering as their ultimate goals are vulnerable to reward hacking (such as drugs) and exploitation by utility monsters.
We know that killing a random person is immoral, even if we can make sure that the person doesn't feel any pain while dying. There must be a more fundamental reason for reaching that conclusion than minimising suffering, because no suffering is involved here.
If you grew up with Scottish winters, standing in the rain is likely to give you hypothermia. If you grew up in Darwin (Australia), standing in the rain cools you down a bit, and the water will evaporate fairly soon anyway.
Quote from: David Cooper on 07/10/2019 22:59:22
Quote from: Halc
Here you are asserting output from the sentience, which you say cannot be done without some kind of magic that we both deny.
If you don't have output from the sentience, it has no role in the system.

With that I agree, but you are not consistent with this model.
Quote
I also never said that something external to the physical system was involved in any way. Whatever is sentient, if feelings exist at all, is necessarily part of the physical system.

OK, this is different. If it is part of the physical system, why can't it play a role in the system? What prevents it from having an output?
It would seem that I don't avoid hitting my thumb with a hammer because I want to avoid saying 'ouch'. I can say the word freely and it causes me no discomfort. No, I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.
And the other problem is that the information system that generates the claims about feelings being felt is outside the black box and cannot know anything about the feelings that are supposedly being experienced in there.
Quote
I avoid hitting my thumb because it would hurt, which means the past experience of pain has had the causal effect of making me more careful. That's an output (a useful role), but you deny that this causal chain (output from the physical sentience) exists.

I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent, unless there's something spectacular going on in the physics which science has not yet uncovered.
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game.
The only reason a human game of chess is deeper than that is because we can't just look at a chess position and know which of those 3 states it represents. If we could, the game would be trivial.
In a chess game, the winner is not determined by who has more pieces, nor by who has the highest sum of piece values. Those are merely rules of thumb, shortcuts, approximations, which are usually useful when we can't be sure about the end position of the game. We can easily find exceptions where they don't apply, which means they are not the most fundamental principle. Likewise, maximizing pleasure and minimizing pain are just shortcuts to approximate a more fundamental moral rule. The real fundamental moral rule must be applied universally, without exception. Any dispute would turn out to be a technical problem due to incomplete information at hand.
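To make the rule-of-thumb point concrete, here is a minimal Python sketch (the piece values are the usual textbook ones and the position is invented): material counting produces a number, but that number is only a heuristic, and the rule that actually decides the game is checkmate.

```python
# A minimal sketch of the point above: material count is only a heuristic.
# Textbook piece values (an assumption; engines tune these differently).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_balance(white_pieces, black_pieces):
    """Rule-of-thumb evaluation: positive favours White, negative favours Black."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

# An invented position: White is a queen for a rook up on material...
print(material_balance(list("KQPPP"), list("KRPPP")))   # prints 4, in White's favour

# ...yet the number says nothing about who actually wins. If Black has a forced
# checkmate next move, the material score is irrelevant: the fundamental rule
# that decides the game is checkmate, not the sum of piece values.
```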
I am conversing with your information system, not the black box, and that information system seems very well aware indeed of those feelings.
Your stance seems to be that you are unaware that you feel pain and such. I feel mine, but I cannot prove that to you since only I have a subjective connection to the output of what you call this black box.
On the other hand, you claim the black box does have outputs, but they're apparently not taken into consideration by anything, which is functionally the same as not having those outputs, sort of like a computer with a VGA output without a monitor plugged into it.
Quote
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot competent,

Cannot competent? That seems a typo, but I cannot guess as to what you meant there.
Again this contradiction is asserted: You don't deny the causal connection exists, yet the information system is seemingly forbidden from using the connection. Perhaps your black box also holds an entirely different belief about how it all works, but your information system instead generates these contradictory statements, and the black box lacks the free will to make it post its actual beliefs.
A simple wire (nerve) from the black box to the 'information system' part is neither spectacular nor hidden from science.
In reality, there's more than one, but a serial line would do in a pinch.
Perhaps you posit that the black box is spatially separated from the information system to where a wire would not be practical. If so, you've left off that critical detail, which is why I'm forced to play 20 questions, 'chasing it down' as you put it.
The more fundamental rule is that you treat all participants as if they are a single participant. It ends up being much the same thing as utilitarianism. In your chess example, the players don't care about the wellbeing of their troops: a player could deliberately play a game in which he ends up with nothing more than king and rook against king, and he will be just as happy as if he annihilated the other side without losing a piece of his own.
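As an illustration only, here is a minimal Python sketch of how that "single participant" idea might be computed, assuming it amounts to summing the experiences of every affected life as if one person lived them all in turn; every number below is invented.

```python
# A toy sketch (invented numbers) of the 'single participant' idea above:
# score an outcome as if one participant had to live every affected life in turn.
def single_participant_score(lives):
    """The imagined participant lives each life, so every gain and every harm
    in the list counts as happening to the same individual."""
    return sum(lives)

# Hypothetical outcome A: one life greatly improved, three lives harmed.
outcome_a = [+50, -30, -30, -30]
# Hypothetical outcome B: modest gains all round.
outcome_b = [+10, +10, +10, +10]

print(single_participant_score(outcome_a))   # -40
print(single_participant_score(outcome_b))   # +40: preferred, because the one
# participant living all four lives ends up better off under B than under A.
```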
If you think my method for calculating morality doesn't work, show me an example of it failing.
Because utilitarianism is not a single theory but a cluster of related theories that have been developed over two hundred years, criticisms can be made for different reasons and have different targets.
The thought experiment

A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:

"Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility."[2]

This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.[1]

The experiment contends that there is no way of aggregating utility which can circumvent the conclusion that all units should be given to a utility monster, because it's possible to tailor a monster to any given system.

For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who's worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it. But maximin has its own monster: an unhappy (worst-off) being who only gains a tiny amount of utility no matter how many resources are given to it.

It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters.[1]

History

Robert Nozick, a twentieth-century American philosopher, coined the term "utility monster" in response to Jeremy Bentham's philosophy of utilitarianism. Nozick proposed that accepting the theory of utilitarianism causes the necessary acceptance of the condition that some people would use this to justify exploitation of others. An individual (or specific group) would claim their entitlement to more "happy units" than they claim others deserve, and the others would consequently be left to receive fewer "happy units".

Nozick deems these exploiters "utility monsters" (and for ease of understanding, they might also be thought of as happiness hogs). Nozick poses that utility monsters justify their greediness with the notion that, compared to others, they experience greater inequality or sadness in the world, and deserve more happy units to bridge this gap. People not part of the utility monster group (or not the utility monster individual themselves) are left with fewer happy units to be split among the members. Utility monsters state that the others are happier in the world to begin with, so they would not need those extra happy units to which they lay claim anyway.
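A worked illustration of the aggregation argument above (all numbers invented): total utilitarianism hands every resource to the "happy" monster, while maximin resists that monster but is captured by its own.

```python
# Invented numbers illustrating the aggregation argument above.
# Utility each being gets from a given number of cookies:
ordinary       = lambda c: 1 * c              # 1 unit of pleasure per cookie
happy_monster  = lambda c: 100 * c            # Nozick's monster: 100 units per cookie
misery_monster = lambda c: -1000 + 0.001 * c  # maximin's monster: always the worst off

def best_split(total, being_a, being_b, aggregate):
    """Try every split of `total` cookies and keep the one the aggregate prefers."""
    return max(((k, total - k) for k in range(total + 1)),
               key=lambda s: aggregate([being_a(s[0]), being_b(s[1])]))

# Total utilitarianism (maximise the sum) feeds everything to the happy monster:
print(best_split(10, ordinary, happy_monster, sum))   # (0, 10)

# Maximin (maximise the worst-off member) resists the happy monster...
print(best_split(10, ordinary, happy_monster, min))   # (9, 1): the ordinary person is fed

# ...but has its own monster: the permanently miserable being soaks up every cookie.
print(best_split(10, ordinary, misery_monster, min))  # (0, 10)
```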
Then show me a model for how those feelings are integrated into the information system. The only kinds of information system science understands map to the Chinese Room processor in which feelings cannot have a role.
The outputs clearly have a role, but they are determined by the inputs in such a way that the black box is superfluous: the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
It would simply be taking the output from a black box and then interpreting it by applying rules stored in data which was put together by something that had no idea what was actually in the black box.
http://magicschoolbook.com/consciousness - this illustrates the problem, and I've been trying to find an error in this for many years.
I don't deny that it exists. What I deny is that the information system can know that the pain exists and that the claims it makes cannot [be] competent,
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?
If we run that information on a Chinese Room processor, we find that there's no place for feelings in it.
With computation as we know it, there is no way to make such a model. We're missing something big.
How does the data system attribute meaning to that signal?
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.
There's an information processing system in the black box
and that can run on a Chinese Room processor. Where are the feelings being experienced in the box, and what by? How is the information system in the black box able to measure them and know what the numbers it's getting in its measurements mean? It looks up a file to see what the numbers mean, and then it maps them to it and creates an assertion about something which it cannot know anything about.
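For illustration, here is a minimal Python sketch of the lookup-and-assert model being described; the port numbers, the mapping table, and the signal are all invented.

```python
# A toy sketch of the model above: a value arrives from the black box on a
# 'port', the system looks up what that port is documented to mean, and then
# asserts that meaning. The table and values are invented for illustration.
PORT_MEANINGS = {          # the 'file' that says what each input line represents
    0x01: "pain",
    0x02: "pleasure",
}

def read_port():
    """Stand-in for the signal coming out of the black box."""
    return 0x01, 200       # (which line fired, how strongly)

def generate_claim():
    line, intensity = read_port()
    label = PORT_MEANINGS[line]
    # The system can only assert whatever the table tells it the signal means.
    # Swap the two table entries and it will assert 'pleasure' for the same
    # input, with no way of checking which label matches any actual feeling.
    return f"I am experiencing {label} at intensity {intensity}."

print(generate_claim())    # I am experiencing pain at intensity 200.
```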
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?
How does it construct the data that documents this experience of feeling
where does it ever see the evidence that the feeling is in any way real?
No, it would not allow the mistreatment of anyone. This is what poor philosophers always do when they analyse thought experiments incorrectly - they jump to incorrect conclusions. Let me provide a better example, and then we'll look back at the above one afterwards.

Imagine that a scientist creates a new breed of human which gets 100 times more pleasure out of life, and that these humans aren't disadvantaged in any way. The rest of us would then think: we want that too. If we can't have it added to us through gene modification, would it be possible to design it into our children? If so, then that is the way to switch to a population of people who enjoy life more without upsetting anyone. The missing part of the calculation is the upset that would be caused by mistreating or annihilating people, and the new breed of people who get more enjoyment out of living aren't actually going to get that enjoyment if they spend all their time fearing that they'll be wiped out next in order to make room for another breed of human which gets 10,000 times as much pleasure out of living. By creating all that fear, you actually create a world with less pleasure in it.

Let us suppose that we can't do it with humans though, and that we need to be replaced with the utility monster in order to populate the universe with things that get more out of existing than we do. The correct way to make that transition is for humans voluntarily to have fewer children and to reduce their population gradually to zero over many generations while the utility monsters grow their population. We'd agree to do this for the same reason that if we were spiders we'd be happy to disappear and be replaced by humans. We would see the superiority of the utility monster and let it win out, but not through abuse and genocide.
I don't think a system would pass a Turing test without feelings, so the Chinese room, despite being a test of ability to imitate human intelligence, not feelings, would seem to be an example of strong AI. All Searle manages to prove is that by replacing a CPU with a human, the human can be shown to function without an understanding of the Chinese language, which is hardly news. In the same way, the CPU of my computer has no idea that a jpg file represents an image.

Secondly, the mind of no living thing works via a von Neumann architecture, with a processing unit executing a stream of instructions, but it has been shown that a Turing machine can execute any algorithm, including doing what any living thing does, and thus the Chinese room is capable of passing the Turing test if implemented correctly.
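To illustrate the rule-following point, here is a toy table-driven machine in Python (a minimal, invented example in the spirit of a Turing machine): it adds one to a unary number while attaching no meaning at all to the marks it manipulates, much as the man in the Chinese room follows his instruction book.

```python
# A toy rule table (invented for this example): it adds one mark to a unary
# number. The machine just follows the table; it has no grasp of what '1' means.
RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "1"): ("1", +1, "scan"),   # step over the existing marks
    ("scan", "_"): ("1", 0, "halt"),    # append one mark, then stop
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

print(run("111"))   # '1111': three becomes four, purely by blind rule-following
```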
Concerning the way we've been using the term 'black box'. You are describing a white box since you are placing the feelings of the sentience in the box. A black box has no description of what is in the box, only a description of inputs and outputs. A black box with no outputs can be implemented with an empty box.
Those lines are not superfluous because my phone would not work if you took them away. You seem to posit that the box is white, not black, and generates feelings that are not present at the inputs. If the inputs can be fed straight into the outputs without any difference, then the generation of said feelings cannot be distinguished at the outputs from a different box that doesn't generate them.
The whole point of a black box is that one doesn't need to know what's inside it. The whole point of the consciousness debate is to discuss what's going on inside us, so using black-box methodology seems a poor strategy for achieving this.
The site lists 19 premises. Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others presuppose a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.
OK, I repaired the sentence, but now you're saying that your own claims of experiencing pain are not competent claims? I don't think you meant to say that either, but that's how it comes out now. The claims (the posts on this site) are output by the information system, right? What else produces them? Maybe you actually mean it.
Quote
We have something (unidentified) experiencing feelings, but how is that unidentified thing going to be able to tell anything else about that experience?

Using the output you say it has. I don't think the thing is unidentified, nor do I deny the output from it since said output is plastered all over our posts.
Quote
Is it to be a data system? If so, what is it in that information system that's experiencing feelings? The whole thing? Where's the mechanism for that?

You don't know where the whole thing is?
If you hold to the dualist view, then you assert that all this is simply correlation, a cop-out that can be used no matter how much science learns about these things.
The Chinese room models a text-only I/O. A real human is not confined to a text-only stream of input. It makes no attempt to model a human. If it did, there would indeed be a place for feelings. All the experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.
Computation as you know it is a processor running a set of instructions, hardly a model of any living thing, which is more of an electro-chemical system with a neural net. The chemicals are critical, easily demonstrated by the changed behavior of people under various drugs. Chemicals would have zero effect on a CPU running a binary instruction stream, except possibly to dissolve it.
Quote
How do you know what the output from the box means?

I don't have to. According to your terminology, the 'data system' needs the output to be mapped according to the rules of that data system. Evolution isn't going to select for one system that cannot parse its own inputs. That would be like hooking the vision data to the auditory system and v-v. It violates the rules of the data system, leaving the person blind and deaf.
Quote
How does the data system attribute meaning to that signal?

Same way my computer attributes meaning from the USB signal from my mouse: by the mouse outputting according to the rules of the data system, despite me personally not knowing those rules. I'm no expert in USB protocol. I'm more of an NFS guy, and this computer doesn't use an NFS interface. There's probably no mouse that speaks NFS.
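As an illustration of "outputting according to the rules of the data system", here is a minimal Python sketch assuming a 3-byte, boot-protocol-style mouse report (a buttons byte followed by signed dx and dy); the sample bytes are invented.

```python
# Toy parser for a 3-byte mouse report (buttons, dx, dy). The bytes only
# 'mean' anything because both sides follow the same agreed convention.
def to_signed(b):
    """Interpret one byte as a signed 8-bit value."""
    return b - 256 if b > 127 else b

def parse_report(report):
    buttons = report[0]
    return {
        "left":  bool(buttons & 0x01),
        "right": bool(buttons & 0x02),
        "dx": to_signed(report[1]),
        "dy": to_signed(report[2]),
    }

print(parse_report(bytes([0x01, 0x05, 0xFB])))
# {'left': True, 'right': False, 'dx': 5, 'dy': -5}
# The host knows nothing about the mouse beyond this agreed format.
```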
Quote
If we try to model this based on our current understanding of computation, we get a signal in from the black box in the form of a value in a port. We then look up a file to see what data from that port represents, and then we assert that it represents that.

Look up a file? My, you sure know a lot more about how it works than I do.
Quote
Let's give it a parallel port and have it speak in ASCII. Now ask yourself, how is it able to speak to us? How does it know our language?

You tell me. You're the one that compartmentalizes it into an isolated box like that. Not my model at all.
Quote
There's an information processing system in the black box

Then it isn't a black box.
Again, your model, not mine. I have no separation of information system and the not-information-system.
Quote
Draw a model and see how well you get on with it. Where is the information system reading the feeling and how does it know that there's a feeling there at all?

There's no reading of something outside the information system. My model only has the system, which does its own feeling.
Quote
How does it construct the data that documents this experience of feeling

Sounds like you're asking how memory works. I don't know. Not a neurologist.
Quote
where does it ever see the evidence that the feeling is in any way real?

I (the information system) have subjective evidence of my feelings.
Who do you mean by 'anyone'? Humans? What about animals and plants?
Why is pleasure good while pain is bad?
What about an inability or a reduced ability to feel pain or pleasure?
How many fewer children would be considered acceptable?
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.
First, the outputs are not the same as the inputs
the inputs can feed directly into the outputs without any difference in the actions of the machine and the claims that it generates about feelings being experienced.
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.
The whole point of the black box is to draw your attention to the problem.
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".
Quote
The site lists 19 premises. Some of them are just definitions, but some very much are assumptions, and the conclusions drawn are only as strong as the assumptions. I could think of counterexamples to many of the premises. Others presuppose a view that defies methodological naturalism, which makes them non-scientific premises. So you're on your own if you find problems with it.

Give me your best counterexample then. So far as I can see, they are correct. If you can break any one of them, that might lead to an advance, so don't hold back.
That is predicated on the idea that the brain works like a computer, processing data in ways that science understands.
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.
It makes little difference either way though, because to model this we need to have an interface between the experience and the system that makes data. For that data to be true, the system that makes it has to be able to know about the experience, but it can't.
Quote
All the [Chinese room] experiment shows is that the system can converse in Chinese without the guy knowing Chinese, similar to how I can post in English without any of my cells knowing the language.

A Chinese Room processor can run any code at all and can run an AGI system. It is Turing complete.
We can simulate neural networks. Where is the interface between the experience of feelings and the system that generates the data to document that experience?
Waving at something complex isn't good enough. You have no model of sentience.
but we do have models of neural nets which are equivalent to running algorithms on conventional computers.
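For what "simulating a neural network on a conventional computer" amounts to, here is a minimal Python sketch (the weights and inputs are invented): everything in it is ordinary arithmetic executed as an algorithm.

```python
# A toy one-layer network: simulating neurons is just arithmetic on numbers.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, activation)          # ReLU-style firing threshold

layer_weights = [[0.5, -0.2], [0.3, 0.8]]   # invented weights
layer_biases  = [0.1, -0.4]

def layer(inputs):
    return [neuron(inputs, w, b)
            for w, b in zip(layer_weights, layer_biases)]

print(layer([1.0, 2.0]))   # [0.2, 1.5]: nothing here but algorithmic arithmetic
```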
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.
If there are feelings being experienced in the mouse, the computer cannot know about them unless the mouse tells it, and for the mouse to tell it, it has to use a language.
If the mouse is using a language, something in the mouse has to be able to read the feelings, and how does that something know what's being felt? It can't.
I'm trying to eliminate the magic, and the black box shows the point where that task becomes impossible. So, you open up the black box and have the feelings exist somewhere (who cares where) in the system while data is generated to document the existence of those feelings, but you still can't show me how the part of the system putting that data together knows anything about the feelings at all.
And that's how you fool yourself into thinking you have a working model, but it runs on magic.
The part of it that generates the data about feelings might be in intense pain, but how can the process it's running know anything about that feeling in order to generate data about it?
Quote
There's no reading of something outside the information system. My model only has the system, which does its own feeling.

And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?
I'm asking for a theoretical model. Science doesn't have one for this.
Quote
I (the information system) have subjective evidence of my feelings.

Show me the model.
Quote from: David Cooper on 11/10/2019 22:25:26
Okay, but it's a black box until we try to work out what's going on inside it, at which point it becomes a white box and we have to complete the contents by including a new black box.

This is fine, but you're not going to demonstrate your sentience that way, since you always put it in the black box where you cannot assert its existence.
Quote
First, the outputs are not the same as the inputs

Didn't you say otherwise?
It says those outputs make no difference to the actions of the machine, which means the machine would claim feelings even if there were none. That means you've zero evidence for this sentience you claim.
Quote
there's an extra output line which duplicates what goes out on the main output line, and this extra one is read as indicating that a feeling was experienced.

This contradicts your prior statement.
1) How do you know about these lines? The answer seems awfully like something you just now made up.
2) If there are two outputs and one is a duplicate of the other, how can it carry additional information?
3) This is the contradiction part: You said earlier that the action of the 'machine' is unaffected by these outputs, but here you claim that an output is read as indicating that a feeling was experienced. That's being affected. If the machine action is unaffected by this output, then the output is effectively ignored at some layer.
Where does the output of your black box go? To what is it connected? This is outside the black box, so science should be able to pinpoint it. It's in the white part of the box after all. If you can't answer that, then you can't make your black box ever smaller since the surrounding box is also black.
Quote
The whole point of the black box is to draw your attention to the problem.

More like a way to hide it. The scientists that work on this do not work this way. They explore what's in the box.
Quote
If the bit we can't model is inside the black box and we don't know what's going on in there, we don't have a proper model of sentience.

So you're admitting you don't have a proper white box model? Does anybody claim they have one?
Quote
they always have to point somewhere and say "feelings are felt here and they are magically recognised as existing and as being feelings by this magic routine which asserts that they are being felt there even though it has absolutely no evidence to back its assertion".

I'm unaware of this wording. There are no 'routines' for one thing. They very much do have evidence as to mapping where much of this functionality goes on, but that isn't a model of how it works. It is a pretty good way to say which creatures 'feel' the various sorts of this to which humans can relate.
Some small nits. The information system processes only data (premise 1). Premise 3 says the non-data must first be converted to data before being given to the information system (IS), but premises 5 and 13 talk about the IS doing the converting, which means it processes something that isn't data. As I said, that's just a nit.
Premise 13 also talks about ideas being distinct from data. An idea sounds an awful lot like data to me.
A counterexample comes up with premise 10, which says that data which is not covered by the rules of the IS cannot be considered by the IS. Not sure what they mean by 'considered' ...
... but take a digital signal processor (DSP) or just a simple amplifier. It might be fed a data stream that is meaningless to the IS, yet the IS is completely capable of processing the stream. This is similar to the guy in the Chinese room. He is an IS, and he's handling data (the Chinese symbols) that does not conform to his own rules (English), yet he's tasked with processing that data.
My big gripe with the list is point 7's immediate and unstated premise that a 'conscious thing' and an 'information system' are separate things, and that the former is not a form of data. That destroys the objectivity of the whole analysis. I deny this premise.
Science does not posit the brain to operate like a computer. There are some analogies, sure, but there is no equivalent to a CPU, address space, or instructions. Yes, they have a fairly solid grasp on how the circuitry works, but not how the circuit works.
Quote
I'm not asking where the whole thing is. I was asking if it's the whole thing that's experiencing feelings rather than just a part of it.

Yes, it's the whole thing. It isn't a special piece of material or anything.
Doesn't work that way. An eye arguably 'makes data', yet isn't a device that 'knows' about experience. The system that processes the data (in my case) has evolved to be compatible with the system that makes the data, not the other way around. It's very good at that, being able to glean information from new sources. They've taught humans to navigate by sound like a bat, despite the fact that we've not evolved for it. The system handles this alternately formatted data (outside the rules of the IS) just fine. The only thing they needed to add was the bit that produces the sound pulses, since we're not physically capable of generating them.
The processor doesn't know Chinese. But the system (the whole thing) does. There is no black box where the Chinese part is. There's not a 'know Chinese' instruction in the book of English instructions from which the guy in there works.
This presumes that the experience is not part of the system, and that it needs to be run through this data-generation step. You hold the same premise as step 7.
Quote
Waving at something complex isn't good enough. You have no model of sentience.

Pretty much how you're presenting your views, yes. My model is pretty simple actually. I don't claim to know how it works. Neither do you, but you add more details than do I, but still hide your complex part in a black box, as if you had an understanding of how the data-processing part worked.
Quote
If evolution selects for an assertion of pain being experienced in one case and an assertion of pleasure in another case, who's to say that the sentient thing isn't actually feeling the opposite sensation to the one asserted? The mapping of assertion to output is incompetent.

This makes no sense to me since I don't model the sentience as a separate thing.
There is no asserting going on. If the data system takes 'damage' data and takes pleasure from them, then it will make choices to encourage the sensation, resulting in the being being less fit.
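As a toy illustration of that fitness point (all numbers invented): an agent wired to seek out 'damage' signals loses fitness relative to one wired to avoid them, whatever label we attach to the inner sensation.

```python
import random

# Toy fitness comparison (invented numbers): an agent that approaches sources
# of 'damage' data pays a health cost each time; selection favours the wiring
# that avoids them, regardless of what the sensation is called internally.
def lifetime_fitness(seeks_damage, steps=100, seed=0):
    rng = random.Random(seed)
    health = 100
    for _ in range(steps):
        damage_nearby = rng.random() < 0.5
        if damage_nearby and seeks_damage:
            health -= 5            # choices that encourage the sensation cost health
    return health

print(lifetime_fitness(seeks_damage=True))    # far below 100
print(lifetime_fitness(seeks_damage=False))   # stays at 100
```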
Quote
The mouse is designed to speak the language that the computer understands, or rather, the computer is told how to interpret the squeaks from the mouse.

The first guess is closer.
And even then, the computer only knows about the claim, not the feelings.
You don't seem to be inclined to believe a computer mouse if it told you it had feelings.
This again assumes feelings separate from the thing that reads it. Fine and dandy if it works that way, but if the two systems don't interface in a meaningful way, then system 2 is not able to pass on a message from system 1 that it just interprets as noise.
The part of the system putting that data together experiences the subjective feelings directly since it's the same system.
No magic is needed for a system to have access to itself.
The part of the system documenting the feelings is probably my mouth and hands since I can speak and write of those feelings.
My model doesn't run on magic. I've asserted no such thing, and you've supposedly not asserted it about your model.
Your model, not mine. You need magic because you're trying to squeeze your model into mine. Your statement above mixes layers of understanding and is thus word salad, like describing a system using classical and quantum physics intermixed.
Quote
And how does it then convert from that experience of feeling into data being generated in a competent way that ensures that the data is true?

For one, it already is data, so no conversion.
I am capable of lying, so if I generate additional data (like I do on these posts), I have no way of proving that the data is true, so I cannot assure something outside the system of the truth of generated data. Inside the system, there is no truth or falsehood, just subjective experience.
A model of how memory works?
That is the model. One system, not multiple. Yes, it has inputs and outputs, but the feelings don't come from those. There is no generation of data of feelings from a separate feeling organ.
I have given you a method which can be used to determine the right form of utilitarianism. Where they differ, we can now reject the incorrect ones.

No. Utilitarian theory applied correctly does not allow that, because it actually results in a hellish life of fear for the utility monsters. When you apply my method to it, you see that one single participant is each of the humans and each of the utility monsters, living each of those lives in turn. This helps you see the correct way to apply utilitarianism, because that individual participant will suffer more if the people in the system are abused and if the utility monsters are in continual fear that they'll be next to be treated that way.

That analysis of the experiment is woeful philosophy (and it is also very much the norm for philosophy, because most philosophers are shoddy thinkers who fail to take all factors into account).

I don't know what that is, but it isn't utilitarianism, because it's ignoring any amount of happiness beyond the level of the least happy thing in existence.

If you ask people if they'd like to be modified so that they can fly, most would agree to that. We could replace non-flying humans with flying ones and we'd like that to happen. That is a utility monster, and it's a good thing. There are moral rules about how we get from one to the other, and that must be done in a non-abusive way. If all non-flying humans were humanely killed to make room for flying ones, are those flying ones going to be happy when they realise the same could happen to them to make room for flying humans that can breathe underwater? No. Nozick misapplies utilitarianism.
If they're sentient, then they're included. Some animals may not be, and it's highly doubtful that any plants are, or at least, not in any way that's tied to what's happening to them (just as the material of a rock could be sentient).