Vacchablogga

Introduction

The fading qualia argument is one popular argument for functionalism about consciousness (just ‘functionalism’ for short). The best presentation of the argument is in a paper by Chalmers, which you should read before reading the rest of this post.1

Very simplified summary of the argument: Imagine someone replaces the neurons in your brain one by one with silicon that functions in exactly the same way as the neurons. And suppose, for the sake of argument, that functionalism is false. Then at some point your experiences will be either faded or gone entirely. Nonetheless, you will judge that there has been no change in your experiences, since your judgments about your experiences are determined entirely by your functional states.2 So at some point you will be systematically out of touch with your experiences despite suffering from no “functional pathology”. But this isn’t possible. So functionalism is true.

Here’s what I take to be the key premise of the argument:

  • (F) It is not possible for a “rational being that is suffering from no functional pathology to be systematically out of touch with its experiences”.3

Why should we accept (F)?

Empirical evidence

Chalmers suggests that we have empirical evidence for (F):

In every case with which we are familiar, conscious beings are generally capable of forming accurate judgments about their experience, in the absence of distraction and irrationality. For a sentient, rational being that is suffering from no functional pathology to be so systematically out of touch with its experiences would imply a strong dissociation between consciousness and cognition. We have little reason to believe that consciousness is such an ill-behaved phenomenon, and good reason to believe otherwise.

One of the most salient empirical facts about conscious experience is that when a conscious being with the appropriate conceptual sophistication has experiences, it is at least capable of forming reasonable judgments about those experiences. Perhaps there are some cases where judgment is impaired due to a malfunction in rational processes, but this is not such a case.

But I think things are not so clear. In the rest of this section, I’ll consider:

  1. Whether we actually have empirical evidence that rational, focused humans (I’ll omit these qualifications going forward) are reliable judges of their experiences.
  2. Whether, if humans are reliable judges of their experiences, it follows that (F) is true.
  3. Whether we could even in principle have empirical evidence for (F) over competing principles.

My answers are (1) maybe some, but not much, (2) no, and (3) no.

Do we have empirical evidence that humans are reliable judges of their experiences?

I think we have at best little empirical evidence that humans are reliable judges of their experiences.

The problem is that our primary evidence of what people are experiencing is their judgments of what they are experiencing, as reported by them to us. And it would obviously be question-begging to argue, “Dave judged that he is experiencing redness, so—assuming that he is a reliable judge of his experiences—he really is experiencing redness, so his judgment that he is experiencing redness is accurate. [Repeat a few times.] So he is a reliable judge of his experiences.”

You might instead try to use your own experiences and judgments about them as evidence that you, at least, are a reliable judge of your experiences, and then generalize from that. But I don’t think this works, either. It would be nearly as question-begging as the previous argument to argue, “I judged that I am experiencing redness, and—assuming that I am a reliable judge of my experiences—I can tell directly that I really am experiencing redness, so my judgment that I am experiencing redness is accurate…”.

There is other evidence we could try to rely on, which I consider in a footnote,4 but I don’t think it gets us very far.

If humans are reliable judges of their experiences, does it follow that (F) is true?

But even if humans are reliable judges of their experiences, it doesn’t follow that (F) is true. To see this, it helps to compare (F) with a different principle, which will likely be accepted by those who reject (F):

  • (M) It is not possible for a rational being (i) that is suffering from no functional pathology and (ii) that is and always has been made out of the sort of material that humans are made out of to be systematically out of touch with its experiences.

The problem is that the mere fact that humans are reliable judges of their experiences does nothing to support (F) over the more modest (M). After all, humans (so far) are and always have been made out of the material that humans are actually made out of, as opposed to, say, silicon. So it’s consistent with the fact that humans are currently reliable judges of their experiences that if the material that they are made out of were to change, then their judgments about their experiences would become unreliable.5

Could we even in principle have empirical evidence for (F)?

And it’s not just that we don’t, in fact, have empirical evidence for (F) over (M). It’s that we couldn’t possibly have empirical evidence for (F) over (M). This is a point Chalmers himself makes later in the paper:

It is not out of the question that we could actually perform such an experiment [of replacing the neurons in someone’s brain with functionally identical silicon]…But of course there is no point performing the experiment: we know what the result will be. I will report that my experience stayed the same throughout, a constant shade of red, and that I noticed nothing untoward.

So whatever reason we have to accept (F) over (M) can’t be empirical evidence, since we’d expect to have exactly the same empirical evidence whether (F) is true or (M) is true.6

Other principles like (F)

(F) says that it’s impossible for a “rational being that is suffering from no functional pathology” to be systematically out of touch with one specific part of reality—its own experiences. But note that it is not impossible for rational beings to be systematically out of touch with reality more generally. For example, if I was turned into a brain in a vat last night, then I’m now systematically out of touch with my surroundings, despite being no less rational than before. So a generalized version of (F) is false:

  • (F*) It is not possible for a rational being that is suffering from no functional pathology to be systematically out of touch with reality.

(F*) is false because most parts of reality are not determined entirely by our rationality or functional states, and so those parts of reality can vary independently of our rationality and functional states. In the envatting example, my rationality and functional states are held constant while my surroundings change.

Given this, a restricted version of (F*) is plausible only if the relevant part of reality is determined entirely by our rationality or functional states. Of course, this is precisely what functionalists think about experiences. But asserting (F) to argue for functionalism now starts to look question-begging, since (F) is plausible only if functionalism is true.

On the other hand, if you stare at any deductively valid argument for long enough, it starts to look question-begging, so I don’t think that this is a problem with the argument. The important point is just that (F) shouldn’t be taken as obvious or something that follows from a plausible more general principle about rational beings.

Skepticism about our experiences

Another option is to give up trying to provide evidence for (F), and instead argue that we should accept (F) because some sort of skepticism about our own experiences follows if (F) is false. This skepticism could be used as part of a practical argument for accepting (F) when developing a theory of consciousness, as I explain in the next section.

Why think that this skepticism follows if (F) is false? Well, suppose that (F) is false—that it is possible for a rational being that is suffering from no functional pathology to be systematically out of touch with its experiences. Then, to avoid skepticism, we need to come up with some reason to think that we ourselves are not such beings. But it’s not obvious how to do this.

We can’t appeal to empirical evidence that humans are reliable judges of their experiences, because there is no strong empirical evidence of this (as I argued above).

We can’t appeal to considerations of parsimony,7 because there isn’t anything especially unparsimonious about the hypothesis that our judgments about our experiences are systematically mistaken because of the stuff we’re made out of.

We can’t appeal to evolution, because a being that is a reliable judge of its experiences has no greater reproductive fitness than a functionally identical being that is systematically mistaken about its experiences.

The best bet might be to try to find some general reason to think that, although it is possible for a being to be systematically out of touch with its experiences because of the material it is made out of, this is very unlikely. For example, maybe 99% of materials support consciousness, and only 1% don’t. So there is only a 1% chance that we would be made out of a material that fails to support consciousness and would be inaccurate judges of our experiences as a result. But I have no idea how we could learn what percentage of materials support consciousness if functionalism is false.8
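
To put the arithmetic in minimally formal terms (the formalization is mine, not Chalmers’): let $m$ be the proportion of possible materials that support consciousness, and suppose, as a toy assumption, that we were equally likely to have been made out of any given material. Then:

$$P(\text{we are systematically mistaken about our experiences}) = 1 - m = 1 - 0.99 = 0.01$$

Note that the uniform-chance assumption is itself a substantive commitment, over and above the estimate of $m$.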

Practical consequences

So suppose that if (F) is false, then some sort of skepticism about our experiences follows. What are the practical consequences of this?

One practical consequence is arguably that we have good reason to accept (F) when developing a theory of consciousness, even if we don’t have good evidence for (F). After all, we need to rely on knowledge of our own experiences when developing a theory of consciousness. So if (F) is false, and skepticism about our own experiences follows from this, then we can’t develop a theory of consciousness. So when we are developing a theory of consciousness, it’s reasonable to assume that (F) is true, because otherwise we have no hope of succeeding.9

But it’s important not to jump to the conclusion that in every case this gives us good reason to act like (F) is true, or to act like functionalism—which (F) entails, given the rest of the fading qualia argument—is true. For example, I don’t think that anything I’ve said gives us good reason to act like functionalism is true when considering whether to try to upload our minds to computers, or when considering whether we should treat advanced AIs with the respect due to conscious beings.

Here’s an analogy. Suppose that when developing a drug we have, for whatever reason, to make some assumption about its pharmacology. We have no evidence for the assumption—it’s just that we have no hope of succeeding in developing the drug if the assumption is false. Suppose further that if the assumption is true, then the drug is safe and effective. Should this make you feel OK about taking the drug if its safety and efficacy haven’t been verified? I don’t think so.

If we can’t come up with any actual evidence for (F), or any better argument for functionalism, I think we should feel the same way about mind uploading. And I think the mere fact that we should act like (F) is true when doing philosophy of mind shouldn’t, without further argument, affect our treatment of intelligent AIs.10

Conclusion

Here’s a summary of my key claims:

  • We don’t and couldn’t have empirical evidence for (F) over (M)
  • (F) can’t be supported by appeal to a more general principle connecting rationality to true beliefs, because in general there is no necessary connection between the two
  • Skepticism about our experiences arguably follows if (F) is false
  • If skepticism about our experiences does follow if (F) is false, then although this might give us reason to accept (F) when developing a theory of consciousness, it doesn’t give us reason to accept (F) in other circumstances

If all this is right, then how strong is the fading qualia argument? Strong enough to give us practical reason to accept functionalism when doing philosophy of mind, but not strong enough to give us epistemic reason to accept functionalism, or practical reason to accept functionalism in other cases (such as when deciding whether to try to upload our minds to computers).


  1. By ‘functionalism about consciousness’, I mean the view that Chalmers calls ‘the principle of organizational invariance’ in his paper. In the paper, Chalmers also presents a strengthened version of the fading qualia argument called ‘the dancing qualia argument’, but the points I want to make apply equally to both arguments. A similar presentation of the argument is in Chalmers' The Conscious Mind (1996), chapter 7. The fading qualia argument has some influence outside of philosophy. For example, Karnofsky appeals to it in a recent post on the future of humanity. ↩︎

  2. The argument would still work if we substituted the more modest premise that your judgments about whether your experiences have changed over time are determined entirely by your functional states. The stronger premise in the main text might be denied by those (including Chalmers himself) who think that your judgments about the experiences you are having at a given time can be constituted in part by the experiences themselves. If this is right, then your judgments about your experiences are determined entirely by your functional states only if your experiences themselves are also determined entirely by your functional states—and it would be obviously question-begging for a proponent of this argument to assume that they are. ↩︎

  3. All quotes in this post are from Chalmers' paper. ↩︎

  4. Three other possible sources of evidence:

    • Consistency: If I first judge that, at noon today, I experienced only redness, and then a minute later judge that, at noon today, I experienced only blueness, at least one of my judgments is false. If I regularly make inconsistent judgments about my experiences, then I must be an unreliable judge of my experiences, regardless of what my exact experiences are. Conversely, if I never make inconsistent judgments about my experiences, this is at least some (weak) evidence that I am a reliable judge of my experiences.
    • Non-verbal behavior: Use someone’s non-verbal behavior to figure out what they are experiencing, and then compare this to their reported judgments. If I show obvious pain-behavior (say, wincing and screaming) but judge that I am not experiencing anything unpleasant, that’s evidence that I’m an unreliable judge of my experiences. Conversely, if there never is any mismatch between the experiences you’d expect me to be undergoing based on my behavior and those I judge myself to be undergoing, that’s at least some evidence that I’m a reliable judge of my experiences. One challenge here is justifying the assumptions about what sorts of behavior are correlated with what experiences—assumptions that, in these cases, we are supposed to trust over the judgments of a rational, focused person who is undergoing the experiences.
    • Reliability about other things: Suppose that I am a reliable judge of truth in mathematics, chemistry, history, and philosophy. That’s some evidence that I am a reliable judge in general, and also (very indirect) evidence that I am a reliable judge of my experiences in particular. Cf. Sinhababu’s ‘The reliable route from nonmoral evidence to moral conclusions’ (2022).
    ↩︎
  5. Parsimony arguably counts in favour of (F) over (M), since (F) is simpler and arguably less ad hoc than (M). But (F) and (M) aside, parsimony is a general point in favour of functionalism over non-functionalist theories of consciousness. So if parsimony is what’s doing the work in supporting (F), you might as well just use parsimony to argue for functionalism directly instead of making the fading qualia argument. ↩︎

  6. Possible exception: the experiences of the person who is having the parts of their brain replaced with silicon could maybe count as empirical evidence in favour of (F) over (M), though no one else would have access to this evidence. And even the subject of the experiment won’t be able to retain any knowledge of this evidence after the experiment is done, since their judgments at the end will be the same no matter what they experienced along the way. ↩︎

  7. Which we might be able to do in response to the evil demon hypothesis, which is much less parsimonious than the hypothesis that the external world exists. ↩︎

  8. Also, note that if it were the case that 99% of materials supported consciousness, then functionalism would be nearly extensionally correct even if it were false. So the practical consequences of the view appealed to by this strategy for avoiding skepticism are close to the practical consequences of functionalism itself. For example, both have the consequence that intelligent AIs are at least probably conscious. ↩︎

  9. One exception: we don’t need to rely on knowledge of our own experiences when developing an illusionist theory of consciousness. In fact, illusionists about consciousness might claim that skepticism about our experiences gives us reason to accept illusionism about consciousness. But the practical argument for (F) works if we’re trying to develop a theory of consciousness that doesn’t deny its existence, which is what most people studying consciousness are trying to do. ↩︎

  10. A complication is that if we are completely ignorant of our experiences, then we are also ignorant of whether they are good or bad. This arguably lowers the stakes of any decision that potentially affects our experiences. For example, if I don’t know whether I’m having good experiences, then I don’t know whether I would lose anything of value if I were to lose consciousness from a failed attempt to upload my mind to a computer. Note, however, that this skepticism about the goodness or badness of our experiences follows only if we have complete ignorance of our experiences, and not ignorance merely of, say, their intensity or their history over time. ↩︎