Vacchablogga

October 10, 2020

Arguments for and against functionalism

Functionalism about consciousness (just ‘functionalism’ for short) is the view that a system’s conscious experiences are determined entirely by its functional states.1 If functionalism is true, then a complete computer simulation of my brain would be conscious, and would undergo exactly the same conscious experiences as me.

For functionalism

Against functionalism

Methodological notes

It might seem puzzling how it’s even possible to argue about the truth of functionalism. On the one hand, it seems like functionalism—a theory of consciousness—is the sort of thing that we could learn only by making observations, and so it isn’t a priori. On the other hand, it seems like we’d make exactly the same observations whether or not functionalism is true, and so it isn’t a posteriori.4 But we just saw nine arguments for or against functionalism. What’s going on?

Some of the arguments rely on general theoretical principles. The argument from non-exceptionality appeals to the Self-Sampling Assumption or a version of the Copernican Principle, and the argument from simplicity appeals to the principle of parsimony. General theoretical principles like these often give us reason to accept one of many observationally equivalent theories. For example, the principle of parsimony arguably counts against dualism in favor of physicalism, even if dualism and physicalism are observationally equivalent.

The two arguments from induction rely on, well, induction—either from other theories of mind (the optimistic induction) or other theories more generally (the pessimistic induction).

One of the arguments (the argument from conceptual analysis) entails that, despite what you might have thought, it is a priori that functionalism is true.

The rest of the arguments rely on various assumptions about consciousness. The argument from liberalism relies on assumptions about what sort of things could be conscious. These assumptions are supposed to be pre-theoretically obvious or “intuitive”, though it’s unclear whether they are supposed to be knowable a priori or a posteriori.

The argument from the intrinsic nature of consciousness assumes that consciousness is an intrinsic property. Maybe this is supposed to be knowable by conceptual analysis or introspection. Note that if introspection counts as observation, then functionalism and non-functionalism are arguably not observationally equivalent.

The two remaining arguments (the fading and dancing qualia arguments, and the argument from functionalism about phenomenal judgments) rely on assumptions about the relationship between consciousness and cognition. As before, it is unclear whether these assumptions are supposed to be knowable a priori or a posteriori.


  1. I understand ‘determination’ broadly to include causal determination in addition to constitution or grounding. Given this, dualists who believe that there is a law of nature connecting a system’s functional states to its (in their view, non-physical) conscious experiences count as functionalists in this sense. So I don’t consider any arguments for or against dualism here. ↩︎

  2. Most of these names are made up by me, so you probably won’t have luck searching for the arguments by them. The two exceptions are ‘the argument from liberalism’ and ‘the dancing and fading qualia arguments’. ↩︎

  3. Given how I’ve characterized functionalism, this argument arguably succeeds only if physicalism is true. If physicalism is false, then certain functional states might reliably bring about non-physical states of consciousness which are intrinsic, even if the functional states themselves are relational. This would be enough to make functionalism true. ↩︎

  4. I’ve sometimes heard it suggested that if, as an experiment, we simulated someone’s brain and asked the simulation whether it’s conscious, an affirmative answer would be empirical evidence for functionalism. But of course we would expect to get this answer even if functionalism is false. If we didn’t get this answer, it would just mean that we didn’t simulate the brain correctly. ↩︎