Functionalism about consciousness (just ‘functionalism’ for short) is the view that a system’s conscious experiences are determined entirely by its functional states.1 If functionalism is true, then a complete computer simulation of my brain would be conscious, and would undergo exactly the same conscious experiences as me.
For functionalism
- The argument from non-exceptionalism:2 If functionalism is false, then there is something exceptional about the material that our brains happen to be made out of. But there probably isn’t anything exceptional about the material that our brains happen to be made out of.
- Foerster (2020), ‘Anthropic bias and the basis of consciousness’
- Schwitzgebel (2020), ‘The Copernican Principle of Consciousness’
- The fading and dancing qualia arguments: If functionalism is false, then it is possible for a “rational being that is suffering from no functional pathology to be systematically out of touch with its experiences”. But this isn’t possible.
- Chalmers (1995), ‘Absent Qualia, Fading Qualia, Dancing Qualia’
- Chalmers (1996), The Conscious Mind, chapter 7
- Chalmers (2010), ‘The Singularity: A Philosophical Analysis’, pp. 37-40
- Foerster (2021), ‘How strong is the fading qualia argument?’
- Mogensen (2024), ‘How to Resist the Fading Qualia Argument’
- The optimistic induction: Functionalist theories are true of all our non-conscious mental states, so a functionalist theory is probably true of our conscious mental states too.
- The argument from simplicity: Functionalism is simpler than every plausible non-functionalist theory, but it accounts for the data about consciousness at least as well as non-functionalist theories do.
- Chalmers (1996), The Conscious Mind, pp. 242-6
- The argument from functionalism about phenomenal judgments: A system’s judgments about its phenomenal states are determined entirely by its functional states, so its phenomenal states themselves are probably also determined entirely by its functional states.
- Shoemaker (1975), ‘Functionalism and Qualia’, pp. 295-9
- Chalmers (1996), The Conscious Mind, pp. 288-92
- The argument from conceptual analysis: Functionalism is a conceptual truth.
- Lewis (1966), ‘An Argument for the Identity Theory’, section III
- Lewis (1972), ‘Psychophysical and Theoretical Identifications’
- Armstrong (1968), A Materialist Theory of the Mind, pp. 82-5
Against functionalism
- The argument from liberalism:
- Counterfactual version: If functionalism is true, then things like an appropriately organized system of water pipes or an appropriately organized country would be conscious. But these things would not be conscious, no matter how they were organized.
- Block (1978), ‘Troubles With Functionalism’, section 1.2
- Searle (1980), ‘Minds, Brains and Programs’, p. 423
- Actual world version: If functionalism is true, then things like countries and economies actually are conscious. But these things are not conscious.
- Schwitzgebel (2014), ‘If materialism is true, the United States is probably conscious’
- Framed as a discussion of materialism, but I think better read as a discussion of functionalism. Also, not framed as an objection to materialism, since Schwitzgebel takes no stand on whether the United States is conscious.
- Triviality version: If functionalism is true, then nearly everything is conscious. But most things are not conscious.
- Putnam (1988), Representation and Reality, pp. 120-5
- Searle (1990), ‘Is the brain a digital computer?’, section IV
- Chalmers (1996), ‘Does a Rock Implement Every Finite-State Automaton?’
- Response to Putnam and Searle
- Godfrey-Smith (2007), ‘Triviality arguments against functionalism’
- Piccinini (2017), ‘Computation in Physical Systems’, section 3
- The pessimistic induction: Functionalist theories are false of the vast majority of things, so a functionalist theory is probably false of consciousness too.
- Searle (1980), ‘Minds, Brains and Programs’, p. 423
- The argument from the intrinsic nature of consciousness: If functionalism is true, then consciousness is a relational property. But consciousness is an intrinsic property.3
- Kim (2005), Physicalism, or Something Near Enough, p. 173
- Van Gulick (2007), ‘Functionalism and Qualia’, pp. 393-4
- Mørch (2019), ‘Is Consciousness Intrinsic?: A Problem for the Integrated Information Theory’
- Focused on IIT specifically and not functionalism more broadly.
Methodological notes
It might seem puzzling how it’s even possible to argue about the truth of functionalism. On the one hand, it seems like functionalism—a theory of consciousness—is the sort of thing that we could learn only by making observations, and so it isn’t a priori. On the other hand, it seems like we’d make exactly the same observations whether or not functionalism is true, and so it isn’t a posteriori.4 But we just saw nine arguments for or against functionalism. What’s going on?
Some of the arguments rely on general theoretical principles. The argument from non-exceptionalism appeals to the Self-Sampling Assumption or a version of the Copernican Principle, and the argument from simplicity appeals to the principle of parsimony. General theoretical principles like these often give us reason to accept one of many observationally equivalent theories. For example, the principle of parsimony arguably counts against dualism in favor of physicalism, even if dualism and physicalism are observationally equivalent.
The two arguments from induction rely on, well, induction—either from other theories of mind (the optimistic induction) or from other theories more generally (the pessimistic induction).
One of the arguments (the argument from conceptual analysis) entails that, despite what you might have thought, it is a priori that functionalism is true.
The rest of the arguments rely on various assumptions about consciousness. The argument from liberalism relies on assumptions about what sort of things could be conscious. These assumptions are supposed to be pre-theoretically obvious or “intuitive”, though it’s unclear whether they are supposed to be knowable a priori or a posteriori.
The argument from the intrinsic nature of consciousness assumes that consciousness is an intrinsic property. Maybe this is supposed to be knowable by conceptual analysis or introspection. Note that if introspection counts as observation, then functionalism and non-functionalism are arguably not observationally equivalent.
The two remaining arguments (the fading and dancing qualia arguments, and the argument from functionalism about phenomenal judgments) rely on assumptions about the relationship between consciousness and cognition. As before, it is unclear whether these assumptions are supposed to be knowable a priori or a posteriori.
I understand ‘determination’ broadly to include causal determination in addition to constitution or grounding. Given this, dualists who believe that there is a law of nature connecting a system’s functional states to its (in their view, non-physical) conscious experiences count as functionalists in this sense. So I don’t consider any arguments for or against dualism here. ↩︎
Most of these names are made up by me, so you probably won’t have luck searching for the arguments by them. The two exceptions are ‘the argument from liberalism’ and ‘the fading and dancing qualia arguments’. ↩︎
Given how I’ve characterized functionalism, this argument arguably succeeds only if physicalism is true. If physicalism is false, then certain functional states might reliably bring about non-physical states of consciousness which are intrinsic, even if the functional states themselves are relational. This would be enough to make functionalism true. ↩︎
I’ve sometimes heard it suggested that if, as an experiment, we simulated someone’s brain and asked the simulation whether it’s conscious, an affirmative answer would be empirical evidence for functionalism. But of course we would expect to get this answer even if functionalism is false. If we didn’t get this answer, it would just mean that we didn’t simulate the brain correctly. ↩︎