Moral realism is the view that there are objective moral truths—objective in that they hold independently of our beliefs, attitudes, and preferences.1 For example, if it is an objective truth that torturing people is wrong, then torturing people would still be wrong even if we all had pro-torture beliefs, attitudes, and preferences. Intuitively, moral realists think that morality is out there in the world, not in our heads.
Does the truth or falsity of moral realism have any practical consequences? There are some trivial ways in which it might. For example, if moral realism is true, then maybe philosophy journals should publish fewer articles developing anti-realist moral views than they currently do. A bit less trivially, if moral realism is false, then maybe we should care less about morality than many moral realists currently do.
But are there any more significant practical consequences of the truth or falsity of moral realism? I think there might be. Consider the following comment by Cixin:
[W]hen [humans] gaze up at the stars, they turn sentimental and believe that if extraterrestrial intelligences exist, they must be bound by universal, noble, moral constraints, as if cherishing and loving different forms of life are parts of a self-evident universal code of conduct. I think it should be precisely the opposite…[W]e should be ever vigilant, and be ready to attribute the worst of intentions to any Others that might exist in space.
Here is one interpretation of Cixin's comment. Cixin is implicitly assuming that moral realism is false. And he draws a practical consequence from this: that we should “be ever vigilant” of aliens. This might involve not giving aliens directions to Earth (as characters in the novels do), and preparing defenses of Earth if we learn of the existence of relatively nearby aliens.
Why is this practical consequence supposed to follow from the falsity of moral realism? My guess is that Cixin has something like the following argument in mind:
1. Moral realism is false.
2. So aliens are less likely to share our moral beliefs than they otherwise would be.
3. So, in particular, they are less likely to share our belief that it is wrong to (say) enslave intelligent creatures than they otherwise would be.
4. So they are more likely to try to enslave humans than they otherwise would be.
5. So we should “be ever vigilant” of aliens.
There are many ways to resist this argument, even granting (1). Here I'll focus on the move from (1) to (2).
To see why (2) is supposed to follow from (1), start by supposing that moral realism is true. If moral realism is true, then the truth that it is wrong to enslave intelligent creatures is out there in the world, waiting to be discovered, like the truth that nothing can travel faster than light, or the truth that momentum is conserved. And aliens probably do share our beliefs that nothing can travel faster than light and that momentum is conserved, provided that they are sufficiently intelligent.
In contrast, if moral realism is false, then moral truths are not out there in the world, waiting to be discovered. Instead, either there are no moral truths at all, or there are moral truths but they depend on us. In the first case, our moral beliefs are uniformly false; in the second, moral truths are like truths about what's funny or what tastes good. And, the thought is, aliens probably don't share either our false beliefs or our beliefs about non-objective matters like what's funny or what tastes good.
It might be objected that there was evolutionary pressure on us to form certain moral beliefs. (In fact, some prominent opponents of moral realism claim precisely this.) If aliens faced the same evolutionary pressure as us, then maybe we should expect them to share our moral beliefs even if moral realism is false. But even if this is right, it still seems more likely that aliens share our moral beliefs if moral realism is true than if it is false. And this is enough to support the move from (1) to (2).
At least, this is what I will mean by ‘moral realism’. Sometimes the term is used to refer to other views. ↩︎