The previous instalments in this series on morality have argued that we are handicapped in our ability to engage in moral debate. This handicap stems from our overconfidence in, and complacency towards, our existing moral beliefs, as well as from the lack of guidance offered by the dominant moral theories. But a negative proof – showing what might be wrong with existing beliefs – is often an easier task than a positive argument for some viable alternative. The positive argument is the focus of the final two parts of this series.
These concluding instalments can perhaps be summarised in the claim that moral knowledge is just like any other knowledge, and should therefore be understood and debated using the same tools and resources we deploy in other areas of epistemological contestation. The most successful tools and resources we’ve found so far are those of the scientific method, and I will argue that what we need is a “science of morality”.
The idea of a science of morality has lately enjoyed increased public attention thanks to Sam Harris and the recent publication of his book “The Moral Landscape”. Many columns and reviews – including some from prominent moral philosophers – have been quick to dismiss Harris as philosophically ignorant, mostly on the basis that he fails to take the concerns of Hume seriously. Hume, the critics say, told us that one cannot derive an “ought” from an “is” – in other words, that empirical observations about what is the case cannot tell us how things ought to be.
But instead of being willing to contemplate the possibility that Hume was wrong, or that Hume can be misunderstood, these refutations of Harris’s arguments usually amount to the simple assertion that Hume’s Guillotine (as the argument is known) shows he is wrong. It’s useful to remind ourselves that a simple appeal to authority is a logical fallacy – what matters is not who said what, but whether what they say stands up to logical scrutiny. This is what Hume says, in a passage from “A Treatise of Human Nature” (1740):
“In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.”
Read that last sentence again: Hume says that the derivation of “ought” from “is” needs to be explained, and that a reason should be given. He does not say that such explanations are impossible, or that no relevant reasons for such derivations exist. So here we have a clear example of how appeals to authority can appear convincing, even to many who regard themselves as well acquainted with the relevant literature. Now, of course I’m simplifying – an opinion piece does not allow for excursions into subsequent work by Moore and others in which this is/ought (or fact/value) distinction is further explored and defended.
But Harris is not the first to think this distinction is at best misleading, or even false. Those who think empirical facts can tell us nothing about morality could spend some time reading the work of Railton, Jackson, Boyd, Binmore, Churchland and others who have presented strong cases for the possibility that facts about the world can indeed tell us something about morality. As I’ve previously argued, the idea that morality involves absolute principles has enjoyed the privilege of being grounded in dogmatic faith – whether religious or secular – and that faith doesn’t necessarily correspond to actual justification.
So if we are to entertain the notion that values can be derived from facts, how should we proceed in doing so? Applying the scientific method does not have to amount to scientism – though for some it unfortunately does. The more modest and useful perspective is to recognise what it is we value about science, and why we find it so useful: it provides us with the best possible answers to questions that potentially have answers, and allows us to make the sorts of predictions about the future most likely to be borne out by subsequent observations.
It does not offer us guarantees, and it never has. It’s important here to reflect on the difference between a lay understanding of science as offering absolute certainty, versus the actual products of scientific inquiry, which are always qualified by reference to statistical tools like margins of error and confidence levels. These things are usually not reported in the mainstream press, but are universally present in any respectable scientific publication.
To take an extreme example: It’s virtually certain my habit of smoking cigarettes will lead to my suffering some unpleasant health consequences in the future. But when we say things like “smoking causes cancer”, that shorthand statement stands in for something far more complicated. A more accurate utterance would be something like “thanks to a vast body of empirical data, the most plausible hypothesis is that smoking has a positive causal relation to cancer, and we can confidently predict that Jacques is likely to develop cancer thanks to this behaviour”.
Many of our hypotheses and predictions do not allow for as much confidence as the example of smoking does. But as soon as there is any evidence – any evidence at all – the possibility exists for us to make better and worse predictions about the consequences of our actions. And we do have some evidence related to the sorts of things that allow for increases or decreases in the welfare of sentient creatures.
Following the advice offered in John Watson’s best-selling childcare book of 1928 – that you should not kiss your child more than once a year – will almost certainly have a negative effect on the welfare of that child, other things being equal. This fact about what conduces to your child’s welfare, combined with the premise that welfare is morally relevant, allows us to infer the moral principle that it is wrong to neglect your child’s emotional needs.
If you agree that there are some aspects of welfare that can be measured – and if you agree that morality has something to do with welfare – then it seems plain that facts about the world can tell us something about how we should live in that world and how we should treat, not only each other, but also other sentient creatures. We have some data, and any amount of data allows for us to make better and worse predictions regarding the consequences of adopting one moral principle versus another.
Of course we don’t have certainty. But we don’t have it anywhere else either, and it is unclear why this should count as a flaw for moral knowledge, yet not for any other kind of knowledge. This double standard has no justification, and seems little more than an excuse to do less thinking. DM