At the time when I was most uncharitable towards psychology, it struck me as obviously absurd that we’d devote centuries of scholarship to treating something grounded in fiction. Our devotion wasn’t limited to time spent in study, but also included the time and money of those who sought out and were treated by what seemed little more than modern equivalents of physicians curing anything and everything via bloodletting with leeches.
The fiction I refer to is the idea that there’s anything to treat besides the brain. A related fiction would then be that there was any room for academics, clinicians, couches and so forth, all dedicated to addressing exhaust fumes of neurological processes – the perceived “mind” rather than the brain. In short, I argued that while the mind might want conversation, the brain would benefit most from medication.
While I’m still fully committed to the stance best captured in cognitive scientist Marvin Minsky’s phrase “minds are what brains do”, my lack of charity was nevertheless misguided and simple-minded. Even if the best way of treating most things might one day be a pill, that doesn’t mean that there’s never any value in stopgap approaches, nor that there aren’t a variety of things that are best treated through other means.
The concern, though, was that psychology begins with an essentially dualist philosophy of mind, where there is a body and also a mind – rather than “simply” a body that includes a brain, which somehow produces minded-ness. The dualist assumption leads to research and treatment premised in treating something that doesn’t exist – or so I smugly asserted.
Leaving aside the sort of over-reaching I describe above, though, it remains true that knowing what sort of an entity you’re attempting to treat (is it merely a worried human being, or is it a human being with a neurological disorder?), and then accurately diagnosing its ailment, are essential to being able to provide relief or a cure.
Doing either of those things is difficult. But that can’t excuse us from the reality that we might sometimes get one – or both – of the identification and then the diagnosis horribly wrong, simply because we’re relying on an outdated model of what the organism is, or of what might be wrong with it.
The same problem occurs in psychiatry. One of those books that I (for a time) tried to persuade everyone to read is Dominic Murphy’s Psychiatry in the Scientific Image (2006), in which he argued for a scientific (rather than medical) approach to understanding mental illness, much like the research programme we’ve seen blossom in the cognitive neurosciences over the last five or so years.
Murphy argued that our concepts and language for describing mental illness depend far too much on observed symptoms, and are frequently also historical artefacts that have not been revised in light of more recent knowledge. A scientific approach would look for causal explanations, sometimes involving neuroscience at the molecular level, sometimes genes or biochemistry (to mention only a few possibilities).
He’s wary of reducing explanations to the lowest, or fully reductionist, level. In Murphy’s view, what we’re aiming for is a level of “robustness”, where an explanation accounts for a disorder across a variety of contexts, and eliminates much of the subjectivity involved in assessing disorders through interpreting symptoms.
In his concluding chapters, Murphy applies these ideas in criticising the Diagnostic and Statistical Manual of Mental Disorders, or DSM, which at the time stood at revision IV. The DSM, put simply, is the authority when it comes to mental disorders. If it’s in the DSM, it’s real, and if it’s not, your chances of claiming from medical aid immediately plummet to vanishingly small.
While a disorder being “real” means that it can now be diagnosed, it remains debatable whether some disorders are in fact “real” in the typical sense of the word – consider fidgeting versus ADHD. While the latter is probably real, it’s also a useful catchall for describing the average child, who might not have been diagnosed with anything more worrisome than “causes annoyance to adults” in the years before we developed medication for ADHD.
The DSM V, due to be released later this month, has attracted significant criticism for both what it has removed (things that are no longer real, in the diagnostic sense) as well as what is now considered disordered, and therefore suddenly real. The class of “behavioural addictions”, for example, has been accused of making a “mental disorder of everything we like to do a lot”.
In arguing that the DSM should classify based on causal explanations, Murphy was fully aware that a causal taxonomy of various disorders would be very difficult to develop, but he nevertheless thought that the work should start, because some progress was possible even in the absence of all the relevant data.
That progress would most likely include – even at this relatively early stage of knowledge in fields like neuroscience – not listing first-time drug users alongside addicts (as the DSM V is apparently set to do), because the explanation for dabbling with a drug will quite often not be the same as the explanation for why someone becomes addicted to that drug.
It’s therefore pleasing to be able to report that progress towards a scientific psychiatry has recently taken a sharp upward turn, with the National Institute of Mental Health (NIMH) announcement in April that it will no longer be conducting research in accordance with the DSM V categories.
Instead, the NIMH has launched a 10-year Research Domain Criteria project “to transform diagnosis by incorporating genetics, imaging, cognitive science, and other levels of information to lay the foundation for a new classification system”, where that system will be based “on the emerging research data, not on the current symptom-based categories”.
This decision is sure to attract criticism, but we should save the strongest criticism for fields of research, scientists and medical professionals who have access to new data, but choose to ignore or downplay that data in favour of avoiding revolution. As the NIMH statement concludes: “At the end of the 19th century, it was logical to use a simple diagnostic approach that offered reasonable prognostic validity. At the beginning of the 21st century, we must set our sights higher.” DM