(This is the second post in a mini-series on diagnostic reasoning. See here for part 1. The guest author is a co-investigator at the Program for Education Research and Development, McMaster University. – Jonathan (@sherbino))
By Sandra Monteiro (@monteiro_meded)
When proponents of cognitive de-biasing talk about the dangers of cognitive biases, they often ignore the importance of content knowledge. It is often clear from the methods of many social psychology experiments that access to critical pieces of knowledge was restricted.
Participants were recruited specifically because they would be naïve, so that experimenters could measure social preferences in the absence of detailed information. So, if you devise a study that presents people with limited information, including limited access to expert knowledge, and then notice that they are still able to develop preferences or make confident decisions… that tells you something incredibly meaningful about how the brain supports our survival in society.
In the face of limited knowledge, we can still make decisions… often highly functional decisions. That is an incredible piece of evolutionary work. The suggested explanation is that we can link our current experience to something in our past and make a guess… especially when experimenters are telling us we have to respond to every question. Sometimes our guesses are not optimal, but only by the experimenters’ specifications. Critically, the literature contains roughly as many studies supporting the accuracy of heuristics as refuting it.
(Of course, diagnostic accuracy is less relevant in many of these studies, where the goal is to understand the reasoning mechanisms involved.)
But when it comes to evidence suggesting cognitive biases are error-prone, many people don’t know that the game was rigged. The problems presented to people were often artificial and lacked relevance to real life. Sometimes the distinction between a correct and an incorrect answer was so minimal as to be meaningless (Lopes). The solutions to those problems were meant to follow normative logic, which requires some understanding of philosophy, probability or statistics… not part of most people’s knowledge and certainly not common knowledge back in the 70s, when a lot of these studies were completed.
Of course, if you teach people about probabilities or philosophy, or if they have some content expertise in the problems they are asked to solve… they do really well with limited information (Klein, Gigerenzer). Again, that is because we can very rapidly link our current experience to something in our past and make a guess… with a little analysis a guess can become a ‘final answer’… all in a matter of seconds. So the same mechanisms that lead to errors when experimenters manipulate artificial logic problems get us to the correct answer when we are in our own domain… so how are we supposed to know when, occasionally, one of those “guesses” turns out to be incorrect?
The mechanism responsible for rapid retrieval of an incorrect answer operates the same way as when it retrieves a correct answer. It is only our content expertise that provides a subconscious error-checking process (Yeung, Botvinick & Cohen, 2004). That process contributes to a physician’s feeling of unease about an incorrect diagnosis… or their confidence in a correct diagnosis. And that naturally occurring unease is what expert physicians should respond to.
Proponents of cognitive de-biasing would have all physicians override that process and act uneasy about every diagnosis… rather exhausting, I would say, and liable to introduce errors. When physicians were asked to reconsider their previous diagnoses of written medical cases, only about 8% of diagnoses were revised (Monteiro). Most of the time, revisions were applied to incorrect diagnoses, but not always successfully. In a small proportion of cases, revisions were applied to correct diagnoses, making them incorrect. The implication of this finding is that we cannot know what will happen when we second-guess ourselves using cognitive de-biasing approaches, which might introduce error where it didn’t exist before.
Of course, you may still be thinking that you know of evidence that cognitive de-biasing works… take a critical look at that evidence… often it is just rhetoric; other times the evidence comes from contextually different social psychology and is not applicable to diagnostic reasoning. When the evidence does come from medicine, it does not conclusively support de-biasing approaches. True, there was one study showing that diagnostic accuracy improved after physicians were asked to revise their answers… but the experimenters manipulated the physicians into making a mistake in the first place and then effectively pointed out their mistakes (Mamede)… which doesn’t seem to represent the challenges of diagnosing a patient’s condition.
So, to summarize, we can trick people into making mistakes… and once that’s done, we can identify the bias at play. But warning people that those biases are likely to occur will not help them, because a bias will not take the same form every time (Sherbino, Sherbino).
If we look to the social psychologists who started all this, the solution to error in reasoning is education. But they are not referring to educating people about the definitions or functions of biases; they are referring to educating people with facts… about other people. Now that the concept of diagnostic bias has made its way into medicine, the solution is clear… reducing diagnostic error requires educating physicians… with facts about medicine. Isn’t that what medical school and residency are already about?