A Primer on Medical Cognition

One subject that I am perennially interested in is how people in general, and doctors in particular, make decisions and judgments, and how that process can go wrong. I’ve written about the pitfalls of spiraling empiricism, cognitive dissonance, uncertainty and medical decision-making, the Dunning-Kruger effect, why clinical experience is often unreliable, and other ways in which general human psychology, and the attitudes, training, and approaches of doctors in particular, can lead us to erroneous conclusions and bad clinical decisions. Now a colleague has introduced me to the field of medical cognition, through a dense and often painful-to-read but fascinating and informative article:

Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition for medical informatics. J Am Med Inform Assoc. 2001;8:324–343.

There is a lot that I am not qualified to understand or evaluate in the article, particularly as it pertains to the details of electronic medical record systems, artificial intelligence, and so on. But some of the insights gleaned from the cognitive psychology literature on how doctors develop decision-making strategies, and how these change with education and experience, seem very relevant to everyday clinical practice. One well-established concept is that experienced doctors, and those with highly developed skills and expertise, are better able to filter out irrelevant information and to attend to and classify what is most important in establishing a diagnosis. As the authors put it, research identifies:

…the greater ability of expert physicians to selectively attend to relevant information and narrow the set of diagnostic possibilities…The ability to abstract the underlying principles of a problem is considered one of the hallmarks of expertise, both in medical problem solving and in other domains.

One of the models for how experts achieve this is the concept of schemata:

…mental representations of typical things (e.g. diseases) and events (e.g. episodes of illness)…[which serve] as a “filter” for distinguishing relevant and irrelevant information. Schemas can be considered generic knowledge structures that contain slots for particular kinds of findings (data). For instance, a schema for myocardial infarction most likely contains the findings “chest pain,” “sweating,” but not the finding “goiter,” which is part of the schema for thyroid disease…A function of schemata is to provide a “filtering” mechanism to experts, allowing them to selectively attend to significant information and discard irrelevant clinical information.

Experts process information at a level of abstraction that is most efficient and reduces the burden on memory. Through years of experience, they have learned to conceptualize medical information (e.g., clinical findings from a patient) in terms of constructs…intermediate between the concrete level of particular signs and symptoms and the more abstract nature of diagnoses… In contrast, less experienced physicians tend to process medical information at a more detailed level.
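To make the “slots and filter” idea concrete, here is a minimal sketch in Python (my own illustration, not from the paper; the disease names and findings are hypothetical examples) of a schema as a generic structure whose slots sort a patient’s findings into relevant and irrelevant:

```python
# Illustrative sketch only: a "schema" as a knowledge structure with slots
# for expected findings, used to filter a patient's data.
# Disease and finding names are hypothetical, not clinical guidance.

SCHEMAS = {
    "myocardial infarction": {"chest pain", "sweating", "dyspnea"},
    "thyroid disease": {"goiter", "weight change", "heat intolerance"},
}

def filter_findings(findings, schema_name):
    """Split observed findings into those the schema expects and the rest."""
    slots = SCHEMAS[schema_name]
    relevant = [f for f in findings if f in slots]
    irrelevant = [f for f in findings if f not in slots]
    return relevant, irrelevant

observed = ["chest pain", "sweating", "goiter"]
relevant, irrelevant = filter_findings(observed, "myocardial infarction")
print(relevant)    # ['chest pain', 'sweating']
print(irrelevant)  # ['goiter'] -- fits the thyroid disease schema instead
```

A real expert’s schemata are of course far richer, and probabilistic rather than all-or-nothing, but even this toy version shows how a schema lets irrelevant findings drop out of consideration early.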

Building these schemata and learning to process the huge amount of available information (history, physical examination findings, bloodwork, imaging, etc.) efficiently and effectively takes a long time.

This research has shown that, on average, the achievement of expert levels of performance in any domain requires about ten years of full-time experience.

Developing such expertise certainly requires acquiring specific factual knowledge. However, research suggests that the importance of facts in building competence is often overestimated.

Factual knowledge involves merely knowing a fact or set of facts (e.g., risk factors for heart disease) without any in-depth understanding. Facts are current truth and may become rapidly out of date. The acquisition of factual knowledge alone is not likely to lead to any increase in understanding or behavioral change. The acquisition of conceptual knowledge involves the integration of new information with prior knowledge and necessitates a deeper level of understanding…Factual knowledge is inherently more brittle than conceptual knowledge, and this brittleness is most acutely observed in unfamiliar situations.

This certainly accords with my own experience of transitioning from a new graduate to an experienced veterinarian. I sometimes feel as if much of the detailed factual knowledge laboriously acquired and regurgitated in veterinary school has left me, yet I am able to identify the important pieces of information in a given case and relate them to the relevant criteria for diagnosis or treatment much more easily than the new graduates I work with. And, of course, facts are always available to be looked up when needed.

Interestingly, research on the development of expertise does not seem to support the popular, conventional model of how one gets better at a complex skill. For one thing, the process is not a steady accretion of knowledge and skill but follows an erratic, unsteady trajectory. And in many cases, as one shifts from the detailed, algorithm-driven, formalized method of a novice to the more efficient, heuristic approach of an expert, one’s competence may actually decline, a phenomenon the authors refer to as the “intermediate effect.”

Cross-sectional studies of experts, intermediates, and novices have shown that, on some tasks, people at intermediate levels of expertise may perform more poorly than those at lower levels of expertise, a phenomenon known as the “intermediate effect.” When novice–intermediate–expert data are plotted…the performance of intermediate subjects (those who are on their way to becoming proficient in a domain but have not reached the level of experts) declines to a level below that of novices…

This literature suggests that human development and learning does not necessarily consist of the gradually increasing accumulation of knowledge and skills. Rather, it is characterized by the arduous process of continually learning, re-learning, and exercising new knowledge, punctuated by periods of apparent decrease in mastery and declines in performance. Given the ubiquity of this phenomenon, we can argue that such decline may be a necessary part of learning.

One theory that occurs to me to explain this, and which the authors don’t appear to consider, is that the fundamental nature of the shift from novice to expert may itself be counterproductive in some ways. Novices tend to follow explicit rules and patterns taught to them for sorting and utilizing information and solving problems. Experts tend to have internalized these rules and often process information and draw conclusions without explicit, conscious awareness of the thought processes involved. While this is inarguably more efficient, it significantly raises the risk of bias. It is well established that the risk of drawing incorrect conclusions increases when explicit, objective controls for unconscious bias are not used. Barry Beyerstein has created a list of the cognitive biases and errors that can lead to incorrect clinical decisions, and many of these seem to involve relying on instinct or intuition, which are colloquial labels for exactly the kind of unconscious information processing the authors of this article characterize as the hallmark of an expert. (I have adapted and modified this list to suit the veterinary profession.)

Human Psychology

Even when no objective improvement occurs, people with a strong psychological investment in a pet can convince themselves the treatment has helped. And doctors, who want very much to do the right thing for their patients and clients, have a vested interest in the outcome as well. A number of common cognitive phenomena can influence one’s impression of whether a treatment helped or hurt a patient. Here’s a brief list of common cognitive errors in medical diagnosis. Any of these sound familiar?

a. Cognitive Dissonance: When experiences contradict existing attitudes, feelings, or knowledge, mental distress is produced. People tend to alleviate this discord by reinterpreting (distorting) the offending information. If no relief occurs after committing time, money, and “face” to a course of treatment, internal disharmony can result. Rather than admit to themselves or to others that their efforts have been a waste, many people find some redeeming value in the treatment.

b. Confirmation Bias: Another common reason for our impressions and memories to misrepresent reality. Practitioners and their clients are prone to misinterpret cues and to remember things as they wish they had happened. They may be selective in what they recall, overestimating their apparent successes while ignoring, downplaying, or explaining away their failures. Or they may notice the signs consistent with their favored diagnosis and ignore or downplay aspects of the case inconsistent with it.

c. Anchoring: The tendency to perceptually lock onto salient features in the patient’s initial presentation too early in the diagnostic process and to fail to adjust that initial impression in light of later information. This error may be severely compounded by confirmation bias.

d. Availability: The disposition to judge things as more likely, or more frequently occurring, if they readily come to mind. Thus, recent experience with a disease may inflate the likelihood of its being diagnosed. Conversely, if a disease has not been seen for a long time (is less available), it may be underdiagnosed.

e. Commission Bias: The tendency toward action rather than inaction, arising from the obligation toward beneficence and the sense that harm to the patient can be prevented only by active intervention. It is more likely in overconfident veterinarians. Commission bias is less common than omission bias.

f. Omission Bias: The tendency toward inaction, rooted in the principle of nonmaleficence. In hindsight, events that occur through the natural progression of a disease are more acceptable than those that may be attributed directly to the action of the veterinarian. The bias may be sustained by the reinforcement often associated with not doing anything, but it may prove disastrous.

g. Diagnosis Momentum: Once diagnostic labels are attached to patients, they tend to become stickier and stickier. Through intermediaries (clients, techs, other vets), what might have started as a possibility gathers increasing momentum until it becomes definite and all other possibilities are excluded.

h. Feedback Sanction: Making a diagnostic error may carry no immediate consequences, as considerable time may elapse before the error is discovered, if it ever is, or poor feedback processes may prevent important information about a decision from getting back to the decision maker.

i. Gambler’s Fallacy: Attributed to gamblers, this is the belief that if a fair coin is tossed ten times and comes up heads each time, the 11th toss has a greater chance of being tails (even though a fair coin has no memory). An example would be a vet who sees a series of patients with dyspnea, diagnoses all of them with CHF, and assumes the sequence will not continue, so the next dyspneic patient must have something else. Thus, the pretest probability of a particular diagnosis may be influenced by preceding but independent events (a short simulation after this list shows why this reasoning fails).

j. Posterior Probability Error: Occurs when a vet’s estimate of the likelihood of disease is unduly influenced by what has gone before for a particular patient. It is the opposite of the gambler’s fallacy, in that the doctor is gambling on the sequence continuing.

k. Hindsight Bias: Knowing the outcome may profoundly influence the perception of past events and prevent a realistic appraisal of what actually occurred. In the context of diagnostic error, it may compromise learning through either an underestimation (illusion of failure) or an overestimation (illusion of control) of the decision maker’s abilities.

l. Overconfidence Bias: A universal tendency to believe we know more than we do. Overconfidence reflects a tendency to act on incomplete information, intuitions, or hunches, placing too much faith in opinion instead of carefully gathered evidence. The bias may be augmented by both anchoring and availability, and catastrophic outcomes may result when there is a prevailing commission bias.

m. Premature Closure: A powerful error accounting for a high proportion of missed diagnoses: accepting a diagnosis before it has been fully verified. The consequences of the bias are reflected in the maxim “When the diagnosis is made, the thinking stops.”

n. Search Satisfying (Satisficing): Reflects the universal tendency to call off a search once something is found. Comorbidities, second foreign bodies, additional fractures, and co-ingestants in poisonings may all be missed. And if the search yields nothing, diagnosticians should satisfy themselves that they have been looking in the right place.
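The gambler’s fallacy in item (i) is easy to demonstrate numerically. Here is a short simulation sketch in Python (my own illustration, not from any of the sources above): it estimates the chance that the 11th toss of a fair coin comes up tails given that the first ten were heads, and the answer stays at about 0.5, because independent events have no memory:

```python
# Illustrative sketch: after ten heads in a row, a fair coin is still
# 50/50 on the next toss, because independent events have no memory.
import random

random.seed(1)
ten_heads_runs = 0
tails_on_11th = 0

for _ in range(2_000_000):
    flips = [random.random() < 0.5 for _ in range(11)]  # True = heads
    if all(flips[:10]):            # condition on ten heads in a row
        ten_heads_runs += 1
        if not flips[10]:          # was the 11th toss tails?
            tails_on_11th += 1

print(tails_on_11th / ten_heads_runs)  # ~0.5, not higher
```

The vet in the dyspnea example is making the same mistake: each new patient’s pretest probability does not depend on the run of prior diagnoses.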

It may be that there are advantages to the deliberate processes followed by novices, and that the adjustments made to them in the name of speed and efficiency aren’t always favorable to accuracy. Of course, there is evidence that experts truly are better at arriving at correct conclusions than novices, so the heuristic methods they develop are effective most of the time. But a key element in encouraging the adoption of evidence-based medicine is inculcating adequate self-doubt. Explicit, objective methods of analysis are less prone to bias and error than reliance on our own perceptions and internalized, unconscious decision-making processes. This reliance on the explicit and the objective is most critical in controlled clinical research, but it is also a useful way to reduce error in day-to-day clinical practice. So in recognizing and emulating the heuristic practices of experts, we must not neglect their pitfalls, and we should include tools and methods for compensating for these weaknesses.
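One small example of what such an explicit, objective method can look like in practice (my own illustration, not from the article; all numbers are made up) is writing down the Bayesian arithmetic of a diagnosis instead of trusting a gut estimate, combining a pretest probability with a test’s sensitivity and specificity:

```python
# Illustrative sketch: an explicit Bayes calculation as a check on intuition.
# All numbers are hypothetical, chosen only to show the arithmetic.

def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Posttest probability of disease given a test result (Bayes' theorem)."""
    if positive:
        true_pos = pretest * sensitivity
        false_pos = (1 - pretest) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)
    true_neg = (1 - pretest) * specificity
    return false_neg / (false_neg + true_neg)

# A positive result on a decent test for an uncommon disease still leaves
# the diagnosis far from certain:
print(round(posttest_probability(0.05, 0.90, 0.90), 3))  # 0.321
```

Making the numbers explicit in this way is exactly the kind of control on anchoring, availability, and overconfidence that intuition alone does not provide.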


3 Responses to A Primer on Medical Cognition

  1. Art says:

    When I first got out of school, after working for vets since I was 8, I had the observational opinion that vets in the middle of their careers were the best ones. Now that I am an old expert, I am glad to see the data show I was wrong, since the intermediate doctor level shows a drop. Is the word “expert” defined other than as someone who has worked in the field a long time? Is the intermediate doctor with a dip in skill measured in any way other than time?
    Art Malernee dvm

  2. skeptvet says:

    Expert is generally defined in terms of proficiency, not simply time in practice. How exactly that is usually measured I don’t know.

  3. Art says:

    The intermediate effect sure is strange to me. I would love to read how they measured what an expert is. I do not like the word “expert.” Sackett, I think, was the one who said that when docs become experts they need to find another job.

    Art Malernee dvm
