Annually since 1948, the BBC has broadcast a short series of lectures named in honour of its founder, Lord Reith. This year’s series is being given by Atul Gawande. Although his subject is the nature of progress and failure in medicine, the two lectures delivered thus far resonate well beyond that field. I want to pick out a few points from those two lectures as they relate to the way we deal with knowledge in our work. The remaining two lectures have a slightly different focus, so I may look at those in a later post.
Lecture 1: Why Do Doctors Fail?
(Audio | Transcript)
At the heart of Gawande’s first lecture is an article published in the first issue of the Journal of Medicine and Philosophy in 1976: “Toward a Theory of Medical Fallibility” by Samuel Gorovitz and Alasdair MacIntyre. As Gawande summarises:
They said there are two primary reasons why we might fail. Number one is ignorance: we have only a limited understanding of all of the relevant physical laws and conditions that apply to any given problem or circumstance. The second reason, however, they called “ineptitude”, meaning that the knowledge exists but an individual or a group of individuals fail to apply that knowledge correctly.
In addition to ignorance and ineptitude, however, Gorovitz and MacIntyre identified a third cause of failure:
they said that there is necessary fallibility, some knowledge science can never deliver on. They went back to the example of how a given hurricane will behave when it will make landfall, how fast it will be going when it does, and what they said is that we’re asking science to do more than it can when we ask it to tell us just what exactly is going on. All hurricanes are ones that follow predictable laws of behaviour but no hurricane is like any other hurricane. Each one is unique. We therefore cannot have perfect knowledge of a hurricane short of having a complete understanding of all the laws that describe natural processes and a complete state description of the world, they said. It required, in other words, omniscience, and we can’t have that.
This necessary fallibility is akin to, if not the same as, the complexity that I described in an earlier blog post here.
Interestingly, Gawande chooses not to focus on necessary fallibility, but on the other two components. In particular, he is concerned that there is an uneven distribution of capabilities:
But the story of our time, I think, has now become in a unique way as much a story about struggling with ineptitude as struggling with ignorance. You go back a hundred years, and we lived in a world where our futures were governed largely by ignorance. But in this last century, we’ve come through an extraordinary explosion of discovery and then the puzzle has become not only how we close the continuing gaps of ignorance open to us but also how we ensure that the knowledge gets there, that the finger probe is on the right finger.
There’s a misconception I think about global health. We think global health is about care in just the poorest parts of the world. But the way I think about global health, it’s about the idea of making care better everywhere – the idea that we are trying to deploy the capabilities that we have discovered over the last century, town by town, to every person alive.
I think something similar is afoot in relation to legal knowledge. Those of us who work on improving knowledge within law firms often focus on the things that look hard — understanding new cases and legislation, for example — but in fact clients would get better value if their lawyers thought more carefully about the laws and processes that they take to be straightforward. Reducing ineptitude within firms is arguably more important than attempting to eliminate legal ignorance. Equally, there is much to be gained from spreading awareness of the law more widely outside law firms. This is an area where I see a number of technology-based enterprises at work, as well as the work of the National Archives in opening up the UK’s legislative archive.
Lecture 2: The Century of the System
(Audio | Transcript)
Atul Gawande’s second lecture draws heavily on his book The Checklist Manifesto. I thought I had already written about this book on the blog, but it turns out I haven’t. It is probably too late to do that at length now, since the concept has found its way deep into business culture. For example, we put it at the heart of some of the risk and quality work that I supported in my last firm.
What the lecture brings out is an emphasis on the checklist as a systematic tool, rather than a personal guide. This is present in the book as well, but when one hears Gawande speak the focus is unavoidable.
One of my colleagues said that “we are graduating from the century of the molecule to the century of the system.” And by that what he meant was that we’ve gained an enormous amount in the last century by focusing on reducing problems to their atomic particles – you know discovered the gene that underlies disease or the neuron that underlies the way our brain works or you know the super specialist that can deliver on a corner of knowledge – but what we’re discovering is that we graduate into the future, we are faced with a world where it’s how the genes connect together that actually determine what our diseases actually do. It’s how the neurons connect together and form networks that create consciousness and behaviour, and it’s in fact how the drugs and the devices and the specialists all work together that actually create the care that we want. And when they don’t fit together, we get the experience we all have – which is that care falls apart. The basics end up being known, but they’re not followed.
And so we were approached by the World Health Organisation several years ago with a project to try to reduce deaths in surgery. I thought how can you possibly do that? But it was in exactly the same kind of problem – the basics were known but not necessarily followed. And so we worked with a team from … from the airline industry to design what emerged as just a checklist – a checklist though that was made specifically to catch the kinds of problems that even experts will make mistakes at doing. Most often basically failures of communications. The checklist had some dumb things – do you have the right patient, do you have the right side of the body you’re operating on, have you given an antibiotic that can reduce the infections by 50 per cent, have you given it at the right time? But the most powerful components are does everybody on the team know each other’s name and role, has the anaesthesia team described the medical issues the patient has? Has the surgeon briefed the team on the goals of the operation, how long the case will take, how much blood they should be prepared to give? Has the nurse been able to outline what equipment is prepared? Are all questions answered? And only then do you begin.
The outcome of this work was a huge reduction in complication rates (down 35%) and deaths (down 47%). The system has been shown to have saved 9000 lives in Scotland alone.
The lectures are followed by an opportunity for the audience to ask questions, and it is here that some of the most telling points were brought out. In response to a question from an operations manager at Heathrow Airport, Gawande highlighted a point about complexity and the limitations of expecting everyone to know their own job.
In fact in order to even come at how we would attack this question in surgery, what we did was we brought in the lead safety engineer from Boeing to come with us. He didn’t know anything about healthcare, but when he saw the way that we even approached the problem of improving outcomes in surgery, he was sort of baffled, you know, that he would watch how I went into an operating room and I’d go into an operating room and I’d just start operating. And he said, “Hold on a minute. Is this really what you do? You don’t … Have you made a plan with every …” “Everybody knows what to do. They all know what to do. You guys know what to do, right?” “Oh yeah, yeah, yeah, we know what to do.” And then we’d watch one thing fall through the cracks and then another and then another. It took him only a moment to step back and say, “You all need some basic communication systems around the idea that a team has to be effective at what they’re doing.” So I think that there are lessons very much coming from other fields.
Here’s the big difference. There are two people in a cockpit trying to make something happen and in many clinical environments it’s many more than that. My mother went for a total knee replacement and I counted the number of people who walked in the room in three days and it was 66 different people. And so the complexity of making 66 people work together – you know you’d have the physical therapist walk in in the morning and they’d say, “What are you doing in bed? You should be out of bed.” And the physical therapist would come in the afternoon and it would be a different person and they’d say, “What are you doing out of bed? You should be in bed.” This is still where we are.
And responding to a suggestion that checklists might ossify and hinder innovation:
That’s precisely the danger. So there’s the bad checklist and the good checklist, right? So the bad one is one that turns people’s brains off. More often than not, the effective checklist – ask people questions that they have to discuss and get their ideas forward – and that was out of a scientific process that we identified and it’s made in ways to help an expert be even better at what they do.
For me, those are the two lasting insights from the lecture. First, checklists need to be as much about communication as they are about giving instructions. And, second, checklists should be structured to draw out additional thought and contributions from the team using them. Both of these insights can usefully inform practice in a range of areas (including the law) and would be sensibly applied in the generation of knowledge materials — whether those come in the form of checklists or otherwise.
Learning and developing
One of the things that comes across clearly in both of these lectures is a commitment to nuanced learning. Like most of his fellow physicians, Gawande is clearly keen on increasing his personal knowledge within his field and beyond. Both lectures depend on insights from other people’s published research, and Gawande shows how those insights have more general application. However, it is obvious that he isn’t interested just in the knowledge. He wants to be able to express ideas clearly to others, and he does this really well with a coherent narrative thread running through each lecture (this continues in the later lectures too). Finally, he is alert to the way knowledge informs practice. The second lecture is based on a paper describing treatment procedures for hypothermic victims of drowning. However, Gawande extracts from this highly-specialised situation a set of principles that might be relevant to any complex treatment.
Gawande’s approach thus has the following characteristics:
- Breadth of input
- Evidence-based narrative
- Thoughtful generalisation
- Relevant conclusions
Each of those factors increases the immediate and lasting value of the final product to the listener or reader.
Thinking back to some of the knowledge content for which I have been responsible in the past, I am not sure that much of it was as well structured as Gawande’s lectures. As a result it probably had much less value than it could have done — certainly not lasting value.
It’s a good standard to aim for.