Testing… How can you be sure you aren’t doing the wrong thing?

I had a long walk today, accompanied by a number of podcasts. One of them was new to me (although it has been going for some years): The Infinite Monkey Cage. This episode was on the appropriation of quantum physics by various strands of pseudoscience. It was a really interesting discussion about the way scientific concepts are misinterpreted and what might motivate that.

At one point, one of the guests, Jeff Forshaw, made a really important point about the nature of scientific investigation that is often lost on non-scientists. Where does confidence in science come from, given that research (by definition) provides answers to questions that have never been answered before? At 37′ 42″ in the podcast, he says:

My trust in other scientists comes from — and I often ask this when I am doing things like PhD exams — “so what did you do to demonstrate that this isn’t wrong? How much have you tried to break what you’ve done?” I trust the professional scientists who have spent a lot of time [doing this] (and I expect the answer to that to be “yeah, we tried everything: it just won’t be wrong”).

(I have slightly tidied up the transcript for clarity.)

This process of challenge is inherent to good science — it is built into the peer review that all research goes through before publication. Actively welcoming criticism is also part of scientific culture, as another guest, Ben Goldacre, pointed out at an earlier point in the discussion (34′ 55″ in the podcast):

You know, the Q&A after a work-in-progress seminar or a conference presentation is often a blood bath. But it’s all consensual. In general people don’t take it personally — its a consenting intellectual S&M activity — and we know that it’s good for our soul. We welcome it, and we want it because we know that’s how we will purify our ideas.

Lake District lintel

This made me think about decisions made in other contexts. In particular, how often do clients challenge the advice their lawyers give them in this way? I know that some will — and hard. Equally, I am sure that some are looking for reassurance that their preferred course of action is permissible and so are not inclined to push their lawyers to prove that what they are hearing is not wrong. Similarly, when firms make their own business decisions, can they always be sure that those decisions are pure and trustworthy?

One of the Cognitive Edge methods can be useful here. This is Ritual Dissent, which can be seen as a way of using Jeff Forshaw’s questions in the context of business decisions or choices — subjecting them to robust critique and testing so that the wider organisational community can comfortably trust them.

This technique, along with others derived from the same source, has the power to lead organisations to much better decision-making. Please get in touch if you are interested in knowing more about how your firm might benefit.

If I Only Had a Brain — how to become the wisest in Oz

Last week, a random tweet by James Grandage — a wish for a brain — prompted a chain of thought. My response was to suggest that he had one already.

On reflection, however, it appears that James was seeking what many firms want — a brain for the whole organisation. To be able to create and recall institutional memories, to process sensations gathered by ears and eyes and to use those sensations to engage with other organisations (or people) and their brains.

In the name of knowledge management, many organisations have created databases and repositories that are intended to operate as brains as far as the technology will allow. Unfortunately, their actual performance often falls somewhat short of this promise. Why might this be?

One answer is suggested by the experience of the Scarecrow in L. Frank Baum’s The Wizard of Oz. You will recall that he accompanies Dorothy on her journey to Oz in order to ask the Wizard for a brain, because that is what he wants above all else. As they travel down the Yellow Brick Road, the Scarecrow shows by his actions that in fact he has a brain, and can use it. When they get to Oz, he is recognised as the wisest man there.

Many law firms are on a similar journey. They labour in the belief that all they need to complete themselves is a know-how system, or database, or whatever terminology they use to describe their brain. In reality, they have one — distributed amongst their people — which they often use to spectacular effect. (For examples, see the FT’s report on Innovative Lawyers, which highlights a range of activities — very few (if any) of which depend on the existence of a KM system.)

Often, however, brains (whether individual or organisational) are used spectacularly poorly. I suspect that this is partly why KM databases fail so often: people use them badly — they don’t consult them, or they don’t volunteer their insights to them. (There are other, better, reasons, but I want to concentrate on this one for now.)

How actively do people use their own brains to reflect and learn from their experiences? Or to seek information or insight that challenges what they think they know? I must confess that I see little of this. (I try to do it myself, but I am sure I have blind spots where I accept a partial view of reality, rather than continuing to seek a better truth.) I am sure this critique and creativity happens, but for most people it is concentrated in areas where they are already experts. For lawyers, that is their area of legal expertise — not the work that goes on around them to support the firm in other ways.

As an example of this, consider the know-how system. Whilst the research I linked to above (and again here) dates from 2007, I still see people advocating such repositories as the cure-all for law firms’ knowledge ailments. At the very least, they ought surely to recognise that there is a contrary view and argue against it?

Another example that comes up repeatedly is the assertion that creative thought depends on using one’s right brain, rather than the analytical left brain. However, this depends on an understanding of neuroscience that was undermined twelve years ago. The origin of the left-right brain model was the research of Roger Sperry, who was awarded the Nobel Prize in 1981. Despite the attractiveness of this model (especially to a range of management authors), neuroscience, like all the sciences, does not stand still — all theories are challengeable.

The watershed year is 1998, when Brenda Milner, Larry Squire, and Eric Kandel published a breakthrough article in the journal Neuron, “Cognitive Neuroscience and the Study of Memory.” Kandel won the Nobel Prize two years later for his contribution to this work. Since then, neuroscientists have ceased to accept Sperry’s two-sided brain. The new model of the brain is “intelligent memory,” in which analysis and intuition work together in the mind in all modes of thought. There is no left brain; there is no right. There is only learning and recall, in various combinations, throughout the entire brain.

Despite the fact that this new model is just as easy to understand, people still fall back on the discredited left-right brain model. Part of the reason, I think, is that they don’t see it as their responsibility to keep up with developments in neuroscience. But surely using 30-year-old ideas about how the brain works brings a responsibility to check every now and then that those ideas are still current.

Something similar happens with urban legends. Here’s a classic KM legend: Stewart Brand on the New College roof beams.

It’s a good story, but not strictly true. In fact the beams had been replaced with pitch pine during the 18th century, the plantation from which the oak came was not planted until a date after the hall was originally built, and forestry practice is such that oak is often available for such a use.

It is not the case that these oaks were kept for the express purpose of replacing the Hall ceiling. It is standard woodland management to grow stands of mixed broadleaf trees, e.g. oaks, interplanted with hazel and ash. The hazel and ash are coppiced approximately every 20-25 years to yield poles. The oaks, however, are left to grow on and eventually, after 150 years or more, they yield large pieces for major construction work such as beams, knees etc.

If we rely too heavily on documents and ideas that are familiar (and comfortable), we run the risk of selling ourselves short. As Simon Bostock has recently pointed out, there is almost invariably more interesting stuff in what we have not written down than in what we have captured (or identified as ‘lost knowledge’). Referring to another KM story (NASA have lost the knowledge that would be necessary to get to the moon again), he points out that what was really lost was not the documentation, but the less tangible stuff.

This means, basically, that even if NASA had managed to keep track of the ‘critical blueprints’, they would have been stuffed. Design trade-offs are the stuff of tacit knowledge. Which usually lives inside stories, networks, snippets of shoptalk, chance sneaky peeks at a colleague’s notes, bitter disputes and rivalries…

In knowledge terms, we’re about to live through another Black Death, another NASA-sized readjustment.

Smart organisations will recognise this in advance and avoid the archaeological dig at the junkyard, the museum and the old-folk’s home.

Archaeology is interesting, and can shed light on past and present activities, but we don’t use Grecian urns to keep food in any more. We use new stuff. The new stuff (whatever it might be) should be our continuing focus. That’s how we should use our brains, and how those supporting effective knowledge use should encourage brain-use in their organisations.

Getting attention — the comedy approach

One of the joys of Twitter is that people one follows often point to things that one would otherwise have missed. It was by that route that I became aware of the work of Chris Atherton. She is a specialist in visual perception, cognition and presentation skills. I first encountered her work when someone pointed me to her Slideshare presentation, “Visual attention: a psychologist’s perspective”, which provides a high-level overview of the issue of cognitive load in presentations.

Chris’s blog is full of valuable insights, as is her twitterstream. Her recent post on giving presentations is a great example. I especially like the way it starts — she was going to send some thoughts about presentations to a friend, but it got out of hand.

So instead of sending my friend an email, I wrote this blog post. It’s ostensibly about the mistakes students make when they give presentations, but really it’s about how the only rules you need to know about giving a good presentation are the ones about human attention.

It’s a great post, and full of really usable advice. Unlike many pontificators about PowerPoint, Chris shuns all those rules about structure.

Knowing which rules to follow and which to break is mostly a matter of practice and experience — which you may not have. So ignore, or at least treat with extreme suspicion, anything that sounds like a rule. Common rules include:

  • Use X lines of text/bullet-points per slide
  • Plan one slide for every N seconds of your talk
  • The 10/20/30 rule

These all sound perfectly sensible, but the trouble with rules is that people cling to them for reassurance, and what was originally intended as a guideline quickly becomes a noose.

Ultimately, good presenters just need to bear one thing in mind:

Concentrate on the rules of attention. The thing you most want during a presentation is people’s attention, so everything you do and say has to be about capturing that, and then keeping it. The rules of attention are more or less universal, easier to demonstrate empirically than rules about specific slide formats, and can be neatly summarised as follows: people get bored easily.

Chris then elaborates on what some of those rules are. I would summarise them here, but that would deprive you of the experience of reading her post and the excellent comments on it. I just want to single out one of those comments because it threw something into sharp focus for me.

At the end of a substantial comment, Martin Shovel remarked:

A thesis should be expressed in the form of a proposition – i.e. a sentence – the simpler and shorter the better! – that asserts or denies something about the content. ‘My holiday in Italy’ isn’t propositional; whereas ‘holidays in Italy are a nightmare’ is. It’s good to think of your proposition in the following way. Imagine you’re about to give your presentation when the fire-alarm suddenly goes off. Now you find yourself with only 30 seconds in which to sum up the point of your presentation – what you say in those 30 seconds should be your proposition.

Reading this, I was reminded of Robert McKee’s Story, and of the experience of watching a good comedian. In his exposition of good screenwriting McKee is clear that the script needs to hold the audience’s attention (the theme of bonding with the audience runs through the book), and that it often does that by tantalising the audience. Here he is at the very start of the book, for example:

When talented people write badly it’s generally for one of two reasons: Either they’re blinded by an idea they feel compelled to prove or they’re driven by an emotion they must express. When talented people write well, it is generally for this reason: They’re moved by a desire to touch the audience.

[…]

No film can be made to work without an understanding of the reactions and anticipations of the audience. You must shape your story in a way that both expresses your vision and satisfies the audience’s desires. The audience is a force as determining of story as any other element. For without it, the creative act is pointless.

A good stand-up comedian often does a similar thing. For example, here (jump to 3’33” for the relevant section) is Alun Cochrane sharing his thoughts on trains, peaches and Red Bull (depending on where you work, this may contain language that is NSFW).

The way he builds the scenario layer by layer retains the audience’s attention and even allows him room for digressions. It is a lesson worth learning. Few comedians or screenplays use bullet points to make their point (apart from the rare examples where bullet points are the point). They command attention by tantalising, by asking questions without obvious answers, and by engaging the audience’s brains.

Getting attention isn’t just a necessity for scriptwriters, comedians or lecturers. I think anyone who has a message to convey, in whatever format (including driving organisational change), needs to be good at this.

Learning from failure or success

In a round-up following KM Australia back in August, Shawn Callahan challenged the notion that we learn best from failure. I think he has a point — the important thing is learning, not failure.

Harris Hawk missing the quarry

Here’s Shawn’s critique.

During the conference I heard some speakers recount the meme, “we learn best from failure.” I’m not sure this is entirely true. Anecdotally I remember distantly when I read about the Ritz Carlton approach to conveying values using stories and I’m now delivering a similar approach to a client on the topic of innovation. Here I’ve learned from a good practice. As Bob Dickman once told me, “you remember what you feel.” I can imagine memory being a key first step to learning. And some research shows it’s more complex than just learning from failure. Take this example. The researchers take two groups who have never done ten pin bowling and get them bowling for a couple of hours. Then one group is taken aside and coached on what they were doing wrong and how they could improve. The other group merely watches an edited video of what they were doing right. The second group did better than the first. However there was no difference with experienced groups.

I wish I could access the linked study — Shawn’s summary and the abstract sound very interesting. Here’s the abstract.

On the basis of laboratory research on self-regulation, it was hypothesized that positive self-monitoring, more than negative self-monitoring or comparison and control procedures, would improve the bowling averages of unskilled league bowlers (N = 60). Conversely, negative self-monitoring was expected to produce the best outcome for relatively skillful league bowlers (N = 67). In partial support of these hypotheses, positive self-monitors significantly improved their bowling averages from the 90-game baseline to the 9- to 15-game postintervention assessment (mean improvement = 11 pins) more than all other groups of low-skilled bowlers; higher skilled bowlers’ groups did not change differentially. In conjunction with other findings in cognitive behavior therapy and sports psychology, the implications of these results for delineating the circumstances under which positive self-monitoring facilitates self-regulation are discussed.
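To make the shape of that comparison concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration — these are not the study’s data — but it shows the kind of pre/post, group-by-group comparison the abstract describes.

```python
import random

random.seed(42)  # reproducible illustration

def simulated_change(n_bowlers, baseline_mean, assumed_improvement):
    """Average change in bowling score for a hypothetical group of bowlers.

    baseline_mean       - assumed pre-intervention average score
    assumed_improvement - assumed effect of the intervention, in pins
    """
    changes = []
    for _ in range(n_bowlers):
        before = [random.gauss(baseline_mean, 15) for _ in range(90)]  # 90-game baseline
        after = [random.gauss(baseline_mean + assumed_improvement, 15) for _ in range(12)]
        changes.append(sum(after) / len(after) - sum(before) / len(before))
    return sum(changes) / len(changes)

# Hypothetical effect sizes, loosely echoing the abstract: unskilled bowlers who
# review what they did right improve by roughly 11 pins; the other groups barely move.
print("Unskilled, positive self-monitoring:", round(simulated_change(30, 120, 11), 1))
print("Unskilled, negative self-monitoring:", round(simulated_change(30, 120, 2), 1))
print("Skilled bowlers, either condition:  ", round(simulated_change(30, 170, 1), 1))
```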

Based on these summaries, I would draw a slightly different conclusion from Shawn’s. I think there is a difference between learning as a novice and learning when experienced. Similarly, the things that we learn range from the simple to the complex. (Has anyone applied the Cynefin framework to learning processes? My instinct suggests that learning must run out when we get to the chaotic or disordered domains. I think we can only learn when there is a possibility of repeatability, which is clearly the case in the simple and complicated domains, and may be a factor in moving situations from the complex to one of the other domains.)

The example Dave Snowden gives of learning from failure is actually a distinction between learning from being told and learning by experience.

Tolerated failure imprints learning better than success. When my young son burnt his finger on a match he learnt more about the dangers of fire than any amount of parental instruction could provide. All human cultures have developed forms that allow stories of failure to spread without attribution of blame. Avoidance of failure has greater evolutionary advantage than imitation of success. It follows that attempting to impose best practice systems is flying in the face of over a hundred thousand years of evolution that says it is a bad thing.

In the burnt-finger scenario, success (not touching a burning match) is equivalent to lack of experience. Clearly learning from a lack of experience will be less effective than learning from (even a painful) experience. By contrast, the bowling example provides people with a new experience (bowling) and then gives them an opportunity to contemplate their performance (which was almost certainly poor). However, whatever the state of their performance, it is clear what the object of the activity is and therefore ‘success’ can be easily defined — ensure that this heavy ball leaves your hand in such a way that it knocks down as many pins as possible by the time it reaches the far end of the lane. As the natural tendency of learners at early stages in the learning process is to concentrate on the negative aspects of their performance (I can’t throw the ball hard enough to get to the end of the lane, or it keeps going in the gutter), it is understandable that a learning strategy which focuses on success could have better results than one that merely explains why the bad things happen.

In the bowling experiment, no difference was found between the negative and positive approaches when experienced bowlers were studied. All this suggests to me is that we need more work in this area, especially considering learning in the complicated or complex domains. Even for experienced bowlers, the set of variables that affect the passage of a bowling ball from one end of the lane to the other is a predictable one. There is not just one cause and effect, but the laws of physics dictate that the relationships between all the causes should have predictable outcomes. By contrast, much of what interests us with regard to knowledge and learning in organisational environments does not depend on simple causal relationships.

In those complicated or complex organisational situations, I think we can learn more from our own failures than other people’s successes (which I think is the point that Dave Snowden is making). I think Shawn is also right to suggest that we can learn from our own successes too. However, that can only be the case if we take the time to analyse exactly what was the cause of the success. So we need a commitment to learning (which brings us back to deliberate practice, amongst other things) and we need the insight into our actions and activities that allows us to analyse them effectively. I think the will to learn is often present, but insight is often missing when we consider successful initiatives, possibly because the greater distance between cause and effect means that we cannot be confident that success is a product of any given cause. On the other hand, it is usually easier to identify causes of failure, and the process of failure also provides an incentive to work out what went wrong.

As for the quality of the lessons learned from failure or success, I am doubtful that any firm conclusion could be drawn that as a general rule we learn better from failure or from success. However, as we become more experienced and when we deal with fewer simple situations, we will inevitably learn more from failure than success — we will have more experience of failure than success, and other people’s successes are of limited or no value. So, although we can learn from our successes, my guess is that more of our learning flows from failure.

It feels like there is more research to do into these questions.

We are all in this together

A couple of links to start with: John Stapp and “Has ‘IT’ Killed ‘KM’?”

Picture credit: Bill McIntyre on Flickr

I don’t have much truck with heroes. Many people do great things, in the public eye and otherwise, and it seems invidious to single certain individuals out mainly because they are better known than others who are equally worthy of credit. However, I make an exception for John Stapp.

Every time you get into a car and put on a seat belt (whether required to by law or not), you owe a debt to Dr Stapp. As a doctor in the US Air Force, he took part in experiments on human deceleration in the late 1940s. During the Second World War it had been assumed that the maximum tolerable human deceleration was 18G (that is, 18 times the force of gravity at sea level), and that death would occur above that level. The Air Force wanted to test whether this was really true, and so a research project was set up. In order to test the hypothesis, an anthropomorphic dummy was to be shot down a test track and abruptly brought to a halt. Measuring equipment would be used to gauge the effect of the deceleration on the dummy. An account of the project is provided in the Annals of Improbable Research. That account indicates that Stapp had little confidence in the dummy.

While the brass assigned a 185-pound, absolutely fearless, incredibly tough, and altogether brainless anthropomorphic dummy — known as Oscar Eightball — to ride the Gee Whiz, David Hill remembers Stapp had other ideas. On his first day on site he announced that he intended to ride the sled so that he could experience the effects of deceleration first-hand. It was a statement that Hill and everyone else found shocking. “We had a lot of experts come out and look at our situation,” he remembers. “And there was a person from M.I.T. who said, if anyone gets 18 Gs, they will break every bone in their body. That was kind of scary.”
But the young doctor had his own theories about the tests and how they ought to be run, and his nearest direct superiors were over 1000 miles away. Stapp’d done his own calculations, using a slide rule and his knowledge of physics and human anatomy, and concluded that the 18 G limit was sheer nonsense. The true figure he felt might be twice that if not more.

In the event, Oscar the dummy was used merely to test the efficacy of the test track and the ballistic sled on which his seat was first accelerated and then decelerated. Once that was done, testing could start.

Finally in December 1947 after 35 test runs, Stapp got strapped into the steel chariot and took a ride. Only one rocket bottle was fired, producing a mere 10 Gs of force. Stapp called the experience “exhilarating.” Slowly, patiently he increased the number of bottles and the stopping power of the brakes. The danger level grew with each passing test but Stapp was resolute, Hill says, even after suffering some bad injuries. And within a few months, Stapp had not only subjected himself to 18 Gs, but to nearly 35. That was a stunning figure, one that would forever change the design of airplanes and pilot restraints.
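To get a rough feel for what figures like 18 G or 35 G actually mean, here is a minimal back-of-the-envelope sketch in Python. The speed and stopping distance are assumptions chosen purely for illustration, not figures from the Gee Whiz programme.

```python
G = 9.81  # standard gravity, m/s^2

def average_deceleration_in_g(speed_mph, stopping_distance_m):
    """Average deceleration, in multiples of g, for a sled brought uniformly
    to rest from speed_mph over stopping_distance_m (uses v^2 = 2*a*d)."""
    v = speed_mph * 0.44704                 # mph -> m/s
    a = v ** 2 / (2 * stopping_distance_m)  # average deceleration, m/s^2
    return a / G

# Illustrative only: stopping from 200 mph in about 12 metres averages roughly 34 g,
# which gives a sense of the forces Stapp eventually subjected himself to.
print(round(average_deceleration_in_g(200, 12), 1))
```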

The initial tests were done with the subject (not always Stapp) facing backwards. Later on, forward-facing tests were done as well. Over the period of the research, Stapp was injured a number of times. Many of these injuries had never been seen before — nobody had been subjected to such extreme forces. Some were more mundane — he broke his wrist twice; on one occasion resetting the fracture himself as he walked back to his office. It is one thing to overcome danger that arises accidentally, quite another to put oneself directly in such extreme situations.

And he did it for the public good.

…while saving the lives of aviators was important, Kilanowski says Stapp realized from the outset that there were other, perhaps even more important aspects to his research. His experiments proved that human beings, if properly restrained and protected, could survive an incredible impact.

Cars at the time were incredibly dangerous places to be. All the padding, crumple zones and other safety features that we now take for granted had yet to be introduced.

Improving automobile safety was something no one in the Air Force was interested in, but Stapp gradually made it his personal crusade. Each and every time he was interviewed about the Gee Whiz, Kilanowski notes, he made sure to steer the conversation towards the less glamorous subject of auto safety and the need for seatbelts. Gradually Stapp began to make a difference. He invited auto makers and university researchers to view his experiments, and started a pioneering series of conferences. He even managed to stage, at Air Force expense, the first ever series of auto crash tests using dummies. When the Pentagon protested, Stapp sent them some statistics he’d managed to dig up. They showed that more Air Force pilots died each year in car wrecks than in plane crashes.

While Stapp didn’t invent the three point auto seatbelt, he helped test and perfect it. Along with a host of other auto safety appliances. And while Ralph Nader took the spotlight when Lyndon Johnson signed the 1966 law that made seatbelts mandatory, Stapp was in the room. It was one of his real moments of glory.

Ultimately, John Stapp is a hero to me because he was true to his convictions — he had a hypothesis and tested it on himself. In the modern business vernacular, he ate his own dogfood. Over and above that, he did it because he could see a real social benefit. His work, and (more importantly) the way he did it, has directly contributed to saving millions of lives over the last 60 years. Those of us who seek to change our environments, whether at work or home, or in wider society, should heed his example. If there are things that might make a difference, we shouldn’t advocate them for others (even dummies) without checking that they work for us.

Now, the other link. Greg Lambert at the 3 Geeks and a Law Blog has extended the critique of IT failing to spot and deal with the current financial crisis by suggesting that KM is equally to blame.

Knowledge Management was originally an idea that came forth in the library field as a way to catalog internal information in a similar way we were cataloging external information. However, because it would be nearly impossible for a librarian to catalog every piece of internal information, KM slowly moved over to the IT structure by attempting to make the creator of the information (that would be the attorney who wrote the document or made the contact) also be the “cataloger” of the information. Processes were created through the use of technology that were supposed to assist them in identifying the correct classification. In my opinion, this type of self-cataloging and attempt at creating an ultra-structured system creates a process that is:

  1. difficult to use;
  2. doesn’t fit the way that lawyers conduct their day-to-day work;
  3. gives a false sense of believing that the knowledge has been captured and can be easily recovered;
  4. leads to user frustration and “work around” methods; and
  5. results in expensive, underutilized software resources.

In a comment on that post, Doug Cornelius says:

I look at KM 1.0 as being centralized and KM 2.0 as being personalized. The mistake with first generation KM and why it failed was that people don’t want to contribute to a centralized system.

We have to be careful, as Bill Ives points out, not to throw out the baby in our enthusiasm to replace the 1.0 bathwater with nice fresh 2.0 bubbles. However, Greg and Doug do have a point. We made a mistake in trying to replicate the hundreds or thousands of databases walking round our organisations with single inanimate repositories.

The human being is an incredible thing. It comes with a motive system and an incredibly powerful (but probably unstructured) data storage, computation and retrieval apparatus. Most (probably all) examples of Homo sapiens could not reproduce the contents of this apparatus, but they can produce answers to all sorts of questions. The key to successful knowledge activities in an organisation, surely, is to remember that each one of these components adds a bit of extra knowledge value to the whole.

Potentially, then, we are all knowledge heroes. When we experiment with knowledge, the more people who join in, the better the results. And the result here should be, as Greg points out, to “help us face future challenges.” We can only do that by taking advantage of the things that the people around us don’t realise that they know.

The conundrum focus

A discussion is currently taking place on the ActKM mailing list about the theoretical underpinnings of knowledge management. Joe Firestone, reaching into the language of philosophy, has consistently taken the view that KM only makes sense when related to the need to improve underlying knowledge processes:

I see [knowledge management] more as a field defined by a problem, with people entering it because they’re interested in some aspect of the problem that their specific knowledge seems to connect with.

Unfortunately, in more quotidian language, the word ‘problem’ suggests difficulties that need to be overcome, but sometimes KM is actually not dedicated to overcoming difficulties but to taking maximum advantage of opportunities. When Joe refers to a ‘problem’ I think he means it as a puzzle or conundrum: “how do we fill this knowledge gap?” Stated thus, I think this is a less objectionable aim for KM.

What about the nature of the conundrums that face organisations? Rightly, in linking to an earlier post of mine, Naysan Firoozmand at the Don’t Compromise blog suggested that there was a risk of vagueness in my suggestion (channelling David Weinberger) that KM might be about improving conversations in organisations.

Which is all true and good and inspiring, except I want to wave my arm about frantically like the child at the back of class and shout ‘But Sir, there’s more … !’. There’s a difference between smarter and wise that’s the same difference as the one between data and information: the former is a raw ingredient of the latter. And – when it comes to organisational performance and leadership (which is our focus here, rather than KM itself) – simply being smarter isn’t the whole story. Clever people still do stupid things, often on a regular (or worse, repeated) basis. Wise people, on the other hand, change their ways.

This is a fair challenge. Just improving the conditions for exchange of knowledge is not enough on its own. (Although I would argue that it is still an improvement on an organisation where conversations across established boundaries are rare.) There are additional tasks on top of enabling conversation or other knowledge interactions, such as selecting the participants (as Mary Abraham made clear in the post that started all this off), guiding the interaction and advising on possible outcomes.

Those additional tasks all help to bring some focus to knowledge-related interactions. The next issue relates to my last blog post. In doing what we do, we always need to ask where the most value can be generated. The answer to that question, in part, is driven by the needs expressed by others in the organisation — their problems or conundrums. However, not all problems can be resolved to generate equal value to the organisation.

The question, “what value?” is an important one, and reminds us that focus on outcomes is as important as avoiding vagueness in approach. How can we gauge how well our KM activities will turn out? Some help is provided, together with some scientific rigour, by Stephen Bounds (another ActKM regular) who has created a statistical model for KM interventions using a Monte Carlo analysis. His work produces an interesting outcome. It suggests that on average, the more general a KM programme, the less likely it is to succeed. In fact, that lack of success kicks in quite quickly.

To maximise the chance of a course of action that will lead to measurable success, knowledge managers should intervene in areas where one or more of the following conditions hold:

  • occurrences of knowledge failures are frequent
  • risks of compound knowledge failure are negligible or non-existent
  • substantial reductions in risk can be achieved through a KM intervention (typically by 50% or more)

Where possible, the costs of the intervention should be measured against the expected savings to determine the likelihood of benefits exceeding KM costs.

So: simple, narrowly defined KM activities are more likely to succeed, all other things being equal. Success here is defined as it should be: making a contribution to reductions in organisational costs (or, potentially, improving revenue). Stephen’s analysis is really instructive, and could be very useful in steering people away from “one size fits all”, organisation-wide KM programmes.
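I have not seen the detail of Stephen’s model, but the flavour of a Monte Carlo approach can be conveyed with a minimal sketch in Python. All of the parameters below — failure frequency, cost per failure, intervention cost, the assumed risk reduction — are invented for illustration rather than taken from Stephen’s work; the point is simply that you simulate many possible years and count how often the intervention pays for itself.

```python
import random

random.seed(1)  # reproducible illustration

def simulate_year(failure_rate, cost_per_failure, risk_reduction, intervention_cost):
    """Net benefit of a hypothetical KM intervention in one simulated year.

    failure_rate      - expected number of knowledge failures per year
    cost_per_failure  - average cost of each failure
    risk_reduction    - fraction of failures the intervention prevents (0..1)
    intervention_cost - annual cost of running the intervention
    """
    # Simple binomial stand-in for the number of failures (mean = failure_rate).
    failures = sum(1 for _ in range(failure_rate * 2) if random.random() < 0.5)
    prevented = sum(1 for _ in range(failures) if random.random() < risk_reduction)
    return prevented * cost_per_failure - intervention_cost

def probability_of_net_benefit(trials=10_000, **params):
    """Fraction of simulated years in which the intervention more than pays for itself."""
    return sum(1 for _ in range(trials) if simulate_year(**params) > 0) / trials

# A narrow intervention aimed at frequent failures vs a broad, diffuse programme
# (all numbers invented):
print(probability_of_net_benefit(failure_rate=20, cost_per_failure=5_000,
                                 risk_reduction=0.6, intervention_cost=30_000))
print(probability_of_net_benefit(failure_rate=5, cost_per_failure=5_000,
                                 risk_reduction=0.3, intervention_cost=30_000))
```

Crude as it is, a model of this kind makes the pattern Stephen describes visible: the narrowly targeted intervention clears its costs in most simulated years, while the diffuse one essentially never does.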

In sum, then, our work requires us to identify the conundrums that need to be solved, together with the means by which they should be addressed, and to define the outcomes as clearly as possible for the individuals involved and for the organisation. We cannot hope to resolve all organisational conundrums by improving knowledge sharing. So how do we choose which ones to attack, and how do we conduct that attack? Those are questions we always need to keep in mind.

Why are we doing this KM thing?

I was reading Strategic Intuition (there will be more on this fascinating book at a later date) on the train home yesterday, and was prompted to ask myself an odd question: “why are we doing knowledge management? What will be different, and for whom?”

The passage that made me ask this question was a description of a firefighter’s decision-making process.

Never once did he set a goal, list options, weigh the options, and decide among them. First he applied pressure, then he picked the strongest but newest crew member to bear the greatest weight of the stretcher, and then in the truck they put the victim into the inflatable pants. Formal protocol or normal procedure certainly gave him other options — examine the victim for other wounds before moving him, put the victim into the inflatable pants right away, and assign someone experienced to bear the greatest weight of the stretcher — but Lieutenant M never considered them.

The researcher whose work is described here (Gary Klein) started out with the hypothesis that the decision-making process would conform to the model of a defined goal, followed by iterative consideration of a series of options. However, he rapidly discovered that this model was wrong. Instead, what he saw in the experts that he studied (not only firefighters, but soldiers in battle, nurses, and other professionals) was overwhelmingly intuitive weighing of single options. (There is more in the book about why this is.)

We often talk about decision-making processes, and one of the goals of knowledge management is often to improve those processes by, for example, ensuring better access to information, or by honing the processes themselves (the HBR article by Dave Snowden and Mary Boone on “A Leader’s Framework for Decision Making” is an excellent example of the latter). Although these activities may well improve decision-making, those decisions are ultimately made by people — not processes. The question I posed for myself, then, was: what impact does KM have on people? Exactly how will they be better at decision-making as a result of our work?

My instinctive answer is that I want them to become experts (and therefore able to act swiftly and correctly in an emergency) in whatever field they work in. That means that we should always return our focus to the people in our organisations, and respond to their needs (taking into account the organisation’s direction and focus), rather than thinking solely about building organisational edifices. The more time that is spent on repositories, processes, structures, or documentation, the less is available for working with people. In becoming experts in our own field, we also need to be more instinctive.

Coincidentally, I read two blog posts about experts over the weekend. The first was Arnold Zwicky bringing some linguistic sanity to counter fevered journalistic criticism of ‘experts’ and ‘expertise’.

Kristof is undercutting one set of “experts”, people who propose to predict the future. Lord knows, such people are sitting ducks, especially in financial matters (though I believe they do better in some other domains), and it’s scarcely a surprise that so many of them get it wrong.

Other “experts” offer aesthetic judgments… and still others exhibit competence in diagnosis and treatment…, and still others simply possess extensive knowledge about some domain…

The links between these different sorts of expert/expertise are tenuous, though not negligible. Meanings radiate in different directions from earlier meanings, but the (phonological/orthographic shapes of the) words remain. The result is the mildly Whorfian one that people are inclined to view the different meanings as subtypes of a single meaning, just because they are manifested in the same phonological/orthographic shapes. So experts of one sort are tainted with the misdeeds of another.

Expertise that results from real experience, study, insight, rationality and knowledge does not deserve to be shunned as mere pontification. It can save lives.

The other blog post, by Duncan Work, is a commentary on a New Scientist report about how people react to advice they believe to be expert. It appears that key areas in their brains simply turn off — they surrender the decision-making process to the expert.

This phenomenon has both adaptive and non-adaptive effects.

It is evolutionarily adaptive by being a “conformity-enforcing” phenomenon that can kick in when a large group needs to quickly move in the same direction in order to survive a big threat.   It’s also adaptive when the issues are extremely complex and most members of the population don’t have the knowledge or experience to really evaluate the risks and make a good decision.

It is evolutionarily non-adaptive when there is still a lot of confusion around the issue, when the experts themselves don’t agree, and when many experts are guided by narrow interests that don’t serve the group (like increasing and protecting their own personal prestige and wealth).

The real problem is not just that many of the crises now facing businesses are founded in actions, decisions and behaviours that few people understand. It is that we make no distinction between different categories of expert, and so we follow them all blindly. At the same time, as the New York Times op-ed piece critiqued by Zwicky illustrates, many of us do not actually respect experts. In fact, what we don’t respect are people who style themselves experts, but who are actually driven by other interests (as Work points out).

So if our KM work is at least in part to make people into experts, we probably need to rescue the word from the clutches of people who profess expertise without actually having any.

Power isn’t everything

I have been reacquainting myself with some of the materials science reading that I did as part of my Physics studies over 30 years ago. My brain is too far removed from the maths to deal with the more technically complex stuff, but there is a classic pair of books by J. E. Gordon that are easily accessible to the lay reader: The New Science of Strong Materials, or Why You Don’t Fall Through the Floor and Structures, or Why Things Don’t Fall Down. Reading the latter, I was struck by some of the insights in the chapter on shear and torsion, more from a historical perspective than an engineering one.

Gordon reflects on the development of the aeroplane, and remarks that some aspects of the new aeronautical engineering were easier to tackle than others. 

The aeroplane was developed from an impossible object into a serious military weapon in something like ten years. This was achieved almost without benefit of science. The aircraft pioneers were often gifted amateurs and great sportsmen, but very few of them had much theoretical knowledge. Like modern car enthusiasts, they were generally more interested in their noisy and unreliable engines than they were in the supporting structure, about which they knew little and cared less. Naturally, if you hot the engine up sufficiently, you can get almost any aeroplane into the air. Whether it stays there depends upon problems of control and stability and structural strength which are conceptually difficult. (p.259)

He then goes on to tell the story of the German monoplane, the Fokker D8, which initially had an unfortunate habit of losing its wings when pulling out of a dive. As a result, the Germans could not capitalise on its obvious speed advantage over the British and French biplanes. Only once Fokker had analysed the effect of the relevant forces on the wings did he realise that the loads imposed on the plane were causing the wings to twist in a way that could not be controlled by the pilots. Once the design of the wings was changed so that they no longer twisted, the D8 served its purpose much more effectively.

Gordon makes a similar observation with regard to automobile development.

The pre-war vintage cars were sometimes magnificent objects, but, like vintage aircraft, they suffered from having had more attention paid to the engine than to the structure of the frame or chassis. (p.270)

Reading this, I wondered whether organisational KM efforts have had similar shortcomings. Certainly, in many businesses, the KM specialists proceed by trial and error, rather than careful scientific study. There is also a tendency (driven in part by the need for big strong metrics and RoI) to focus on things like repositories and databases. Are these the powerful engines of KM, destined to shake apart when faced with conceptually difficult structural challenges? I suspect they may be.

Instead of concentrating on raw power, we need to work out what our KM activities actually do to the structure of the organisation, and how they affect the parts different people play in making the business a success. In doing that, we may find that small changes make a significant difference. It is not an easy task, but it is a worthwhile one.

Recognition and understanding

It is important to us that people listen to our needs, understand them and adapt to them. We know this about ourselves, but very few of us can naturally empathise with others. One reason for this, I think, is that human beings are almost infinitely complex and yet our brains cannot cope with this variety.

So what do we do? We create archetypes. We categorise. There are even people who classify themselves (and others) according to whether they were a first, second or third child (fourth children fall into the same category as the first-born). I wonder whether this is because in small communities (with close genetic links) such generalisations are likely to be accurate. As our circles of acquaintance become larger, the weaknesses of these generalisations become more obvious; but because we also struggle to do without them, we come to depend on them even more heavily.

It is with these thoughts in mind that I read Graham Durant-Law’s recent blog post, and remembered Dave Snowden’s short rant against Myers-Briggs. They both point to the complete absence of scientific evidence for summing people up in a small number of categories. Graham also poses a number of questions:

Why do these modern archetypes have credibility and how do they help us? Why are they any better than Jung’s original archetypes? Where are they best used and what problems do they solve?

I can’t answer any of these, but I am interested in the way in which we think they might help us. Going back to my starting point, we want to be able to understand people (whether our managers, our team, our clients and customers, or our families) in order to work better with or for them, or to get along with them as well as possible. Doing that well is excessively hard. However, by referring to archetypes or categories we can make a reasonable attempt at empathy (especially for the relationships where a ‘quick fix’ will do).

We are fooling ourselves. If any of these relationships is worth pursuing, it must be worth the real effort that it takes to recognise someone as an individual with unique needs, desires, concerns, preoccupations and quirks. Archetypes and categories only conceal that reality.