Fighting the right battles

Perhaps a more appropriate title for this post might be “not fighting the wrong battles.”

Over the past few weeks, I have come to a realisation that at various points in my career I have spent too long trying to achieve things that were actually impossible.

They weren’t impossible because they couldn’t be done. They were impossible because something about the organisation made them so.

Sometimes these were small projects, sometimes major programmes of change. The detail is irrelevant. The point is that, even though I could persuade important people to join me on the journey, I didn’t spot that success depended on a number of factors that would never fall into place.

So what are the right battles? Simple: the ones that are genuinely winnable. Anything else is a waste of effort. Hope rarely triumphs against institutional inertia. No matter how much someone wants something to happen, if success depends on someone else who doesn’t care or who actively works against it, it will never happen.

What does winnable look like? The key is to be sure about the essential components. The following questions should help.

  • What is the bare minimum to demonstrate success?
  • What resources are needed to make the project work?
  • Will the support you are promised really materialise, or are people just paying lip-service to your ideas?

The first question is probably the most important, but the hardest to answer. It’s important because you have to know when a project comes to an end. Goals like ‘creating a knowledge culture’ are really difficult because they seem to promote a worthwhile end, but it is impossible to say when the job is done. If something can’t be said to be complete, its success or failure cannot be assessed. What measurable change is there?

Breaking a vague notion into measurable components then leads to the next two questions, which are simply a means of gauging how likely it is that the end might be reached. Few projects, especially in the knowledge context, can be completed without significant assistance from other areas of the organisation:

  • IT: is this a technology project that needs to fit with other things that the business is demanding?
  • HR: does your project affect the way incentives are managed across the organisation? Is that an easy change, or something that demands significant realignment?
  • Finance: few things are free — what other costs are coming up, and how are they prioritised?

The reality is that few organisations can do everything that might be suggested. Projects will often be dropped because there isn’t the resource this year or because too many other things are happening in a similar area. But if the things you want to achieve keep hitting resistance, it is more likely that your goals don’t fit what the organisation is comfortable with. In other words, you’re trying to achieve the impossible.

What to do in such a situation? In general, there are only three real options when faced with difficult challenges: accept things; change them; or leave.

  • If change in the organisation is impossible, can your goals be achieved in different ways?
  • If no change is possible, is there any merit in staying just to maintain the status quo (and possibly make some minor tweaks)?
  • If neither of these is attractive, take the decision to leave as quickly as possible. The sunk cost fallacy applies.


Why do you want to ‘do KM’?

My recommendation to anyone new to knowledge management is to start by reading and reflecting on David Gurteen’s presentation to KM Middle East in 2011, “Don’t do KM.”

Despite David’s high profile, and the fact that this message has been repeated by him and many others over the past four years, I still see the same mistake being made. But it’s now being made at an organisational level, and that causes problems further down the line.

Here’s an example. A law firm has decided that it should have a knowledge management function. So headhunters are briefed to find someone to lead that function. Sadly, neither the firm nor the headhunters understand what is needed.

The firm probably has a sense of what might need fixing, but they don’t know what measures could be taken. The headhunters have a better understanding of the ambit of traditional KM, but may not be allowed any insight into the firm’s real needs.

The result: a role description that indicates how important KM is (“a strategic function”), but also lists various ‘information assets’ that need to be managed. In short, a description of KM that limits the function to pre-defined boundaries separated from the performance of the firm.

In reality, of course, a role description can be ignored. But it acts as an anchor. Presented like this, it is difficult for a new recruit to persuade the firm that they shouldn’t ‘do KM’. It also means that investment in change, or in unexpected activities that would make a real difference, is harder to justify.

By contrast, advertisements for leadership roles in business development and marketing are much more likely to refer to the need for things like “new and innovative approaches on winning business”, “driving forward pioneering initiatives”, or “distinctive client experience”. Even though firms may have a better idea of what might be involved in this discipline, they rarely dictate at the outset in detail what these roles should do. The result is that these recruits are trusted much more to lead the firm (not just their own teams) in the right direction.

Just as people with ‘knowledge management’ in their titles should avoid ‘doing KM’, firms should avoid thinking that they need KM. They don’t. They may need to use their knowledge better because they have identified a problem. That’s a much better starting point for recruitment. You don’t need a Knowledge Director or CKO just because everyone else has one.

Spending time and money

In my last post, I mentioned the stresses that a GC might be under and how that might manifest itself as a shortage of time. Something similar is at play when one considers financial constraints. Often those who have money to spend have very little or no capability to make more. Anyone who makes demands on people’s time or money needs to be aware of the limited nature of those resources, and what else is competing for them.

Office frozen in time (at the Highland Folk Museum)

A number of thoughts flow from this observation, which may be useful for people offering legal and other services as well as those providing internal business support.

What do they get in return?

If you do something in the expectation that someone else will commit time or money to it (or both), they need to feel that they will get something in return. This is most obviously expressed in financial terms as a return on investment, but that is only the most tangible form. At the other extreme, broadcasters and the film and music industries (for example) create products that take time to consume and often have to be paid for. In return for investing their time and money in a film, TV programme, book, or album, the audience need to feel that their lives have been enhanced in some way. They need, in Lord Reith’s words, to be informed, educated or entertained.

Crucially, investing in one thing often excludes the possibility of investing in another. If I choose to watch Mad Men (as I do), I will spend at least 92 hours (and probably more) of my time doing so. Those 92 hours can’t be spent doing something else — I can’t read a book, do some work, or go for a drive at the same time. Likewise, the financial cost of acquiring the right to watch the programme (by DVD or pay-TV subscription) means that I have reduced my capacity to buy other things. The impossibility of spending twice has repercussions on both sides of the equation — for those spending time/money and those demanding it.

Sometimes people know what they get when investing in something. This may not be conscious — slumping in front of a mindless TV show with a glass of wine after a hard day’s work may seem worthless, but it provides a valuable opportunity to relax and unwind. Sometimes the return on investment needs to be spelled out. Time spent on marketing may feel like a waste, for example, but not doing it will almost inevitably lead to a drop in income.

By contrast, people seem to be pretty poor at evaluating investment choices. I have referred previously to the work of Dan Ariely and other behavioural economists on choice and different kinds of value. In particular, people overvalue things they already have compared to future goods. That generally makes it hard to persuade people to stop doing something inefficient and start doing something new and better.


Learning (or not) from past spending patterns

One significant consequence of the way we value our expenditure of time and money, and yet fail to understand its cost, is a tendency to misunderstand change. This may have an impact on sellers as well as buyers.

A good example of this can be seen in the music business. For decades, recorded music was a dominant form of entertainment. From the 1950s until well into the 1990s, significant amounts of people’s leisure budget would be committed to vinyl or (later) CDs. As a result, some (by no means all) recording artists and others in the music industry became quite wealthy. The possibility of riches attracted some to the business. Now, with the growth of streaming services such as Spotify, people can listen to recorded music without having to own a copy. As a result, it appears that less money accrues to the original creators than they have been used to.

One response to this drop in income is to reject the whole model — to withdraw from streaming services altogether. Another is to claim that such services should recompense artists at a higher level than they do currently. I suspect that neither of those options will work.

The problem is that, in general, people just don’t have as much money to spend on recorded music as they did. It is rare now to see people regularly “buying two CDs, a DVD and maybe a book – fifty quid’s worth.” Instead they have to commit £20-40 per month on a mobile phone contract, £10 to Spotify, even more for cable TV and broadband subscriptions. There just isn’t the money available to go back to the old way of doing things. Artists demanding that the clock should be turned back are wasting their breath. The simple economic fact is that people don’t generally value music as much as they used to.

Something similar happens within organisations. Without improvements in profitability or increases in income, the amount of money available for investment is finite. When external advisers or internal support teams demand more, their demands will simply be ignored. That’s why businesses hate it when legal costs overrun. It is difficult to ignore those demands for payment, but the fact that costs have escalated is a clear indication that the lawyers have failed to understand how the client’s business works. Historically, lawyers’ demands for sustained income have been much more difficult to ignore than recording artists’. As new ways of providing legal support move to the mainstream, it will become increasingly easy for clients to choose cheaper (and often better) ways of resolving issues than using traditional law firms.

What’s coming next?

Some folk in the music business appear to have been caught unawares by the changes in the way that people consume their product, and the consequent impact on their income. In fact, services like Spotify were a natural result of developments that they tried to fight (such as illicit peer-to-peer services like Napster and Limewire) and changes in other industry sectors (such as the growth of smartphones and broadband internet services).

New ways of finding and consuming music showed customers how much easier it could be to listen to what they wanted — no need to go to a shop to buy a CD to play in an expensive player of some kind. They also introduced people to the idea that music could be cost-free, albeit illegally. Once those ideas became more mainstream (as ideas tend to), they became hard to rebut. On that analysis, Spotify is actually an improvement. The options were that artists would either not be reimbursed at all for their efforts, or be paid at a much lower rate than they would prefer.

Had musicians analysed the financial impact of novel areas of consumer spending, they might have realised that their command of a large proportion of that budget was threatened. It appears that few did. By contrast, the music labels and distributors did understand. They did deals with Spotify and the like, so that those services could flourish legally. Those deals had to be done against the background of the streaming services’ likely revenues from subscriptions and advertising, and were therefore informed by how much consumers were realistically going to spend on music.

It is fair to say that few people could have predicted precisely what would happen to the music industry. However, understanding what was going on in and around it should have led anyone to the conclusion that there would be less money available than there had been previously. I suspect that people are still spending as much time as before actually listening to music, so I can see how musicians may be aggrieved that listeners are getting more for their money than they did previously. That may just mean that the music industry was particularly lucky in the past and that luck has now run out.

There are lessons for other sectors where the pace of change has been a bit slower. Everything you do that costs someone time or money is contingent on them continuing to agree to that expenditure. Keep an eye on the things that might reduce their interest in the way you do things.

  • Don’t dismiss the upstarts competing directly for your work (even if they are doing so illicitly). The likelihood is that if people like what they do, it will form some part of the future.
  • Be aware of how your customers/clients are spending their time and money. If more interesting things are happening somewhere else, you need to move to be with them, whatever it costs you. The alternative is the equivalent of the £3 CD or the £5 DVD.
  • Consider the possibility that the riches of the past were abnormal, and that the future may be much leaner. Don’t depend on a return to good times.

Be irrational about irrationality

Given my focus here on challenging traditional assumptions about knowledge and the law, it would be negligent of me not to draw attention to a concise Scientific American blog from last month that points up a key flaw in much popular writing about the psychology of decision-making.

The shortcomings of our rationality have been thoroughly exposed to the lay audience. But there’s a peculiar inconsistency about this trend. People seem to absorb these books uncritically, ironically falling prey to some of the very biases they should be on the lookout for: incomplete information and seductive stories. That is, when people learn about how we irrationally jump to conclusions they form new opinions about how the brain works from the little information they recently acquired. They jump to conclusions about how the brain jumps to conclusions and fit their newfound knowledge into a larger story that romantically and naively describes personal enlightenment.

This is not a new problem, but it is enhanced by the proliferation of this kind of literature, and the way that the message of these books is amplified by blogs and tweets. I confess to being part of this chorus, so this is a conscious effort to help myself avoid being sucked into unwavering belief.

Ultimately, we need to remember what philosophers get right. Listen and read carefully; logically analyze arguments; try to avoid jumping to conclusions; don’t rely on stories too much. The Greek playwright Euripides was right: Question everything, learn something, answer nothing.

KMers can do anything: is that wise?

Ron Friedmann has spotted a trend for law firm KM people to branch into new activities, such as legal project management, alternative fee arrangements, and so on. He also offers a hypothesis for this:

So why does KM continue to expand beyond its core remit today? My theory is that KM professionals span multiple disciplines and think laterally. They can handle complex problems that fall outside the boundaries of other support functions. Moreover, successful KM professionals have gained the confidence of lawyers; many come from the practice; others have worked closely with lawyers for a long time. Whatever their background, they develop excellent rapport with partners and practice groups. Of course, many are lawyers and in the caste system that defines BigLaw, that is a big plus.

A number of people have supported his observations in comments on the post, which Ron has extracted into a separate post. For example, Patrick DiDomenico (CKO at Gibbons PC and blogger at LawyerKM) says:

I’m head KM (CKO) at my firm, but I also manage the library and litigation support department, have an active role in our E-Discovery Task Force, and am the social media evangelist (among other things). My role as a former practicing litigator at my firm has a lot to do with what I now do for the firm. The fact that I do these things does not make them “KM activities.” Rather, these are some of the things that the head of KM happens to do.

And Meredith Williams of Baker Donelson agrees:

These days CKOs and KM professionals are being asked to expand their roles further and further in addition to continuing many traditional KM tasks. As Patrick referenced, I too aid in multiple projects that are not traditional KM such as Social Media, Competitive Intelligence, E-Discovery, Legal Project Management, Alternative Fee Arrangements and Mobility.

In part, I think what Ron describes is in fact a change in what we understand to be part of KM (in any organisation). Social media is an example — one of the things that traditional document- and repository-based KM spectacularly failed to do was to draw people together to share their knowledge. Various forms of social media now allow us to address that challenge. From that perspective, law firms are just the same as other organisations.

However, there are a couple of other interpretations which I find more troubling.

In The End of Lawyers? Richard Susskind bemoans the tendency lawyers have to describe their jobs by reference to anything other than advising clients on the law. He talks of lawyers referring to themselves more as project managers or commercial advisors. (I still need to retrieve my copy, so I’m afraid I can’t provide a better quotation or reference.) Putting aside the question whether they are actually any good at those roles, it is odd that many lawyers would prefer to be thought of as gifted amateurs turning their hands to any odd job that comes along, rather than talented and focused professionals — masters of their own specialisms. That tendency really comes to the fore in knowledge roles. Amongst all the functions that modern law firms need to support their core fee-earning function (take your pick from HR, finance, marketing, IT, office services, sales, building and facilities management, training, library, etc.) the knowledge team is often alone in recruiting predominantly from the ranks of practising lawyers. In all those other areas, firms are willing to accept the advice and insight provided by functional specialists, but it appears that the non-legal KMer has yet to make an appreciable impact. 

One consequence of this ‘lawyers can do anything’ attitude is that the firm is less likely to get the benefits that come from the wider perspective and expertise of the knowledge professional. The corresponding benefit is that the knowledge support the firm gets reflects what lawyers actually need. I think there is merit on both sides, but there is a risk that a firm using lawyers in these roles may find that they learn little from the interesting approaches to knowledge development and use in other organisations and contexts. They may just get the usual precedents and know-how.

(By coincidence, Tim Bratton opens a similar can of worms when he suggests that firms could use lawyers in a dedicated client relationship role:

Is there a role in large City law firms for a lawyer who has no billing targets but whose role is to act effectively as an account manager for a small number of major clients?  I think there is.  But this would only work if it is a real role, it cannot be farmed out to business development or marketing.  To succeed from a client perspective it has to be a role undertaken by a lawyer.

As a general counsel, Tim may favour lawyers. However, not all law firm clients are lawyers — many are finance directors, bankers, commercial managers, company secretaries. Should firms employ relationship managers that match those roles too? And are clients prepared to accept the greater (albeit hidden) cost of employing lawyers as relationship managers? In fact, client relationship management is widely practised in other professional services firms (especially advertising, for example). Why should firms turn their back on that expertise or develop it themselves at huge cost?)

My other concern is that when firms take the view that their knowledge people can be directed to any new project (possibly with only a tenuous link to their core knowledge focus) they aren’t really demonstrating respect for those people or their activities. If your role is valued by the organisation, it will protect you in it. The procurement manager who monitors the firm’s supplier relationships and negotiates hard to keep the costs of contracts down is unlikely to find themselves diverted into managing working capital, even if that role uses very similar skills. When a firm asks their knowledge leader to take on consideration of the firm’s billing structures and alternative fee arrangements, I wonder why it was felt that (a) the knowledge work could be scaled down and (b) the expertise of the firm’s own accountants and business managers could be ignored in favour of the gifted amateur. A callous interpretation might be that in fact the firm does not value the knowledge function at all, and so its senior people are fair game for diversion to other (probably equally unvalued) projects.

On the other hand, the response might be that these new activities are actually highly valued and so it is important for a senior, respected person to lead them. This is a compelling argument, but it calls to mind the advice to CEOs that I found in an HBR blog last year. In the fourth of a series of conversations on personal productivity with Bob Pozen, chairman emeritus of MFS Investment Management and senior lecturer at Harvard Business School, he was asked “How do you decide what to spend your time on when you’re the boss?” His response was interesting:

Top executives usually say they set their priorities and then figure out how to implement them. But in this process many executives make a critical mistake. I’ve noticed this when I’ve mentored new CEOs. They say, “Here are the top five priorities for the company. Who would be the best at carrying out each priority?” Then they come up with themselves as the answer in all five areas. It might be the correct answer, but it’s the wrong question.

The question is not who’s best at performing high-priority functions, but which things can you and only you as the CEO get done? If you don’t ask yourself that question, your time allocations are bound to be wrong.

For Pozen, then, senior people should stick to the things that truly need their attention. To do otherwise dilutes their attention and limits the opportunities for development of others in the organisation. He actually extends this principle further down the business:

What about those of us who aren’t CEOs?

The key, I’ve found, is to become messianic about the principle that everybody owns their own space. This is the human resources analogy to bottom-up investing.

Under this approach, every employee is viewed as the owner of a small business — his or her division, or subdivision or working group; the performance of this unit is his or her responsibility. As the boss, my role is to provide my reports with resources, give them guidance and help them do battle with other people in the broader organization. But they own their own unit.

If law firms’ knowledge leaders are really to be respected and to ‘own their own unit’ they need to be protected from distractions that take them away from that core responsibility. They and the firm get the best results that way.

Another response might be that some of these new projects are experimental, and may not persist. That is fair: why invest in something if it may be temporary? But look at this from a different angle: if you aren’t investing in it, might you be guaranteeing that it will be temporary? Here’s an alternative approach: given that (as ever) law firms are facing many of these issues some time after other organisations, why not buy in expertise on a fixed (but renewable) contract? If you want to explore how matters might be managed or billed differently, why not take on people from the major consulting businesses or accountancy firms to see if their experiences in non-legal professional services firms might be transferable? If you are, in Pozen’s terms, messianic about people owning their own space, and you are exploring a new space, get a new person to lead the exploration.

Knowledge leaders should, by all means, explore new ways of developing and using knowledge in the firm (and they may be able to contribute that expertise to the new activities), but (a) that should not be seen as a change in KM itself and (b) respect for the knowledge function is best expressed by not drawing its people into unrelated new projects.

If I Only Had a Brain — how to become the wisest in Oz

Last week, a random tweet by James Grandage prompted a chain of thought.

My response was to suggest that he had it already: a brain.

On reflection, however, it appears that James was seeking what many firms want — a brain for the whole organisation. To be able to create and recall institutional memories, to process sensations gathered by ears and eyes and to use those sensations to engage with other organisations (or people) and their brains.

In the name of knowledge management, many organisations have created databases and repositories that are intended to operate as brains as far as the technology will allow. Unfortunately, their actual performance often falls somewhat short of this promise. Why might this be?

One answer is suggested by the experience of the Scarecrow in L. Frank Baum’s The Wonderful Wizard of Oz. You will recall that he accompanied Dorothy on her journey to Oz in order to ask the Wizard for a brain, because that was what he wanted above all else. As they travel down the Yellow Brick Road, the Scarecrow shows by his actions that in fact he has a brain, and can use it. When they get to Oz, he is recognised as the wisest man there.

Many law firms are on a similar journey. They labour in the belief that all they need to complete themselves is a know-how system, or database, or whatever terminology they use to describe their brain. In reality, they have one — distributed amongst their people — which they often use to spectacular effect. (For examples, see the FT’s report on Innovative Lawyers, which highlights a range of activities — very few (if any) of which depend on the existence of a KM system.)

Often, however, brains (whether individual or organisational) are used spectacularly poorly. I suspect that this is partly why KM databases fail so often: people just use them badly — they don’t consult them, or they don’t volunteer their insights to them. (There are other, better, reasons, but I want to concentrate on this one for now.)

How actively do people use their own brains to reflect and learn from their experiences? Or to seek information or insight that challenges what they think they know? I must confess that I see little of this. (I try to do it myself, but I am sure I have blind spots where I accept a partial view of reality, rather than continuing to seek a better truth.) I am sure this critique and creativity happens, but for most people it is concentrated in areas where they are already experts. For lawyers, that is their area of legal expertise — not the work that goes on around them to support the firm in other ways.

As an example of this, consider the know-how system. Whilst the research I linked to above (and again here) dates from 2007, I still see people advocating such repositories as the cure-all for law firms’ knowledge ailments. At the very least, they ought surely to recognise that there is a contrary view and argue against it?

Another example that comes up repeatedly is the assertion that creative thought depends on using one’s right brain, rather than the analytical left brain. However, this depends on an understanding of neuroscience that was undermined twelve years ago. The origin of the left-right brain model was the research of Roger Sperry, who was awarded the Nobel Prize in 1981. Despite the attractiveness of this model (especially to a range of management authors), neuroscience, like all the sciences, does not stand still — all theories are challengeable.

The watershed year is 1998, when Brenda Milner, Larry Squire, and Eric Kandel published a breakthrough article in the journal Neuron, “Cognitive Neuroscience and the Study of Memory.” Kandel won the Nobel Prize two years later for his contribution to this work. Since then, neuroscientists have ceased to accept Sperry’s two-sided brain. The new model of the brain is “intelligent memory,” in which analysis and intuition work together in the mind in all modes of thought. There is no left brain; there is no right. There is only learning and recall, in various combinations, throughout the entire brain.

Despite the fact that this new model is just as easy to understand, people still fall back on the discredited left-right brain model. Part of the reason, I think, is that they don’t see it as their responsibility to keep up with developments in neuroscience. But surely using 30-year-old ideas about how the brain works brings a responsibility to check every now and then that those ideas are still current.

Something similar happens with urban legends. Here’s a classic KM legend: Stewart Brand on the New College roof beams.

It’s a good story, but not strictly true. In fact the beams had been replaced with pitch pine during the 18th century, the plantation from which the oak came was not planted until a date after the hall was originally built, and forestry practice is such that oak is often available for such a use.

It is not the case that these oaks were kept for the express purpose of replacing the Hall ceiling. It is standard woodland management to grow stands of mixed broadleaf trees e.g., oaks, interplanted with hazel and ash. The hazel and ash are coppiced approximately every 20-25 years to yield poles. The oaks, however, are left to grow on and eventually, after 150 years or more, they yield large pieces for major construction work such as beams, knees etc.

If we rely too heavily on documents and ideas that are familiar (and comfortable), we run the risk of selling ourselves short. As Simon Bostock has recently pointed out, there is almost invariably more interesting stuff in what we have not written down than in what we have captured (or identified as ‘lost knowledge’). Referring to another KM story (NASA have lost the knowledge that would be necessary to get to the moon again), he points out that what was really lost was not the documentation, but the less tangible stuff.

This means, basically, that even if NASA had managed to keep track of the ‘critical blueprints’, they would have been stuffed. Design trade-offs are the stuff of tacit knowledge. Which usually lives inside stories, networks, snippets of shoptalk, chance sneaky peeks at a colleague’s notes, bitter disputes and rivalries…

In knowledge terms, we’re about to live through another Black Death, another NASA-sized readjustment.

Smart organisations will recognise this in advance and avoid the archaeological dig at the junkyard, the museum and the old-folk’s home.

Archaeology is interesting, and can shed light on past and present activities, but we don’t use Grecian urns to keep food in any more. We use new stuff. The new stuff (whatever it might be) should be our continuing focus. That’s how we should use our brains, and how those supporting effective knowledge use should encourage brain-use in their organisations.

Knowledge sharing: it may not be what you think it is

John Tropea is one of my top Twitter friends for sharing interesting links and insights. Yesterday, he unearthed a great blog post from Patrick Lambe dating from 2006 (“If We Can’t Even Describe Knowledge Sharing, How Can We Support It?”). Patrick’s post starts calmly enough:

A combination of two very different incidents reminded me this week of just how incompetent we still are in KM at capturing the complexity, richness and sophistication of human knowledge behaviours. In the first incident I was asked to do a blind review of an academic paper on knowledge sharing for a KM conference. In the second, knowledge sharing was very much a matter of life and death. Although they shared a common theme, they might as well have represented alien universes.

From there, he becomes a bit more immoderate:

Let’s look at the conference paper first. After working my way through the literature review (a necessary evil), I started into the research proposal with my stomach starting to knot up and a growing sense of incredulity.

Although the authors had adopted Davenport & Prusak’s perfectly respectable definition of knowledge as a “fluid mix of framed experience, values, contextual information, and expert insight” it was becoming increasingly apparent as I worked my way into the paper that what they really meant by “knowledge sharing” was confined to contributing to and consuming from an online KM system. The research being described was designed to identify the factors that would indicate propensity for or against said behaviours. A knowledge sharing system that could, theoretically, be engineered.

Shame on them. After a good decade of practical effort and research focused on KM, how can people still think so mechanically and bloodlessly?

Justly immoderate, I think. Read on to see why.

[Photo: Tonderghie Steading]

It has to be right that knowledge in action is more valuable to organisations than inactive knowledge. Rory Stewart’s walking and engaging with people, as I wrote yesterday, shows one way in which high quality insight into complex systems can come from simple interactions rather than formal organised learning and knowledge. This is a point that Patrick made at greater length in an excellent paper he wrote in 2002 called “The Autism of Knowledge Management” (it’s a 23-page PDF downloadable from the linked blog post).

It depresses me that I have only just discovered this paper. Patrick wrote an incredibly useful critique of some traditional and ingrained organisational attitudes to e-learning and knowledge sharing. It should be much more widely known.

Here is his starting point:

There is a profound and dangerous autism in the way we describe knowledge management and e-learning. At its root is an obsessive fascination with the idea of knowledge as content, as object, and as manipulable artefact. It is accompanied by an almost psychotic blindness to the human experiences of knowing, learning, communicating, formulating, recognising, adapting, miscommunicating, forgetting, noticing, ignoring, choosing, liking, disliking, remembering and misremembering.

Once he has expanded on this, carefully defining what he means by ‘autism’ and ‘objects’ in this context, Patrick then presents and deals with five myths that arise as a result of this way of thinking. These are the myths of reusability, universality, interchangeability, completeness, and liberation. Of these, the one that struck me most was the myth of completeness:

The myth of completeness expresses the content architects’ inability to see beyond the knowledge and learning delivery. Out of the box and into the head, and hey presto the stuff is known. The evidence for this is in the almost complete lack of attention to what happens outside the computerised storage and delivery mechanism – specifically, what people do with knowledge, how it transitions into action and behaviour. How many people in knowledge management are talking about synapses, or the soft stuff that goes on in people’s heads? Is it simply assumed, that once the knowledge is delivered, it has been successfully transferred?


Knowledge only has value if it emerges into actions, decisions and behaviours – that much is generally conceded. But few content-oriented knowledge managers think through the entire lifecycle of the knowledge objects they deal in. Acquiring a knowledge artefact is only the first stage of what’s interesting about knowledge. We don’t truly know until we have internalised, integrated into larger maps of what we know, practised, repeated, made myriad variations of mistake, built up our own personalised patterns of perception and experience.

I can think of few more succinct and clear expressions of the process of knowing. In the organisational context, we need to be sure that everyone takes responsibility for developing their own knowledge — they cannot just plug themselves into a knowledge system or e-learning package. This statement shows why. The impact of this personal responsibility becomes clear in the section on the myth of interchangeability, where Patrick makes a valuable point about information and insight that resonated especially strongly given my blog post from yesterday.

Beyond a basic informational level (and value added knowledge and learning need to go far beyond basic informational levels), when I have a specific working problem such as how to resolve a complex financial issue, the last thing I want is a necklace of evenly manufactured knowledge nuggets cross-indexed and compiled according to the key words I happen to have entered into the engine. Google can give me that, in many ways more interestingly, because it will give me different perspectives, different depths and different takes.

What really adds value to my problem-solving will be an answer that cuts to the chase, gives me deep insight on the core of my problem, and gives me light supporting information at the fringes of the problem, with the capability to probe deeper if I feel like it. Better still if the answer can be framed in relation to something I already know, so that I can call more of my own experience and perceptions into play. Evenness and interchangeability will not work for me, because life and the situations we create are neither even, nor made up of interchangeable parts.

We do have an evolved mechanism for achieving such deep knowledge results: this is the performance you can expect from a well-networked person who can sustain relatively close relationships with friends, colleagues and peers, and can perform as well as request deep knowledge services of this kind.

I suspect that (whether inside our organisations or otherwise) we can all identify people whose personal networks add significant value to their work and those around them. (And probably plenty whose silo mentality brings problems rather than focus.)

In his conclusion, Patrick presents “six basic principles that seem to work consistently in our knowledge and learning habits; principles that knowledge management and e-learning technologies need to serve.” These are:

  1. Highly effective knowledge performers prefer knowledge fragments and lumps to highly engineered knowledge parts.
  2. Parts need to talk to their neighbours.
  3. The whole is more important than the parts.
  4. Knowledge artefacts provide just enough to allow the user to get started in the real world.
  5. Learning needs change faster than learning design.
  6. Variety is the spice of life.

I need to read this section again — it didn’t resonate as well for me as the rest of the paper. That said, reading the paper again will be a delight rather than an imposition. I recommend it highly to anyone with an interest in knowledge and learning processes, and the systems we create to support them.

Thinking like a designer?

Over the last week, I have noticed a flurry of blog posts and articles referring to “design thinking.” This may just be a clustering illusion, though — the idea is not new, nor can I see any particular reason why it would surface now more than before. What I read does puzzle me, though.

[Photo: San Gimignano]

Let’s start with what is meant by design thinking.

Compare and contrast: Design Observer, October 2009: “What is Design Thinking Anyway?” and Design Observer, November 2007: “Design Thinking, Muddled Thinking.”

A quote from the latter first:

When the word “critical” is attached to the word “thinking,” the result, “critical thinking,” is a term that has clear, well defined, and well-understood meaning — certainly in the academic community, if not generally. As a counter example, the same cannot, for instance, be said about the term “art thinking.” This is not a term that can be used in any precise or meaningful way. Why? Because it could mean painting or sculpture; it could mean figurative or abstract; it could mean classical or modern or contemporary. Because it embodies so many contradictory notions, it is imprecise to the point of being meaningless — and therefore, completely understandably, it is not much used, if at all.

“Design thinking” is as problematic a term as “art thinking.” Design thinking could refer to architecture, fashion, graphic design, interior design, or product design; it could mean classical or modern or contemporary. It’s imprecise at best and meaningless at worst. More muddled thinking.

But then the more recent article takes a different view:

One popular definition is that design thinking means thinking as a designer would, which is about as circular as a definition can be. More concretely, Tim Brown of IDEO has written that design thinking is “a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.” [Tim Brown, “Design Thinking” Harvard Business Review, June 2008, p. 86.] A person or organization instilled with that discipline is constantly seeking a fruitful balance between reliability and validity, between art and science, between intuition and analytics, and between exploration and exploitation. The design-thinking organization applies the designer’s most crucial tool to the problems of business. That tool is abductive reasoning.

Then there is this. Having adopted the “design thinking is thinking like a designer” approach, this site (curated by one Nicolae) goes on as follows.

When design is stripped from forming, shaping and styling, there is a process of critical thinking and creative solving at the very core of the profession. By consciously understanding and documenting this process, a new field within the design domain emerges that deals with the creativity DNA of the design mind. When properly understood and harvested, one can transfer the creative DNA from design into virtually any discipline regardless of brain direction. This process has been recognized by thought leaders as an extremely valuable tool for fostering creativity and driving innovation.

However, this is as far as it goes — there is no further analysis of what this “process of critical thinking and creative solving” might be (apart from a meaningless allusion to the left brain-right brain dichotomy, which is a widespread fallacy[1]). So that takes us no further. (I confess that in my original draft, I was much ruder.)

The reference in this week’s Design Observer piece to abductive reasoning takes us a bit further. Here is what Wikipedia currently has to say about that, by comparison with better-known forms of reasoning.

Deduction allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. It is true by definition and is independent of sense experience. For example, if it is true (given) that the sum of the angles is 180° in all triangles, and if a certain triangle has angles of 90° and 30°, then it can be deduced that the third angle is 60°.
Induction allows inferring a entails b from multiple instantiations of a and b at the same time. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires empirical evidence for it to be true. For example, the statement ‘it is snowing outside’ is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience.
Abduction allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like “a entails b” is used for inference. As such abduction is formally equivalent to the logical fallacy affirming the consequent or Post hoc ergo propter hoc, because there are multiple possible explanations for b.
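The directional contrast between deduction and abduction can be made concrete in a few lines of code. This is a toy sketch of my own — the rain/wet-grass rule and the function names are invented for illustration, and are not from the Wikipedia article:

```python
# One known rule of the form "a entails b".
rules = {"it rained": "the grass is wet"}

def deduce(fact):
    """Deduction: from a and (a entails b), conclude b. Sound by definition."""
    return rules.get(fact)

def abduce(observation):
    """Abduction: from b and (a entails b), infer a as a possible explanation.
    Only a hypothesis -- formally this affirms the consequent, since other
    causes (a sprinkler, say) could produce the same observation."""
    return [a for a, b in rules.items() if b == observation]

print(deduce("it rained"))         # the grass is wet
print(abduce("the grass is wet"))  # ['it rained'] -- a candidate, not a certainty
```

The asymmetry is the whole point: `deduce` is guaranteed correct given its rule, while `abduce` only proposes explanations that must then be tested by other means.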

At this stage, then, abduction doesn’t look too promising as a means of solving problems. However, it might be attractive as a tool to suggest solutions which can then be tested separately. This is the way I imagine it being used — as an exploratory technique. Support for this reading comes from a reference later in the article to Charles Sanders Peirce, whose lecture “The First Rule of Logic” is apposite here. Peirce argued that whatever mode of reasoning is chosen, “inquiry of any type… has the vital power of self-correction and of growth.” Following from this, “it may truly be said that there is but one thing needful for learning the truth, and that is a hearty and active desire to learn what is true.” We then come to the heart of his argument.

Upon this first, and in one sense this sole, rule of reason, that in order to learn you must desire to learn and in so desiring not be satisfied with what you already incline to think, there follows one corollary which itself deserves to be inscribed upon the wall of every city of philosophy,

Do not block the way of inquiry.

Although it is better to be methodical in our investigations, and to consider the Economics of Research, yet there is no positive sin against logic in trying any theory which may come into our heads, so long as it is adopted in such a sense as to permit the investigation to go on unimpeded and undiscouraged.

This opens the way to the kind of instinctive, hunch-following process that appears to be presented now as “design thinking.” I am far from sure that such thought processes are unique to designers or, even, more prevalent in that community. Peirce’s suggested open-mindedness in seeking solutions, followed by clear-headed assessment of the merit of those solutions, is a model that many professionals follow, designers or not.

Neil Denny, in a post critiquing some lawyers’ thinking, points to Edward de Bono’s concept of Po. This idea is essentially the same as abduction — thinking of answers that are entirely distinct from the obvious ones in order to reach new and achievable solutions. As Neil puts it,

Po lifts us out of the normal patterns of thinking. It does not ask “Is this a good idea?” which invites a critical progression of “…And if not, why not.” Instead, po says “Let’s just accept that the following statement, however nonsensical, however illogical is a good idea. Now, what is good about it? What would work or how would it benefit our organisation, or our clients.”

The idea or the suggestion itself is put forward to stimulate the discussion. The idea can be discarded later once it has identified benefits or methodologies.

As Neil indicates, it is the discussion, or the process by which traditional logical tests are applied, where the work really happens. Going back, again, to an old post of mine, James Webb Young’s A Technique for Producing Ideas (chronologically only slightly closer to de Bono than to Peirce) is just another expression of the same basic process.

The process can be distilled into a small set of key points:

  1. Desire to learn, adapt, or create
  2. Always be open to possibilities (however odd they may seem)
  3. Choose potential solutions intuitively and imaginatively
  4. Test the chosen solutions rigorously
  5. Discard failed (and failing) solutions (including the status quo), however attractive they may appear
  6. Learn, adapt, or create
  7. Return to the beginning

This is a hard discipline, and it has to be maintained for best results.
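For what it is worth, the seven steps have the shape of a classic generate-and-test loop. The sketch below is purely my own illustration — the function, the candidate pool, and the test are all invented, and none of the cited authors put it this way:

```python
import random

def generate_and_test(candidates, is_good, rounds=1000):
    """Steps 1-7 above as a generate-and-test cycle (a hypothetical sketch)."""
    best = None
    for _ in range(rounds):
        idea = random.choice(candidates)  # steps 2-3: stay open, pick freely
        if is_good(idea):                 # step 4: test rigorously
            best = idea                   # step 6: learn, adapt, or create
        # step 5: ideas that fail the test are simply discarded
    return best                           # step 7, in practice: run it all again

# Usage: hunting for an even number in a mostly odd pool.
result = generate_and_test([3, 7, 8, 11], lambda n: n % 2 == 0)
```

The discipline lives in `is_good`: generation is deliberately uncritical, and all of the rigour is concentrated in the test.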

Interestingly, if you persist in concentrating on the things you already know and are familiar with, if you avoid opening your eyes to the widest variety of options, you are likely to be persistently unlucky. Richard Wiseman has reached this conclusion after studying luck and luckiness for some years.

[U]nlucky people miss chance opportunities because they are too focused on looking for something else. They go to parties intent on finding their perfect partner and so miss opportunities to make good friends. They look through newspapers determined to find certain types of job advertisements and as a result miss other types of jobs. Lucky people are more relaxed and open, and therefore see what is there rather than just what they are looking for.

My research revealed that lucky people generate good fortune via four basic principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophesies via positive expectations, and adopt a resilient attitude that transforms bad luck into good.

Wiseman’s work is extremely interesting, and worth exploring in more detail. (For those in Manchester at the end of the month there is even an opportunity to hear him speak as part of the Manchester Science Festival.)

It is important, however, not to get too carried away with intuition. When dealing with abstract problems, our brains tend to think in a way that can lead inexorably to error. The clustering illusion that I referred to at the beginning, together with a host of other cognitive errors, can be a real problem when assessing probability and statistics, for example, as Ben Goldacre specialises in showing us. If design thinking just means being supremely imaginative and doggedly intuitive, it is not likely to be a formula for success. If, however, it is shorthand for creative thinking coupled with critical assessment against objective standards (whether those are rules of logic or just client imperatives), then it is undeniably good.

But let’s not allow the designers to think it is their unique preserve.

[1] The reasons why this fallacy persists are beyond my scope here. However, the idea of a clear division is a fallacy. Although the mechanism is not fully understood, the brain almost certainly needs to involve both halves to function properly. Take this statement by Jerre Levy, in “Right Brain, Left Brain: Fact and Fiction,” Psychology Today, May 1985, for example:

The two-brain myth was founded on an erroneous premise: that since each hemisphere was specialized, each must function as an independent brain. But in fact, just the opposite is true. To the extent that regions are differentiated in the brain, they must integrate their activities. Indeed, it is precisely that integration that gives rise to behaviour and mental processes greater than and different from each region’s contribution. Thus, since the central premise of the mythmakers is wrong, so are all the inferences derived from it.

The New Scientist has also covered the issue (only available in full to subscribers, although it is possible to find versions of the article around the internet).

Do we want success or failure?

Reading this interview with Steve Ballmer, I was struck by his answer to the question, “How do you assess job candidates?”:

If they come from inside the business, the best predictor of future success is past success. It’s not 100 percent, but it’s a reasonable predictor.

This “success breeds success” mindset is, I think, mistaken. It is a relation of the thought process that leads to books like Good to Great. Just because a person or business has been successful does not mean that we know why they have been successful. Their previous success may just be a question of luck rather than good judgment. Correlation does not imply causation; assuming it does is just sloppy thinking. (Unsurprisingly, Ballmer recommends one of Jim Collins’s books as a particularly useful text.)

An example of a better approach is provided in this Edutopia video by Randy Nelson of Pixar, talking about the way that NASA selected its astronauts.

Their first search was this depth-based search, and what they found was there are far too many people who were deep — who were very good. They couldn’t use that as a filter. They realised what they wanted was not merely people who were successful, and in fact maybe that was what they couldn’t afford, in their depth-based search. They needed to find people who had failed and recovered.

Those who had failed and hadn’t recovered were not applying — they weren’t around any more (we’re talking about test pilots, for the most part) — that filters out one group!

So that ended up being the way that the astronaut corps was chosen — they were looking for people who had not simply avoided failure, but rather those who had seen failure and had figured out how to turn it into something. The core skill of innovators is error-recovery, not failure-avoidance.

The whole video is not very long, and is full of little gems like this one. It is certainly a much more thoughtful approach to the problem than Steve Ballmer’s.

Where do lawyers come from…?

From a number of directions, there is a lot of son et lumière at the moment about the relationships between legal education and law firms and law firms and their in-house clients. As someone who has sat on two of the three sides of these fences, I naturally have a view.

Before I started working in a law firm eight years ago, I spent nearly 13 years teaching law — for the greater part of that time at the University of Bristol. During that period there was considerable debate (fostered for the most part by the late Peter Birks) about the proper relationship between the legal academy and the profession (I speak of a singular profession, although there are actually two in England and Wales — solicitors and barristers). Birks was adamant that the legal profession should prefer law graduates to non-law graduates, but that the profession should leave the question of defining a suitable law degree to the universities. I thought he was wrong about the former question, but right about the latter. My view has not changed in the years I have spent since then observing lawyers at work.

As a law teacher, I saw many students who had clearly signed up for a law degree solely for the purpose of smoothing their progress towards a lucrative career in a commercial law firm. Some of them really resented the subjects that they were required to complete in order to get a qualifying degree, but which they saw as irrelevant to legal practice. Since I taught two of those subjects (Public Law and Jurisprudence), this resentment was plainer to me than it might have been to some of my colleagues. (Since then, many of my former students have said that in retrospect they value the wider perspective on the law that those courses gave them.)

At the same time, I knew many young lawyers who had studied law, but who spent much of their time wishing they had been able to read further into subjects that interested them more, whether that be History, Physics, or Underwater Basket Weaving. That made me wonder whether the right approach would be to turn Law into a postgraduate degree. (In the Anglo-Scottish tradition, Law is an undergraduate degree, with a postgraduate professional component for those intending to go into practice.) I do not now think that would be right — such an approach would effectively exclude from legal studies those with a genuine interest in law as a human and social science, but who had no intention of joining the profession.

The natural conclusion of these views is that the legal profession should be open to those with law and non-law undergraduate degrees. That is the position in England and Wales today, as it has been since the profession became closed to non-graduates. Certainly, non-law graduates should be required to take a postgraduate course in law, but I do not think they should be excluded altogether. My observations of lawyers in practice have not changed this conclusion — without knowing someone’s academic history, I have found it impossible to tell whether or not they have a law degree. That does not prevent those with law degrees being convinced that they have a right to priority entry into the legal profession, as some of the comments on this report in The Lawyer illustrate.

One of the reasons why a law degree is not an essential prerequisite to a legal career in England and Wales is that the vocational training of lawyers takes place entirely after the degree is obtained. I have been intrigued by the discussion of the value of a JD in business and the subsequent discussion between Ron Friedmann and Doug Cornelius, captured on Ron’s blog. Historically, only 70% or less of English law graduates have entered the legal profession (I wish I had a citation for this, but I haven’t been able to track one down — it was certainly my recollection of Bristol graduates). In some other European countries, where Law is also an undergraduate degree, the proportion is even lower. In Italy, for example, there is a long-standing tradition (exemplified by Gianni Agnelli — nicknamed “l’Avvocato”) of law graduates going directly into commerce and business. Ron and Doug’s discussion makes it clear that European assumptions about the merits of legal study are not shared by our North American counterparts.

And what of that vocational training? Toby Brown has argued powerfully that BigLaw contributes significantly to the development of lawyers who can then turn their back on those firms and strike out on their own. This argument is even stronger in England and Wales. Once our fledgling lawyers leave the classroom and the lecture hall, they still need two more years (in the case of solicitors) before they can call themselves qualified. Those two years on a training contract are typically spent in medium-sized to large law firms. (A search on LawCareers.Net suggests 180 firms in that category, which will typically have 5-100 places on offer each year. In addition, another 750 small firms are listed, but most of these will have fewer than two places on offer.) The solicitors’ profession therefore depends heavily on large commercial firms to train its new blood.

Which brings me to clients. My guess is that all clients of all law firms everywhere are pressing for lower fees (or at least reduced legal costs). If those fees are considered to be solely reimbursement for services rendered, law firms run the risk of short-changing themselves: of failing to be recognised for the wider benefit that they offer to the legal profession — especially its future. Many in-house legal teams in commerce, industry and the public sector add to the pool of qualified lawyers by offering training contracts. However, their contribution is small compared to the training work that law firms do, and to the numbers of qualified lawyers employed in those teams. My guess is that there is a net flow of qualified lawyers from private practice into in-house teams. The problem for those businesses is that their short-term cash-flow concerns might cause a shortfall in the pool of available talent in the longer term by making it more difficult for the firms to offer as many training contracts as the market will need in the future.

At the beginning of the year, I read a powerfully-argued polemic comparing major law firms to a dysfunctional coffee-shop.

Then I notice a coffeehouse that I had never seen before. It’s surprising because it’s bigger than normal and has a very staid, conservative name. More like a string of names, actually, followed by a “P.C.” I take this to mean “professional coffeehouse,” or something.

The first thing I notice inside is that the décor is heavy on the mahogany and expensive modern art. A sign on the wall talks about how they have stores in 30 states and eight countries, and that they just opened a location in Shanghai. The sign suggests that they’re very excited about this.

I go to the counter and I’m greeted by a tired-looking twentysomething. Her nametag says she’s a “Coffee Associate.”

When I’m all but delirious from my lack of caffeine, my barista finally tells me that my latte is ready. It seems well made, and it tastes fine, although I would have preferred to have it more quickly. The young woman thanks me and wishes me a good day.

“But I haven’t paid you yet.”

“Oh, don’t worry,” she says. “We’ll send you an invoice.”

Nearly two months later, I receive an envelope with the name of the coffee company on it. By now, I’ve already forgotten what I had gotten. I open the envelope and nearly faint.

And so on.

In fact, I don’t think major law firms are coffee shops. They are more like motor dealers’ servicing departments. When one buys one’s luxury car new or nearly new, the need to maintain its resale value as far as possible means that one tends to go to the most expensive (but hopefully most up-to-date) place for regular servicing and repairs — the franchised dealer or service outlet. As the car gets older, and knowledge of the technology in it becomes more pervasive, it makes more sense to save money by finding a local mechanic who can work on it. But the local mechanic can only do that if he can tap into the expertise coming out of the main dealership. He and, by extension, you the customer depend on that expertise. You have paid for it in the past by using the main dealer, and now you can reap the reward by using a cheaper alternative. This analogy is still not perfect, but it is not as pernicious as the coffee shop one. Making coffee is not as complex as maintaining a modern car, which is nowhere near as tricky as training a lawyer.