Jargon or vocabulary?

The British news media appear to be unanimous in approving the Local Government Association’s call for less jargon and more plain English in the documents created by local councils. Unfortunately, in their quest for a story, they seem to have missed an opportunity to look critically at what the LGA is advocating.

In December 2007, the LGA sent to councils a list of “100 words that all public sector bodies should avoid when talking to people about the work they do and the services they provide.” That sounds like a sensible thing to do, doesn’t it? Well, yes — if the concern is that the language that councils use is making life difficult for people who want or need to use their services. If, on the other hand, their view is that all council documents should have these terms removed, then I would be worried that this advice could dilute the accuracy or effectiveness of those documents. The LGA appears to have failed to distinguish between, for example, documents aimed at public users of local authority services and internal discussion papers.

As a result, the 100 “non-words” include mutants such as “predictors of beaconicity” alongside comprehensible, but non-standard, terms like “core message”. Bizarrely, it also suggests that the phrase “most important” should replace “priority”. Why? Is “priority” really more difficult for people to understand than “most important”?

Today, the LGA has doubled the size of the “bad words” list, and reiterated its demand for councils to use plain English. New on the list are words like “taxonomy” and “proactive” (neither of which need be used at all, according to the LGA). In fact, the alternatives suggested by the LGA can be just as cumbersome or confusing as the original word or phrase: can anyone tell me why the phrase “devil in the detail” is more acceptable than “cautiously welcome”? There are even inaccuracies: “privatisation” is not a synonym for “outsourcing” — an outsourced service can be provided by another public body.

Looking down the list, I see very few words or phrases that actually appear in my local council’s public documents. On the other hand, I am sure that many of them appear in their internal working papers or in documents that deal with technically complex matters. I think that is perfectly acceptable.

The point about jargon is that some of it is actually useful. It may be used to exclude people from understanding something, in which case it should be shunned, but often a simple word or phrase encapsulates an idea or concept economically in a way that is acceptable to all those who use it. For many years (and possibly still) people at IBM maintained a dictionary of their jargon. The 1990 version of that document ran to 65 pages, but not one of the words or phrases in it could be replaced by a simpler word or a shorter phrase.

I think many organisational activities (including knowledge-related work) depend on good outward communication as well as effective internal discussion. It is clearly counterproductive if the language we use in our outward communication excludes people who need to know about our work. On the other hand, a rich technical language and vocabulary can improve the efficiency and effectiveness of our work. Branding everything unusual as “jargon” and calling indiscriminately for it to be banned is pointless and hypocritical: the LGA itself uses a number of the hated words in its own mission statement.

Measuring maturity

There is a small number of meta-questions about knowledge management that people regularly grapple with. The most obvious is “what is knowledge management?” After that, the next most frequently asked must be “how do you measure KM success?” I have found at least 23 answers (or challenges) to that question, and there are undoubtedly more. I recently found an interesting commentary on the measurement game in a different context, which might shed some light on the matter.

I maintain a watching brief on the higher education sector in the UK. Partly for nostalgic reasons, partly to see trends that might affect our future lawyers, and partly because serendipity is part of this job and I think that only comes with practised observation. So I couldn’t miss Jonathan Wolff’s recent insight into the way in which the UK funding and quality agencies monitor universities.

Suppose you have applied for a job, any job. You are at one of those macho interviews where the panel members compete to see who can make you sweat the most. And this is the winning question: how do you plan to monitor and evaluate your own performance in the role? … 

Suppose your job is in business of some sort and, ultimately, you are employed to make the company money… In the end, the only thing that matters, then, is the profit you bring in. But it may take some time to build up a client base and to gather the dosh. It would be foolish to say that in the short term you should be judged on how much profit you make for the company. Rather you should monitor your activity: how many meetings you have taken, how many letters and emails you have sent, how many briefings you have been to. But, of course, that is only for openers. If the meetings don’t result in business, then you are wasting your time. So in the second phase of monitoring, you stop counting meetings and start counting things like contracts signed, goods shipped, turnover generated, or any other objective sign of real interaction.

But, once more, this is only an interim goal. You are there not to generate turnover, but profit. And once you have been around long enough that is the only thing that matters. In the third and final phase you count how much you make for the company, and stop worrying about meetings, letters or contracts signed. Who cares about how many of these there are if the bottom line stays juicy enough?

Pithily put, and accurate too. (Perhaps one should expect nothing less from a professor of philosophy at the institution inspired by Jeremy Bentham.) Unfortunately, Wolff’s tale does not end there. Our universities are stuck at the first stage — they can only monitor and measure the most obvious stuff they do. They haven’t worked out how to demonstrate how well they do at their core tasks: educating students and producing excellent research. They know that those are the bottom line (the profit equivalent), but they cannot measure how close they get to it.

The lesson from business is that over time, if you can’t count the right thing, counting the wrong thing isn’t a substitute. It isn’t even just a distraction. It is the road to ruin.

As a result, our universities are trapped in an immature relationship with their market and their paymasters. My memory of that relationship is that it was characterised (on both sides) by petulance, truculence and pedantry. I don’t think things have changed much in the last seven years.

Where does that leave KM? We go through the same phases. In the early days we demonstrate the value of our work by showing people the simple numbers — this many documents created, stored or accessed; that many people involved in knowledge sharing. Later on, we can look at the quality of this material: how good are these documents? Is there good feedback on knowledge sharing? Ultimately, though, we need to work out what our bottom line is: what are we here for, and how good are we at delivering that value? In any given organisation that may take a while, but if we stick at simple measures we shouldn’t be surprised if our paymasters and clients see us as an irrelevance. Where we can show the impact of our work on profitability, we should do so (loudly). Nobody is going to blow our trumpet for us.
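By way of illustration only, here is a minimal sketch (in Python, with entirely invented data, field names and metrics) of what those three phases might look like if a KM team were counting:

```python
# Hypothetical KM metrics at three levels of maturity. All of the
# data and field names below are invented for illustration only.

contributions = [
    {"doc": "precedent-17", "reuses": 4, "feedback": 4.5, "hours_saved": 6.0},
    {"doc": "note-clientX", "reuses": 0, "feedback": 2.0, "hours_saved": 0.0},
    {"doc": "checklist-3", "reuses": 9, "feedback": 4.8, "hours_saved": 15.5},
]

# Phase 1: simple activity counts (easy to gather, silent on value).
documents_created = len(contributions)

# Phase 2: quality signals (better: is the material good, and is it used?).
average_feedback = sum(c["feedback"] for c in contributions) / len(contributions)
reuse_rate = sum(1 for c in contributions if c["reuses"] > 0) / len(contributions)

# Phase 3: a stab at the bottom line (hardest: what value was delivered?).
total_hours_saved = sum(c["hours_saved"] for c in contributions)

print(documents_created)            # 3
print(round(average_feedback, 2))   # 3.77
print(round(reuse_rate, 2))         # 0.67
print(total_hours_saved)            # 21.5
```

The third measure is the hardest to populate honestly, which is exactly Wolff’s point: the temptation is to stop at the first.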

Some things about KM that we now know are wrong

There are a few things that act as talismans for traditional knowledge management. Here are a couple of blog posts undermining commonly held KM superstitions.

Superstition 1: We need an expertise directory

This sounds like a great idea. Clearly “know-who” is an essential part of good knowledge management. Without it, how can we justify David Weinberger’s claim that “A knowledge worker is someone whose job entails having really interesting conversations at work”? So what should we do? The obvious answer: get everyone to add their details to an expertise directory.

My instinct is that this approach is doomed to failure. For an expertise directory of this kind to work, two things need to converge. First, we need to be able to identify what information might be useful to people in the future. That obligation might fall on the system designer — to build a taxonomy that encompasses all possible future eventualities. Or it might rest with each individual — to describe in free text what they do in a way that covers all the topics that might be relevant. That’s a challenge. Second, the right people (and as many of them as possible) need to contribute.

My experience, and the reported experience of IBM (over a much larger, and therefore more authoritative, sample) is that this approach fails because neither of these factors is realistically achievable.

After almost 10 years of from-the-executives, repetitive, consistent pressure, only 60% of all IBM profiles are kept updated. (Note that Lotus Connections Profiles is the productized version of IBM BluePages, which has been around since 1998.) And that’s even with an automated email sent out every 3 months to remind people to update their profiles, plus a visual progress bar indicating how complete or incomplete a user’s profile is, plus people’s first-line managers constantly reminding them to update their profile.

So what should go in its place?

Once we gave Contributors the choice about how to share their knowledge and experience, we found that they were more likely to contribute using these social options, since they realized that the result would be fewer emails, IMs and phone calls asking for their basic expertise.

“Read my blog.”… “Check out my bookmarks.”… “Look at my activity templates.”… “Read my community forum.”

…became the new ‘RTFM’, if you will.

Now, once Seekers find an expert via Profiles, they are able to consume some of their knowledge and expertise without disrupting them. The nature of the remaining email/IM/phone requests from Seekers was about their deeper experience, their knowledge that will always remain tacit.

In practice, social bookmarking, internal blogging, communities and activity tracking (all “in the flow”) beat voluntary confession of expertise (“above the flow”). The tools? For (and by) IBM: Dogear for social bookmarking and Connections for blogging, communities and activities. Surely law firms (even those without social networking tools) should have a head start in this area. There is huge scope for mining the information about people’s work that already sits in existing databases: document management systems, billing and time-recording databases, CRM systems. If we get our systems to talk to each other, we can enable real human conversations.
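By way of a sketch only: suppose activity records had already been extracted from those systems as simple (person, topic) pairs, drawn from document metadata in the DMS, matter descriptions in time recording, and so on. All of the names, data and the crude ranking below are invented; real connectors would be firm-specific.

```python
from collections import Counter, defaultdict

# Invented activity records: in practice these would be extracted from
# the document management system, time recording and CRM databases.
ACTIVITY = [
    ("Alice", "merger control"), ("Alice", "competition law"),
    ("Alice", "competition law"), ("Bob", "competition law"),
    ("Bob", "data protection"), ("Carol", "data protection"),
    ("Carol", "data protection"), ("Carol", "outsourcing"),
]

def build_profiles(activity):
    """Aggregate raw activity into a topic count per person."""
    profiles = defaultdict(Counter)
    for person, topic in activity:
        profiles[person][topic] += 1
    return profiles

def find_people(profiles, topic, limit=3):
    """Rank people by how much of their recorded work touches the topic."""
    ranked = sorted(
        ((counts[topic], person) for person, counts in profiles.items()
         if counts[topic] > 0),
        reverse=True,
    )
    return [(person, score) for score, person in ranked[:limit]]

if __name__ == "__main__":
    profiles = build_profiles(ACTIVITY)
    print(find_people(profiles, "competition law"))
    # [('Alice', 2), ('Bob', 1)]
```

The point is not the crude ranking but the source of the data: nobody had to remember to fill in a profile for Alice to surface as the person to talk to about competition law.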

(For those who prefer a visual approach, there is a video.)

Superstition 2: KM efforts need incentives

I think I have said before that I am not a fan of knowledge repositories and their Field of Dreams (“build it and they will come”) triumph of hope over experience. Received wisdom says that for such know-how systems to work well, people need to be encouraged to use them. Neil Richards was sceptical, and asked for people’s experiences. An unscientific approach, to be sure, but the anecdotal evidence is unequivocal: incentives don’t work. Some quotes:

While an initial advocate of incentive programs for fee earner participation in KM programs, over time I found it tended to be the same fee earners participating each time and, in most cases, these fee earners informed me they would have participated in the program regardless of whether or not there had been an incentive program.

They decided to offer a bottle of wine to the person who made the most contributions. At the next annual meeting of the group, one of the team members indeed received a bottle for having made four or five contributions over the year. (The firm’s target was four a year.) And that was the end of the program! Never revived or spoken of again. The contribution rate, which was always fairly low, didn’t change, either during or after the contest.


We have tried incentives for KM participation, and I don’t want to go there again. Our worst mistakes were made when we deployed our global Knowledge Management program for customer support back in 2000. One country unit decided to give away a Swiss army knife to every engineer that wrote 10 knowledge objects. This was one of our larger country units, so we got >1000 knowledge objects written (and very armed and dangerous engineers…). Why did this fail? There was no incentive to write anything useful, or to adhere to any of the internal format guidelines. These poor knowledge objects polluted the search for ALL country units for years.

I am looking forward to Neil’s promised further thoughts on incentives, because I think one of the real challenges for knowledge management is to embed good knowledge-related behaviours in the organisation.

(A footnote to the expertise directory issue: a comment on the blog post refers to people’s use of profiles on Myspace and Facebook. I have entries in Facebook and LinkedIn, amongst others, and I find it hard to keep them up to date. However, I also catalogue my library on LibraryThing, and iTunes synchronises my listening habits to last.fm. These information flows combine in Facebook to give people a picture of my interests without me having to lift a finger.)

Recognition and understanding

It is important to us that people listen to our needs, understand them and adapt to them. We know this about ourselves, but very few of us can naturally empathise with others. One reason for this, I think, is that human beings are almost infinitely complex and yet our brains cannot cope with this variety.

So what do we do? We create archetypes. We categorise. There are even people who classify themselves (and others) according to whether they were a first, second or third child (fourth children fall into the same category as the first-born). I wonder whether this is because in small communities (with close genetic links) such generalisations are likely to be accurate. As our circles of acquaintance become larger, the weaknesses of these generalisations become more obvious; but because we struggle to do without them, we depend on them all the more heavily.

It is with these thoughts in mind that I read Graham Durant-Law’s recent blog post, and remembered Dave Snowden’s short rant against Myers-Briggs. They both point to the complete absence of scientific evidence for summing people up in a small number of categories. Graham also poses a number of questions:

Why do these modern archetypes have credibility and how do they help us? Why are they any better than Jung’s original archetypes? Where are they best used and what problems do they solve?

I can’t answer any of these, but I am interested in the way in which we think they might help us. Going back to my starting point, we want to be able to understand people (whether our managers, our team, our clients and customers, or our families) in order to work better with or for them, or to get along with them as well as possible. Doing that well is extremely hard. However, by referring to archetypes or categories we can make a reasonable attempt at empathy (especially in the relationships where a ‘quick fix’ will do).

But we are fooling ourselves. If any of these relationships is worth pursuing, it must be worth the real effort that it takes to recognise someone as an individual with unique needs, desires, concerns, preoccupations and quirks. Archetypes and categories only conceal that reality.

Nobody expects…

There was an interesting article in the NY Times last week: The Advantages of Closing a Few Doors, which looks at the work of Dan Ariely on decision-making. Ariely has just published a book, Predictably Irrational, and he has a website of the same name. The NYT article focuses on a particular aspect of his work — what happens when we try to keep our options open.

It is a natural human characteristic to invest effort in maintaining a number of different alternative courses of action. Inevitably this costs time and money (and encourages disappointment — as I mentioned in an earlier post, the more we know about something the harder it is to be satisfied with a choice against it). Lawyers often benefit from this — part of a client’s investment in indecision is represented by our fees. This behaviour is predictable, but irrational. According to Ariely, unpredictable rationality can help us make better decisions earlier. We would also avoid wasting our limited resources on options that we will never actually choose.

I have recent experience of this. We are in the process of choosing between two options that are extremely closely matched. Neither choice would be wrong. Either would be entirely defensible. The longer I think about the options and balance the different pros and cons, the more difficult it will be to find the time to implement whichever choice I make. It is time to stop dithering and be rational — just choose one.

 Via Kottke.

Projects, choice and satisfaction

Patrick Lambe points to an article in the Des Moines Register reporting on research done at the University of Iowa.

The team’s paper, “The Blissful Ignorance Effect,” shows that people who have only a little information about a product are happier with their purchases than people who have more information, the U of I reported. The paper will be published in an issue of the Journal of Consumer Research.

“We found that once people commit to buying or consuming something, there’s a kind of wishful thinking that happens and they want to like what they’ve bought,” Nayakankuppam said in a prepared statement. “The less you know about a product, the easier it is to engage in wishful thinking. The more information you have, the harder it is to kid yourself.”

This is not a surprising conclusion to anyone who has read Barry Schwartz’s book, The Paradox of Choice, or seen the video of his presentation at TED in 2005.

Psychologist Barry Schwartz takes aim at a central belief of western societies: that freedom of choice leads to personal happiness. In Schwartz’s estimation, all that choice is making us miserable. We set unreasonably high expectations, question our choices before we even make them, and blame our failures entirely on ourselves. His relatable examples, from consumer products (jeans, TVs, salad dressings) to lifestyle choices (where to live, what job to take, whom and when to marry), underscore this central point: Too many choices undermine happiness.

There is a resonance in this for me. When we do projects, we spend a long time ruminating over a massive range of choices: which supplier should we go with; whose solution fits our needs better; how should we customise the system; how can we meet the (conflicting) expectations of people in the firm; and so on. The issues identified by Schwartz and by the Iowa researchers are magnified when we have to make choices on behalf of the firm. Those of us making the choices are less likely to be happy, in the end, that we have done the right thing than if we were choosing a solution just for ourselves. People in the firm, for whom the choice is made, are much more likely to challenge the result than if they had been involved or had been choosing for themselves.

By helping us understand our psychology better, Schwartz offers us a hope of satisfaction. If we recognise that too many choices undermine our happiness, we may become happier with our selection: we would have been just as unhappy with any other choice that we might have made. Likewise, in managing projects, we can be more resolute in the decisions that we make by recognising that any choice will make some people unhappy, and that the least happiness will result from trying to please everyone.

The only challenge after that is to persuade people that the outcome is for the best, in the best of all possible worlds.