Valuing KM: some hard figures

In general, I am not keen to get bogged down in debates about the financial value of knowledge management, or the RoI of particular activities. To an extent this is because I am not well-versed in financial management, and I suspect that those who are sometimes use their expertise as a black art in a way that constrains experimentation and innovation. Also, for knowledge-intensive businesses (like law firms) it should actually be difficult to argue against effective management of knowledge activities — they are a basic health requirement, not a luxury. However, a couple of recent blog posts (together with an old memory and a conference presentation) have brought the value question to the fore for me.

Some time ago, I attended a two-day workshop on knowledge management in law firms (probably the only formal KM training I have had). One of the principles that stuck with me was that KM value can be judged by how well it supports the core elements of law firm profitability. Memorably, this comes with an acronym: RULES.

  • Realization of billing rates;
  • Utilization of attorneys;
  • Leverage of lawyers;
  • Expense control; and
  • Speed of billings and collections.

KM can help improve all of these in one way or another, and it is always useful to take time to contemplate whether we are doing our best in each of these areas. As usual, it is also important to distinguish the knowledge component from other areas of management. KM is not about improvements in time recording, for example — that may be a joint effort between IT (building a system to automate timesheets), HR (designing processes to help partners recognise good practice and manage poor time-keepers), Finance (communicating the impact of good time-keeping, billing, etc), and BD (collating feedback from clients on good and bad practice). However, along with these functions, KM people will have a part to play — perhaps by unpacking what lawyers actually do when they work and exposing where the pinch-points are, or developing clear checklists and guidance to ensure that there are as few obstacles as possible to doing all the important elements of the job.

One of the interesting points in profitability is leverage. As Toby Brown makes clear in his 3 Geeks… post today, many partners fail to understand the financial importance of driving work down to the lowest effective level.

Yet most firms don’t get this. Primarily because comp systems reward a different behavior. They’re not designed to reward profits – they reward hours and revenue. This is the case since these compensation systems were designed under a different model. This was a cost-plus business model, where profit was built into prices (a.k.a. rates). So partners have not focused on the metric of profitability in this fashion.

Once partners understand this, then it becomes quite natural to shift work to its lowest cost, effective labor source. Ron Baker will likely appreciate this statement: Tasks should be performed at their cheapest, most effective, level of timekeeper. This behavior will lead to improved profitability for law firms. But more importantly, this same behavior will lead to lower costs of service for clients. On a simple, illustrative level this means partners should not be performing tasks associates or paralegals can perform sufficiently well. Doing so undermines profits and raises costs for clients.

That point about clients is important. One of the discredited arguments against law firm KM was to claim that “KM is about saving time, and we don’t need to do that because we charge our clients for our time and so saving it undermines our income stream.”

That was always a poor argument (and, to be honest, something of a straw man), but it is even weaker now that we know how much the economy has affected our clients, and now that most firms, if not all, profess to understand their clients. However much lawyers try to empathise, many of them will miss the impact of overruns on legal fees. For me it was brought home by Tony Williams in the keynote I referred to in my last post. He pointed out that in addition to delivering commercial legal solutions for their companies, General Counsel will be under pressure from their Finance Directors to manage costs to a pre-determined budget. Any overrun on that budget will require a many-fold increase in turnover to cover the cost.

For example, take Tesco, which appears to have a net profit margin of about 4% at present. (I know nothing of that business, apart from being an occasional user of its retail services (usually under duress). All information replicated here is taken at face value from public sources.) In rough terms, this means for every £100,000 of revenue, Tesco spends £96,000, and only makes £4,000 profit. Any cost overrun eats directly into the profit (it can’t come from anywhere else), and so has to be matched with a significantly greater increase in sales. A law firm acting for Tesco that allows costs on a given transaction to increase by just £12,000 (maybe three associates taking a day and a half longer than they should have done on the job) will require the supermarket to make £300,000 more in sales just to maintain its margin. Which partner wants to tell their client that because of the firm’s shoddy KM, the client needs to find an additional £300,000 revenue? Maybe the RoI on KM needs to be measured by reference to a reduction in the number of difficult conversations partners have with clients?
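The arithmetic behind that £300,000 figure is simple enough to sketch. This is a minimal illustration using the assumed 4% margin from the example above, not Tesco's real accounts:

```python
# Illustrative sketch of the overrun arithmetic above (figures as in the post).
margin = 0.04     # assumed net profit margin (~4%)
overrun = 12_000  # cost overrun on legal fees, in GBP

# The overrun comes straight out of profit, so the client must generate
# enough extra revenue that the margin on it covers the overrun.
extra_sales = overrun / margin
print(f"Extra sales needed: £{extra_sales:,.0f}")  # £300,000
```

The same calculation shows why thin-margin clients feel overruns so keenly: halve the margin and the required extra sales double.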

The other point about valuing KM was made very forcefully by Nick Milton, developing a point made by Larry Prusak.

When I was at the KMRussia conference with Larry last week, he asked a question which made me really think hard, and it’s an interesting question for anyone concerned with KM metrics.

He asked “What percentage of a company’s non-capital spend, is spent on knowledge”?

Now I would be thinking in terms of 3% maybe – perhaps the training budget, or perhaps the budget spent on conferences, but Larry suggested that would be quite wrong.

His answer was – 60%

60% of an organisation’s non-capital spend is spent on knowledge.

The 60% figure is difficult to pin down — it depends on what other non-capital costs a business has. (For a law firm, rent may be a higher cost than for many other businesses.) The basic equation is simple enough, though:

Take the company wages bill, take away what this bill would be if everyone was paid as a new graduate, and that’s the investment in knowledge. After all, if knowledge was not valuable, you could staff the company with smart young graduates at a fraction of the cost. The only reason you don’t, is because knowledge is valuable.

Nick’s post got me thinking. How much do law firms value knowledge and, more interestingly, what return do they get on it? That latter point was not part of Nick’s argument, but it is one that can be explored quite easily for a business (like a law firm) that charges directly for the use of its knowledge. Just as one can get a figure for the value of knowledge by totting up a notional wages bill as if everyone was a raw recruit, one can do the same for the return on this value by calculating notional fee income for these raw recruits and comparing that figure with the actual fee income.

I have postulated an imaginary law firm of 1,060 people: 120 partners in three grades, 500 other qualified fee-earners in four bands, 75 trainees, and 365 support staff grouped into five bands. Unfortunately, it is not possible to embed Google spreadsheets here, but this link will go to the full set of data.

Using some rough data for salaries (I have given the partners a salary for the purposes of the calculation, even though they would usually see a share of profit), fee rates, and so on, I have arrived at the following figures.

  • Actual salary bill: £56,350,000
  • Actual revenue: £178,887,500
  • Notional salaries: £25,750,000
  • Notional revenue: £93,200,000

Please take a look at the figures in the spreadsheet and suggest amendments in the comments — I don’t claim that this is a perfect model. However, it does suggest that this firm pays its people a knowledge premium of £30,600,000 annually, in return for which it recoups additional income of £85,687,500. This looks like a pretty spectacular return on investment to me.
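For anyone who wants to check the headline sums, here they are restated as a small script. The figures are the rough ones listed above, not real salary data:

```python
# The knowledge-premium sums from the imaginary firm above, made explicit.
actual_salaries   = 56_350_000
actual_revenue    = 178_887_500
notional_salaries = 25_750_000   # everyone paid as a raw recruit
notional_revenue  = 93_200_000   # everyone billed at raw-recruit rates

knowledge_premium = actual_salaries - notional_salaries  # what the firm pays for knowledge
knowledge_return  = actual_revenue - notional_revenue    # what it recoups in extra fees

print(f"Premium paid:   £{knowledge_premium:,}")  # £30,600,000
print(f"Return earned:  £{knowledge_return:,}")   # £85,687,500
print(f"Return/premium: {knowledge_return / knowledge_premium:.2f}x")
```

On these numbers the firm earns roughly £2.80 for every £1 of knowledge premium it pays.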

Knowing together, better

I am a bit of an e-mail hoarder, so occasionally I go back into the store and find an apparently random message that strikes a new chord. So it was when I stumbled across a message from Kaye Vivian to the ActKM mailing list dating back to July 2008. Her e-mail simply drew attention to an article by Richard McDermott on communities of practice (CoPs). More significantly, Richard had listed six characteristics shared by CoPs that successfully matured into dynamic entities (rather than withering away).

Cloister, Canterbury Cathedral

To date, I have not explored the potential benefits of CoPs for knowledge purposes. Within law firms, self-organised or mandated groups are the norm. At one extreme, there is the practice group or client team, and at the other there may be groups of like-minded individuals with a common interest (such as trainee solicitors) who cluster together for support when necessary. Some of these groups work as CoPs by sharing knowledge and learning incidental to their main purpose. Reading Richard McDermott’s article, however, I thought his conclusion probably had wider resonance than just for CoPs.

So what are Richard’s six characteristics? Kaye’s e-mail referred to a post of Stan Garfield’s in which he summarised this part of the article, but Richard actually started by pointing to factors inhibiting flourishing CoPs:

When starting, communities often need to build momentum as they discover what knowledge is useful to share. Once they’ve been going for a few years, three other problems often inhibit communities’ ability to maintain the spark they had during their early years — loss of momentum, loss of attention and localism.

Once these problems are overcome, six factors are evident in successful CoPs:

Not all communities at mid-life suffer these limitations. Some are vital, full of energy and add value to both their members and the company. The most vital of the communities we reviewed shared six characteristics — clear purpose, active leadership, critical mass of engaged members, sense of accomplishment, high management expectations and real time.

Whilst I have no experience with CoPs, I think these characteristics also hold good for successful collaboration of many different types. For example, organisational wiki use works well and adds value when we see the factors manifested in the following ways.

  1. Clear purpose: A wiki which has a defined purpose (creating a resource, for example, or managing a project) flourishes where unfocussed efforts fail.
  2. Active leadership: As Stuart Mader points out in his book, Wikipatterns, a number of key roles have grown up around good wiki use. One of those is the wiki champion: “A passionate, enthusiastic champion is essential to the success of wiki…”
  3. Critical mass of engaged members: Because of the 90:9:1 principle, a significant number of people is necessary to generate valuable wiki contributions.
  4. Sense of accomplishment: One of the advantages of good wikis over traditional CoPs is that as they grow the contributions of members naturally accrete and can provide a real sense of accomplishment. By the same token, if nothing is happening with the wiki people will see it and are unlikely to be encouraged to turn it round.
  5. High management expectations: Whilst many wikis are established as grass-roots activities, they can still benefit from interest being shown by senior people in the organisation. Whilst there is an argument that Enterprise 2.0 might result in less hierarchical organisations, it is still the case that people respond to traditional management and leadership.
  6. Real time: This is where wikis can score over traditional CoPs. Whereas CoPs may require additional time (McDermott refers to one organisation where there was an expectation that 10% of people’s time was dedicated to community activities), wikis can be the place where some aspects of work actually take place (in preference to e-mail, for example). This success factor is probably better worded as real commitment.
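The 90:9:1 point in item 3 implies some simple arithmetic: if only around 1% of members create content outright (and perhaps 9% edit occasionally), a wiki needs a surprisingly large membership to sustain even a handful of regular contributors. A rough sketch, using the usual rule-of-thumb percentages rather than any measured figures:

```python
# Rough arithmetic behind the 90:9:1 principle: ~1% of members create
# content, ~9% edit occasionally, ~90% only read. To sustain a target
# number of active creators, the community must be much larger.
creators_share, editors_share, lurkers_share = 0.01, 0.09, 0.90

def members_needed(target_creators: int) -> int:
    """Smallest community size expected to yield the target number of creators."""
    return round(target_creators / creators_share)

print(members_needed(5))   # 500 members for ~5 active contributors
print(members_needed(10))  # 1000 members for ~10
```

Which is why "critical mass" is a genuine constraint: a ten-person team wiki cannot rely on the same dynamics as an organisation-wide one.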

And what does success look like? For Richard McDermott, CoPs are successful when they achieve a significant level of influence in the organisation.

But to play this role effectively, communities need to be more than informal discussion groups. They need to be empowered to be full-fledged elements of the organization, legitimately exercising influence without formal authority.

The same is probably true of wikis.

Getting attention — the comedy approach

One of the joys of Twitter is that people one follows often point to things that one would otherwise have missed. It was by that route that I became aware of the work of Chris Atherton. She is a specialist in visual perception, cognition and presentation skills. I first encountered her work when someone pointed me to her Slideshare presentation, “Visual attention: a psychologist’s perspective”, which provides a high-level overview of the issue of cognitive load in presentations.

Chris’s blog is full of valuable insights, as is her twitterstream. Her recent post on giving presentations is a great example. I especially like the way it starts — she was going to send some thoughts about presentations to a friend, but it got out of hand.

So instead of sending my friend an email, I wrote this blog post. It’s ostensibly about the mistakes students make when they give presentations, but really it’s about how the only rules you need to know about giving a good presentation are the ones about human attention.

It’s a great post, and full of really usable advice. Unlike many pontificators about PowerPoint, Chris shuns all those rules about structure.

Knowing which rules to follow and which to break is mostly a matter of practice and experience — which you may not have. So ignore, or at least treat with extreme suspicion, anything that sounds like a rule. Common rules include:

  • Use X lines of text/bullet-points per slide
  • Plan one slide for every N seconds of your talk
  • The 10/20/30 rule

These all sound perfectly sensible, but the trouble with rules is that people cling to them for reassurance, and what was originally intended as a guideline quickly becomes a noose.

Ultimately, good presenters just need to bear one thing in mind:

Concentrate on the rules of attention. The thing you most want during a presentation is people’s attention, so everything you do and say has to be about capturing that, and then keeping it. The rules of attention are more or less universal, easier to demonstrate empirically than rules about specific slide formats, and can be neatly summarised as follows: people get bored easily.

Chris then elaborates on what some of those rules are. I would summarise them here, but that would deprive you of the experience of reading her post and the excellent comments on it. I just want to single out one of those comments because it threw something into sharp focus for me.

At the end of a substantial comment, Martin Shovel remarked:

A thesis should be expressed in the form of a proposition – i.e. a sentence – the simpler and shorter the better! – that asserts or denies something about the content. ‘My holiday in Italy’ isn’t propositional; whereas ‘holidays in Italy are a nightmare’ is. It’s good to think of your proposition in the following way. Imagine you’re about to give your presentation when the fire-alarm suddenly goes off. Now you find yourself with only 30 seconds in which to sum up the point of your presentation – what you say in those 30 seconds should be your proposition.

Reading this, I was reminded of Robert McKee’s Story, and of the experience of watching a good comedian. In his exposition of good screenwriting McKee is clear that the script needs to hold the audience’s attention (the theme of bonding with the audience runs through the book), and that it often does that by tantalising the audience. Here he is at the very start of the book, for example:

When talented people write badly it’s generally for one of two reasons: Either they’re blinded by an idea they feel compelled to prove or they’re driven by an emotion they must express. When talented people write well, it is generally for this reason: They’re moved by a desire to touch the audience.


No film can be made to work without an understanding of the reactions and anticipations of the audience. You must shape your story in a way that both expresses your vision and satisfies the audience’s desires. The audience is a force as determining of story as any other element. For without it, the creative act is pointless.

A good stand-up comedian often does a similar thing. For example, here (jump to 3’33” for the relevant section) is Alun Cochrane sharing his thoughts on trains, peaches and Red Bull (depending on where you work, this may contain language that is NSFW):

The way he builds the scenario layer by layer retains the audience’s attention and even allows him room for digressions. It is a lesson worth learning. Few comedians or screenplays use bullet points to make their point (apart from the rare examples where bullet points are the point). They command attention by tantalising, asking questions without obvious answers, by engaging the audience’s brains.

Getting attention isn’t just a necessity for scriptwriters, comedians or lecturers. I think anyone who has a message to convey, in whatever format (including driving organisational change), needs to be good at this.

It’s mine and I will choose what to do with it

This isn’t a political blog, and it is a coincidence that I came across a couple of things that chime with each other on the same day that the UK government has started to retreat from its enthusiastic promotion of ID cards for all.

The first juicy nugget came from Anne Marie McEwan. In writing about social networking tools and KM, she linked some of the requirements for successful social software adoption (especially the need for open trusting cultures) to the use of technology for monitoring.

And therein lies a huge problem, in my strong view. Open, trusting, transparent cultures? How many of them have you experienced? That level of monitoring could be seen as a version of Bentham’s Panopticon. Although the research is now quite old, there was a little publicised (in my view) ESRC-funded research project in the UK, The Future of Work, involving 22 universities and carried out over six years. One of the publications from that research was a book, Managing to Change?. The authors note that:

“One area where ICT is rapidly expanding management choices is in monitoring and control systems … monitoring information could connect with other parts of the HRM agenda, if it is made accessible and entrusted to employees for personal feedback and learning. This has certainly not happened yet and the trend towards control without participation is deeply disquieting.

If ICT-based control continues to be seen as a management prerogative, and the monitoring information is not shared with employees, then this is likely to become a divisive and damaging issue.”

On the other hand, the technology in the right hands and cultures creates amazing potential for nurturing knowledge and innovation.

What struck me about this was that (pace Mary Abraham’s concerns about information disclosure), people quite freely disclose all sorts of information about themselves on public social networking sites, such as Facebook, LinkedIn, Twitter, and so on. The fact is that some of this sharing is excessive and ill-advised, but even people who have serious reservations about corporate or governmental use of personal information lose some of their inhibition.

Why do they do this? In part it may be naïveté, but I think sometimes this sharing is much more knowing than that. What do they know, then? The difference between this voluntary sharing and forced disclosure is the identification of the recipients and (as Anne Marie recognises) trust. Basically, we share with people, not with organisations.

The second thing I found today was much more worrying. The UK Government is developing a new strategy for sharing people’s personal information between different government departments. It starts from a reasonable position:

We have a simple aim. We want everyone who interacts with Government to be able to establish and use their identity in ways which protect them and make their lives easier. Our strategy seeks to deliver five fundamental benefits. In future, everyone should expect to be able to:

  • register their identity once and use it many times to make access to public services safe, easy and convenient;
  • know that public services will only ask them for the minimum necessary information and will do whatever is necessary to keep their identity information safe;
  • see the personal identity information held about them – and correct it if it is wrong;
  • give informed consent to public services using their personal identity information to provide services tailored to their needs; and
  • know that there is effective oversight of how their personal identity information is used.

All well and good so far, but then buried in the strategy document is this statement (on p.11):

When accessing services, individuals should need to provide only a small amount of information to prove that they are who they say they are. In some situations, an individual may only need to use their fingerprint (avoiding the need to provide information such as their address).

But I can change my address (albeit with difficulty). I can never change my fingerprints. And fingerprints are trivially easy to forge. Today alone, I must have left prints on thousands of surfaces. All it takes is for someone to lift one of those, and they would have immediate access to all sorts of services in my name. (An early scene in this video shows it being done.)

What I really want to be able to do is something like creating single-use public keys where the private key is in my control. And I want to be able to know and control where my information is being used and shared.
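I am not proposing a specific mechanism, but one well-known way to get single-use credentials where the secret never leaves the user is a Lamport-style hash chain, the idea behind S/KEY one-time passwords. A minimal sketch, purely illustrative (the function names and parameters are my own):

```python
import hashlib

# Lamport-style hash chain: the user keeps the chain (the secret); the
# service stores only the chain tip. Each authentication reveals one
# earlier link, which then becomes the new expected value, so any
# intercepted token is useless for a replay.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, length: int) -> list:
    """User keeps the whole chain; the service only ever sees the last link."""
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain

def authenticate(token: bytes, state: bytes):
    """Service checks that hashing the token yields its stored value,
    then rolls its state back so the token can never be replayed."""
    if h(token) == state:
        return True, token  # token becomes the new expected value
    return False, state

# User registers by giving the service the chain tip only.
chain = make_chain(b"my-private-seed", 100)
service_state = chain[-1]

ok, service_state = authenticate(chain[-2], service_state)   # first use succeeds
replay, _ = authenticate(chain[-1], service_state)           # old token fails
print(ok, replay)  # True False
```

The point of the sketch is the property I want from any identity scheme: the service can verify me without ever holding something that would let it (or a thief) impersonate me elsewhere.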

Going back to KM, this identity crisis is what often concerns people about organisationally forced (or incentivised) knowledge sharing. Once they share, they lose control of the information they provided. They also run the risk that the information will be misused without reference back to them. It isn’t surprising that people react to this kind of KM in the same way that concerned citizens have reacted to identity cards in the UK: rather than No2ID, we have No2KM (stop the database organisation).

The conundrum focus

A discussion is currently taking place on the ActKM mailing list about the theoretical underpinnings of knowledge management. Joe Firestone, reaching into the language of philosophy, has consistently taken the view that KM only makes sense when related to the need to improve underlying knowledge processes:

I see [knowledge management] more as a field defined by a problem, with people entering it because they’re interested in some aspect of the problem that their specific knowledge seems to connect with.

Unfortunately, in more quotidian language, the word ‘problem’ suggests difficulties that need to be overcome, but sometimes KM is actually not dedicated to overcoming difficulties but to taking maximum advantage of opportunities. When Joe refers to a ‘problem’ I think he means it as a puzzle or conundrum: “how do we fill this knowledge gap?” Stated thus, I think this is a less objectionable aim for KM.

What about the nature of the conundrums that face organisations? Rightly, in linking to an earlier post of mine, Naysan Firoozmand at the Don’t Compromise blog suggested that there was a risk of vagueness in my suggestion (channelling David Weinberger) that KM might be about improving conversations in organisations.

Which is all true and good and inspiring, except I want to wave my arm about frantically like the child at the back of class and shout ‘But Sir, there’s more … !’. There’s a difference between smarter and wise that’s the same difference as the one between data and information: the former is a raw ingredient of the latter. And – when it comes to organisational performance and leadership (which is our focus here, rather than KM itself) – simply being smarter isn’t the whole story. Clever people still do stupid things, often on a regular (or worse, repeated) basis. Wise people, on the other hand, change their ways.

This is a fair challenge. Just improving the conditions for exchange of knowledge is not enough on its own. (Although I would argue that it is still an improvement on an organisation where conversations across established boundaries are rare.) There are additional tasks on top of enabling conversation or other knowledge interactions, such as selecting the participants (as Mary Abraham made clear in the post that started all this off), guiding the interaction and advising on possible outcomes.

Those additional tasks all help to bring some focus to knowledge-related interactions. The next issue relates to my last blog post. In doing what we do, we always need to ask where the most value can be generated. The answer to that question, in part, is driven by the needs expressed by others in the organisation — their problems or conundrums. However, not all problems can be resolved to generate equal value to the organisation.

The question, “what value?” is an important one, and reminds us that focus on outcomes is as important as avoiding vagueness in approach. How can we gauge how well our KM activities will turn out? Some help is provided, together with some scientific rigour, by Stephen Bounds (another ActKM regular) who has created a statistical model for KM interventions using a Monte Carlo analysis. His work produces an interesting outcome. It suggests that on average, the more general a KM programme, the less likely it is to succeed. In fact, that lack of success kicks in quite quickly.

To maximise the chance of a course of action that will lead to measurable success, knowledge managers should intervene in areas where one or more of the following conditions hold:

  • occurrences of knowledge failures are frequent
  • risks of compound knowledge failure are negligible or non-existent
  • substantial reductions in risk can be achieved through a KM intervention (typically by 50% or more)

Where possible, the costs of the intervention should be measured against the expected savings to determine the likelihood of benefits exceeding KM costs.

So: simple, narrowly defined KM activities are more likely to succeed, all other things being equal. Success here is defined as it should be: making a contribution to reductions in organisational costs (or, potentially, improving revenue). Stephen’s analysis is really instructive, and could be very useful in encouraging people away from ‘one size fits all’, organisation-wide KM programmes.
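Stephen's actual model is not reproduced in his post, but a much-simplified Monte Carlo in the same spirit might look like the following. Every parameter here is invented for illustration; each trial simply asks whether the savings from avoided knowledge failures exceed the cost of the intervention:

```python
import random

# A much-simplified Monte Carlo in the spirit of Stephen Bounds's model
# (his real model is not reproduced here; all parameters are invented).
# Each trial: how many failures did the intervention actually avoid this
# year, and did the resulting savings exceed the intervention's cost?

def success_rate(failures_per_year: int, risk_reduction: float,
                 cost_per_failure: float, intervention_cost: float,
                 trials: int = 20_000) -> float:
    successes = 0
    for _ in range(trials):
        avoided = sum(random.random() < risk_reduction
                      for _ in range(failures_per_year))
        if avoided * cost_per_failure > intervention_cost:
            successes += 1
    return successes / trials

random.seed(1)
# Narrow intervention: frequent failures, large risk reduction.
narrow = success_rate(failures_per_year=50, risk_reduction=0.6,
                      cost_per_failure=2_000, intervention_cost=40_000)
# Broad programme: rare, diffuse failures, small reduction per failure.
broad = success_rate(failures_per_year=5, risk_reduction=0.2,
                     cost_per_failure=2_000, intervention_cost=40_000)
print(narrow > broad)  # the narrow, focused intervention wins far more often
```

Even this toy version reproduces the qualitative conclusion: where failures are rare and the achievable risk reduction small, the intervention almost never pays for itself.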

In sum, then, our work requires us to identify the conundrums that need to be solved, together with the means by which they should be addressed, and to define the outcomes as clearly as possible for the individuals involved and for the organisation. We cannot hope to resolve all organisational conundrums by improving knowledge sharing. So how do we choose which ones to attack, and how do we conduct that attack? Those are questions we always need to keep in mind.

Golden oldies

Every now and then, it is useful to be reminded that there is little new in the world. The most recent example for me was a pointer to a 1993 article by Peter Drucker in The Independent.


Drucker’s article presents, in the pithiest form possible, a clear denunciation of many commonly-held truths about managing businesses. These are the five sins.

  1. The worship of high profit margins and of ‘premium pricing’
  2. Mispricing a new product by charging ‘what the market will bear’
  3. Cost-driven pricing
  4. Slaughtering tomorrow’s opportunity on the allure of yesterday
  5. Feeding problems and starving opportunities

A couple of these really stood out for me. Cost-driven pricing is at the heart of many business models. Many law firms hear it from their suppliers, and they pass it on to their clients. Neither of these is sustainable. As Drucker puts it:

Most American and practically all European companies arrive at their prices by adding up costs and putting a profit margin on top. And then, as soon as they have introduced the product, they have to cut the price, redesign it at enormous expense, take losses and often drop a perfectly good product because it is priced incorrectly. Their argument? ‘We have to recover our costs and make a profit.’

This is true, but irrelevant. Customers do not see it as their job to ensure a profit for manufacturers. The only sound way to price is to start out with what the market is willing to pay – and thus, it must be assumed, what the competition will charge – and design to that price specification.

Cost-driven pricing is the reason there is no American consumer electronics industry any more. If Toyota and Nissan succeed in pushing the German luxury car makers out of the US market it will be a result of their using price-led costing.

Starting out with price and then whittling down costs is more work initially. But in the end it is much less work than to start out wrong and then spend loss-making years bringing costs into line.

Another sin is one that some KM activities mistakenly support and promote: “feeding problems and starving opportunities.” KM is often seen as a toolkit to improve problem-solving, but for Drucker, “All one can get by ‘problem-solving’ is damage containment. Only opportunities produce results and growth.”

What do we do in our knowledge work to feed opportunities, rather than dwell on problems?

Social norms and knowledge sharing

Dan Ariely’s book, Predictably Irrational, is a really eye-opening read. He deconstructs a number of traditional economic constructs with humour and insight. Most importantly, he uses careful experimentation to demonstrate exactly how irrational we are.

In the video above, Ariely talks about the difference between people’s behaviour in situations governed by social norms and in those governed by market norms. He examines this difference in Chapter 4 of the book: “The Cost of Social Norms.” Reading this chapter, I thought I had found the answer to why incentives do not work in knowledge management initiatives.

Ariely’s argument is that in a situation governed by social norms, people will help without thought of a financial reward. On the other hand, interactions governed by market norms are very different.

The exchanges are sharp-edged: wages, prices, rents, interest, and costs-and-benefits. Such market relationships are not necessarily evil or mean — in fact, they also include self-reliance, inventiveness, and individualism — but they do imply comparable benefits and prompt payments. When you are in the domain of market norms, you get what you pay for — that’s just the way it is. (p. 68)

The trouble is that whilst knowledge sharing is at its heart a social activity, it takes place in an environment governed by market norms — the workplace. Naturally enough, there is an inclination to want to recognise good knowledge behaviours in the only way that an employer knows: financially. As Neil Richards has explained, this just does not work. Ariely describes an experiment in which people were asked to perform a mundane and fruitless task on a computer. One group was paid $5 for the task, another group just 50¢, and a third was asked to do it as a favour. The productivity of the $5 group was slightly lower than the ‘favour’ group, but the 50¢ group was over 50% less productive than the others.

Perhaps we should have anticipated this. There are many examples to show that people will work much more for a cause than for cash. A few years ago, for instance, the AARP asked some lawyers if they would offer less expensive services to needy retirees, at something like $30 an hour. The lawyers said no. Then the program manager at AARP had a brilliant idea: he asked the lawyers if they would offer free services to needy retirees. Overwhelmingly, the lawyers said yes.

What was going on here? How could zero dollars be more attractive than $30? When money was mentioned, the lawyers used market norms and found the offer lacking, relative to their market salary. When no money was mentioned they used social norms and were willing to volunteer their time. Why didn’t they just accept $30, thinking of themselves as volunteers who received $30? Because once market norms enter our considerations, social norms depart. (p. 71, my emphasis)

It is possible to use gifts to thank people for their efforts, and still stay inside the social norms. However, if one suggests that the gift has a monetary value, the market norms reassert themselves. Although Ariely doesn’t say so, I suspect that using small-scale rewards on a regular basis (such as a box of chocolates for the best contribution to know-how every month) would also be regarded as market-related. Gifts need to be a surprise to be valued as part of a social interaction.

Later in this chapter, Ariely describes how a social situation can take a long time to recover from being drawn into the market. He tells the story of a children's nursery that had previously used social sanctions (guilt, mainly) to deal with parents who picked their children up late. When the nursery started to impose fines for lateness instead, parents applied market thinking and the incidence of lateness increased. When the fines were removed, the parents continued to pick up late as they had done in the fines era: guilt no longer worked as a sanction.

One problem for some law firms is that they have given knowledge management responsibilities to a specific group of people (Professional Support Lawyers, or equivalent). Because those people (rewarded according to the market) have a defined role, it can be difficult to motivate others in the firm to share knowledge as a social obligation. Unfortunately, the market value of effective knowledge sharing is almost certainly more than most employers could afford. “Money, as it turns out, is very often the most expensive way to motivate people. Social norms are not only cheaper, but often more effective as well.” (p. 86)

Having established that the balance between social and market norms is a very sensitive one, Ariely is still convinced that there is a real place for social norms in the workplace.

If corporations started thinking in terms of social norms, they would realize that these norms build loyalty and — more important — make people want to extend themselves to the degree that corporations need today: to be flexible, concerned, and willing to pitch in. That’s what a social relationship delivers. (p. 83)

As well as these thoughts on knowledge sharing in the enterprise, Ariely’s chapter explains much to me about the success of so-called social computing tools (and also why they are well-named). They play on the genuine human desire to comply with social norms of exchange, assistance, generosity and collaboration. The challenge is to import this desire into the organisational context, without running into market norms.

Your boom is not my boom

I am currently reading Generation Blend: Managing Across the Technology Age Gap. (There will be a review when I have finished it.) In the first chapter there is a graph of the birth rate in the United States which brought home to me how much our unarticulated assumptions matter.

Here is the graph (taken from Wikipedia):

This shows a clear increase in the birth rate between 1946 and 1962 (known as the Baby Boom), followed by a slump between 1963 and 1980 (Generation X) and a rise again between 1981 and 2000 (Generation Y, or the Millennials). Compare this with the birth rate in the UK, as illustrated in the graph below (drawn using figures from the Office for National Statistics).


I have shaded three areas in the UK chart, marking years after the 1930s in which the birth rate rose significantly above the norm. The peaks in 1944-49 and 1957-72 exceeded the mean for the century of just over 700,000 births per annum (apart from a slight dip below the mean in 1945). I have marked another bulge between 1986 and 1996, but the birth rate in those years remained below the century mean (the peak year is 1990, with 706,140 births, some 2,000 below the mean). By comparison, the US birth rate over the same period exceeded that of some years of its post-war baby boom.

For me, this difference between the US and UK is striking. It means that we need to be careful when using terms like “baby boom” and when assessing the impact of generational change in the workplace. As the US Bureau of Labor Statistics noted in 1985 (“Recent trends in unemployment and the labor force, 10 countries”):

In North America, birth rates peaked in the late 1950’s. In Western Europe, however, the peak occurred in the early to mid-1960’s, which coincided with the tapering off of North American birth rates. In Australia and Japan, the peak was reached much later, in the 1970’s.

In the United States and Canada, the children born during the baby boom reached working age in the early 1970’s, whereas those in Western European countries reached working age nearly 10 years later, during a period of generally declining economic growth. For Australia and Japan, the entry of the baby-boom generation is just beginning or yet to come.

There are two significant implications for the workplace. The first is that the UK baby boomers will be retiring ten years after those in the US. As a result, whereas the US needs to cope now with the “yawning gap in skills, experience, leadership, knowledge, and experience” (as Generation Blend puts it) that the loss of this cohort will bring, UK businesses have another decade to work out how to respond. On the other hand, the United States can almost count on its Millennials to replace the Baby Boomers, given the significant similarity in the birth rates. In the UK we do not have that luxury: there are not enough people in Generation Y to fill the places of the retiring generation. As a result there is almost certainly a different dynamic between the generations in the UK than there is in the US, and those of us on this side of the Atlantic need to be conscious of this when taking our cue from American studies and commentary.

Do you know where you’re going to?

Via James Mullan, here are “35 tips for getting started with social media.” The list is positioned thus:

If you are going to start using social media, you should at least have an understanding of what it’s about. Social media is not about the tools, the tools are only a facilitator.

Up to a point, Lord Copper. Actually, this is an interesting list, but it is not particularly coherent. Anyone facing the world of social media needs to answer a simple question for themselves: “why am I doing this?” There are many possible answers:

  • To find out more about the world of Web 2.0
  • To connect with people I already know
  • To connect with people I don’t yet know who have a common interest
  • To position myself or my business in this new market
  • To make money
  • To contribute information and knowledge

…and so on.

Some of these aims are honourable, some less so. That’s fine — the whole gamut of relationships can be facilitated by these tools. But you need to know what you want from them. Before working through this list of 35 tips, you need to be able to judge whether any one of them will help you serve your vision of what you want from social media. You also need to be aware that the authors of lists like these may have a different vision from yours.

The same is true for shorter lists. Kevin O’Keefe has named his top three social media tools for law firms. They are blogs, Twitter and LinkedIn. That may be true for those firms (and their clients and potential clients) that are comfortable with those tools. If they are just a me-too choice, that will be glaringly obvious to others. That is because the main goal of these tools is connection. If you or your firm feels more comfortable connecting in a different way (whether that is Web 2.0 or not), do that instead. Those you connect with will respect you for it.

And if you do follow Kevin’s advice, connect properly. Clients find it irritating enough when law firms stop producing traditional briefings. Imagine their discontent when you are no longer connecting with them via a blog that they have come to know and respect.

So what do you want from your social media? What will success look like? Can you sustain your interest in it for the long term? Once you have answered those questions, you are ready to think about the tools you need and a strategy for deploying them.

Yes — you do need a strategy. Think about e-mail. That is just a tool. It facilitates connections. But it has become a monster for many people because we didn’t think properly about how we intended to use it and the limits we should put on it. All the social media tools that look today like fluffy kittens also have the potential to become monsters as scary as e-mail. If we bear that in mind when giving them house-room, we might be able to cope better when they start to grow.

(Hat tips to Mary Abraham and Doug Cornelius for the link to Kevin’s post.)

Prescriptivity and appropriateness

One of the links in my blogroll is to Language Log, which is home to some of the most rigorous blogging on the internet. As its name indicates, it deals with language and linguistics, but in the broadest possible sense. So its authors have taken on sex differences and biological determinism, science journalism, lolcats, and legal language. However, one of the best posting categories is “Prescriptivist Poppycock.” When you need a break from pedants whingeing about split infinitives and dangling prepositions, this is where to come.

David Crystal’s book, The Fight for English (subtitled “How language pundits ate, shot, and left”) is also an attack on prescriptivist poppycock. In it, he describes how language pedantry developed during the eighteenth century, and outlines how an understanding of appropriate language can help people to understand grammar and language generally. (A point completely lost on this Amazon reviewer.) This is why appropriateness matters:

One of the aims of education, whether by parents or teachers, is to instil appropriate behaviour. If we behave inappropriately, we risk social sanctions. Language is a form of social behaviour, and it is subject to these sanctions as is everything else. The main aim of language education has thus to be the instilling into children of a sense of linguistic appropriateness — when to use one variety or style rather than another, and when to appreciate the way in which other people have used one variety or style rather than another. This is what the eighteenth-century prescriptive approach patently did not do.

When he turns to the history of grammar teaching in the UK, Crystal reduces his argument to a simple analogy. (Until the mid-1960s, English language teaching in the UK depended heavily on prescriptive texts. After that point, virtually no grammar was taught as part of the school syllabus. From the 1990s, following a period of intense academic study of English language and grammar, the National Curriculum for English incorporated language teaching that (a) balanced the study of language structure and the study of language use, and (b) aimed to instil a sense of language awareness in children.) The balance is important:

The basic problem [with historic English teaching] was that there was no means of relating the analytical skills involved in doing grammar to the practical skills involved in speaking, listening, reading, and writing. The grammarians argued that there just had to be a connection — that any child who learned to parse would inevitably end up being a better user of its language. But there was nothing at all inevitable about it. And there was an obvious counter-argument, best summed up in an analogy. I have a friend who is a wonderful car mechanic, but he is a terrible driver.

The analogy is worth developing. To be a good driver takes a lot more than knowledge of how a car engine works. All kinds of fresh sensitivities and awarenesses are involved. Indeed, most of us learned to drive with next to no understanding of what goes on inside the bonnet. It is the same with language. …[S]omething else has to happen if children are to use a knowledge of grammar in order to become better speakers, listeners, readers, or writers. A connection has to be made — and, more to the point, demonstrated.

Reading this passage, I was reminded of something else I read today. In the Anecdote blog, Shawn Callahan quotes a passage from John Medina’s book, Brain Rules. Here are the first couple of sentences:

Any learning environment that deals with only the database instincts [our ability to memorise things] or only the improvisatory instincts [our ability to imagine things] ignores one half of our ability. It is doomed to fail.

I had intended to write about this anyway, because it struck me that an approach to legal education (and, by extension, KM) that focuses on things like transaction processes and prescribed documents (held in databases) does not help to develop the creative and improvisatory instinct in lawyers. I have a feeling that many lawyers find improvisation difficult (please excuse the generalisation), and so they are happiest with KM that creates know-how databases and precedent banks. Such an approach does not actually serve them as well as they think it does.

As for the legal education point: a story from my wife. She is a corporate partner with 20 years’ experience. A couple of years ago she was leading a very complex transaction, but the other side was represented by a much less experienced lawyer. More significantly, it was clear that this lawyer had been taught some standard transaction processes and had not developed enough imagination to see that her clients’ goals could be more readily met by diverging from the standard. Because of this, my wife and both sets of clients were frustrated until the other lawyer finally gave up on her approach and caved in. I am not privy to the details, but my guess is that the result of this change of heart was not particularly beneficial to her client. At the very least, her intransigence will have prolonged the deal and increased its cost to both parties.

Prescriptivism may be dying out in the British educational system, but it is alive and well in law firms. In the current climate, how long will clients stand for it? And what are we doing to connect lawyers’ database instincts with their improvisatory instincts in order to give them the understanding to become better advisors?