Knowledge sharing: it may not be what you think it is

John Tropea is one of my top Twitter friends for sharing interesting links and insights. Yesterday, he unearthed a great blog post from Patrick Lambe dating from 2006 (“If We Can’t Even Describe Knowledge Sharing, How Can We Support It?”). Patrick’s post starts calmly enough:

A combination of two very different incidents reminded me this week of just how incompetent we still are in KM at capturing the complexity, richness and sophistication of human knowledge behaviours. In the first incident I was asked to do a blind review of an academic paper on knowledge sharing for a KM conference. In the second, knowledge sharing was very much a matter of life and death. Although they shared a common theme, they might as well have represented alien universes.

From there, he becomes a bit more immoderate:

Let’s look at the conference paper first. After working my way through the literature review (a necessary evil), I started into the research proposal with my stomach starting to knot up and a growing sense of incredulity.

Although the authors had adopted Davenport & Prusak’s perfectly respectable definition of knowledge as a “fluid mix of framed experience, values, contextual information, and expert insight” it was becoming increasingly apparent as I worked my way into the paper that what they really meant by “knowledge sharing” was confined to contributing to and consuming from an online KM system. The research being described was designed to identify the factors that would indicate propensity for or against said behaviours. A knowledge sharing system that could, theoretically, be engineered.

Shame on them. After a good decade of practical effort and research focused on KM, how can people still think so mechanically and bloodlessly?

Justly immoderate, I think. Read on to see why.

[Image: Tonderghie Steading]

It has to be right that knowledge in action is more valuable to organisations than inactive knowledge. Rory Stewart’s walking and engaging with people, as I wrote yesterday, shows one way in which high quality insight into complex systems can come from simple interactions rather than formal organised learning and knowledge. This is a point that Patrick made at greater length in an excellent paper he wrote in 2002 called “The Autism of Knowledge Management” (it’s a 23-page PDF downloadable from the linked blog post).

It depresses me that I have only just discovered this paper. Patrick wrote an incredibly useful critique of some traditional and ingrained organisational attitudes to e-learning and knowledge sharing. It should be much more widely known.

Here is his starting point:

There is a profound and dangerous autism in the way we describe knowledge management and e-learning. At its root is an obsessive fascination with the idea of knowledge as content, as object, and as manipulable artefact. It is accompanied by an almost psychotic blindness to the human experiences of knowing, learning, communicating, formulating, recognising, adapting, miscommunicating, forgetting, noticing, ignoring, choosing, liking, disliking, remembering and misremembering.

Once he has expanded on this, carefully defining what he means by ‘autism’ and ‘objects’ in this context, Patrick then presents and deals with five myths that arise as a result of this way of thinking. These are the myths of reusability, universality, interchangeability, completeness, and liberation. Of these, the one that struck me most was the myth of completeness:

The myth of completeness expresses the content architects’ inability to see beyond the knowledge and learning delivery. Out of the box and into the head, and hey presto the stuff is known. The evidence for this is in the almost complete lack of attention to what happens outside the computerised storage and delivery mechanism – specifically, what people do with knowledge, how it transitions into action and behaviour. How many people in knowledge management are talking about synapses, or the soft stuff that goes on in people’s heads? Is it simply assumed, that once the knowledge is delivered, it has been successfully transferred?


Knowledge only has value if it emerges into actions, decisions and behaviours – that much is generally conceded. But few content-oriented knowledge managers think through the entire lifecycle of the knowledge objects they deal in. Acquiring a knowledge artefact is only the first stage of what’s interesting about knowledge. We don’t truly know until we have internalised, integrated into larger maps of what we know, practised, repeated, made myriad variations of mistake, built up our own personalised patterns of perception and experience.

I can think of few more succinct and clear expressions of the process of knowing. In the organisational context, we need to be sure that everyone takes responsibility for developing their own knowledge — they cannot just plug themselves into a knowledge system or e-learning package. This statement shows why. The impact of this personal responsibility becomes clear within the section on the myth of interchangeability, where Patrick makes a valuable point about information and insight that resonated especially given my blog post from yesterday.

Beyond a basic informational level (and value added knowledge and learning need to go far beyond basic informational levels), when I have a specific working problem such as how to resolve a complex financial issue, the last thing I want is a necklace of evenly manufactured knowledge nuggets cross-indexed and compiled according to the key words I happen to have entered into the engine. Google can give me that, in many ways more interestingly, because it will give me different perspectives, different depths and different takes.

What really adds value to my problem-solving will be an answer that cuts to the chase, gives me deep insight on the core of my problem, and gives me light supporting information at the fringes of the problem, with the capability to probe deeper if I feel like it. Better still if the answer can be framed in relation to something I already know, so that I can call more of my own experience and perceptions into play. Evenness and interchangeability will not work for me, because life and the situations we create are neither even, nor made up of interchangeable parts.

We do have an evolved mechanism for achieving such deep knowledge results: this is the performance you can expect from a well-networked person who can sustain relatively close relationships with friends, colleagues and peers, and can perform as well as request deep knowledge services of this kind.

I suspect that (whether inside our organisations or otherwise) we can all identify people whose personal networks add significant value to their work and those around them. (And probably plenty whose silo mentality brings problems rather than focus.)

In his conclusion, Patrick presents “six basic principles that seem to work consistently in our knowledge and learning habits; principles that knowledge management and e-learning technologies need to serve.” These are:

  1. Highly effective knowledge performers prefer knowledge fragments and lumps to highly engineered knowledge parts.
  2. Parts need to talk to their neighbours.
  3. The whole is more important than the parts.
  4. Knowledge artefacts provide just enough to allow the user to get started in the real world.
  5. Learning needs change faster than learning design.
  6. Variety is the spice of life.

I need to read this section again — it didn’t resonate as well for me as the rest of the paper. That said, reading the paper again will be a delight rather than an imposition. I recommend it highly to anyone with an interest in knowledge and learning processes, and the systems we create to support them.

Book review: No More Consultants

Sometimes it is too easy to think (and write) of knowledge-related activities in the abstract. I am guilty of this myself, and I have many books which address the topic in that way — even when they provide examples it is difficult to think of them in concrete real-world terms. Geoff Parcell and Chris Collison’s new book, No More Consultants, provides a welcome dose of reality.

[Image: Bridge below Haddon Hall]

This new book follows their earlier work, Learning to Fly, but has a much narrower focus. As a result, I think it is probably even more useful. The premise of No More Consultants is simple. In part it is provided by the book’s subtitle “we know more than we think,” but that is just the background. What Parcell and Collison have done in the new book is to provide a workable framework for organisations to ascertain when and why they can rely on the expertise and experience of their own people, rather than calling in consultants. (Consultants can relax — the final chapter explains that better organisational understanding can lead to more fruitful engagements.)

The basic tool that Parcell and Collison introduce, explain, and show in use is what they call the ‘River Diagram’. This is a way of visualising the levels of performance in an organisation with regard to defined competences. A large gap between the levels of competence in different parts of the organisation provides opportunities for knowledge sharing.


In order to get to the river diagram, the organisation needs to identify an area for change and define detailed levels of performance. The next stage is for different parts of the organisation to assess their own level of performance. The sum of all this work is expressed in the river diagram, and each organisational unit can then decide where to focus their efforts to change by calling on the experience of other parts of the organisation (or even externally).
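The mechanics behind the diagram are simple enough to sketch in code. The unit names, competences and five-point levels below are invented for illustration (they are not data from the book); the sketch just computes the two ‘banks’ of the river for each competence:

```python
# Hypothetical self-assessments: each unit rates itself from 1 (lowest)
# to 5 (highest) against each defined competence.
assessments = {
    "Unit A": {"Facilitation": 4, "Peer assists": 2, "Visioning": 3},
    "Unit B": {"Facilitation": 1, "Peer assists": 4, "Visioning": 3},
    "Unit C": {"Facilitation": 3, "Peer assists": 3, "Visioning": 1},
}

def river_banks(assessments):
    """For each competence, return (lowest, highest) self-assessed level.

    The band between the two banks is the 'river': the wider it is, the
    bigger the internal performance gap, and the greater the opportunity
    for one part of the organisation to learn from another."""
    competences = next(iter(assessments.values())).keys()
    return {
        c: (min(unit[c] for unit in assessments.values()),
            max(unit[c] for unit in assessments.values()))
        for c in competences
    }

for competence, (low, high) in river_banks(assessments).items():
    print(f"{competence}: levels {low}-{high} (gap {high - low})")
```

With these made-up numbers, ‘Facilitation’ shows the widest band (levels 1 to 4), so Unit A would be the natural internal source of experience for Units B and C on that competence.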

Within this basic framework, Parcell and Collison are able to spend some time fleshing out a number of key techniques, including facilitation, envisioning future developments, and peer assists. They provide a range of examples of the tools and techniques in use, ranging from development of HIV/AIDS programmes in Africa and India to knowledge sharing between Great Ormond St children’s hospital and the Ferrari F1 team. Along the way, they are also able to provide insights into ways of dealing with a number of recurring challenges to change, such as the ‘not invented here’ syndrome.

Although the book is an invaluable guide to practical knowledge sharing, it is carefully not positioned as such. Because of this, it is more likely to find a receptive audience beyond the normal KM community. This attractiveness is enhanced by the clarity and concreteness with which its central ideas are expressed.

Finally, this book does not just exist between two hard covers. Just as they did with Learning to Fly, Parcell and Collison have created an online presence for the book. Whereas Learning to Fly was complemented by a mailing list, No More Consultants is supported by a more nuanced Ning community. This allows resources related to the book to be shared and discussed, and makes it possible for people using the book to share their experiences in one place. It will be interesting to watch how people use the space to develop the book beyond Parcell and Collison’s core text.

Speaking of social software and KM

Last week, Headshift hosted an “insight event” to showcase the report on social software for law firms written by Penny Edwards and Lee Bryant. I was honoured to be asked to present, along with Sam Dimond of Clifford Chance and Steve Perry of Freshfields.

Nick Holmes wrote a great summary of the event on his blog, Binary Law, and I intended to post the notes for my session here, but Penny has now done a really impressive job of transcribing our three presentations, together with Lee’s opening remarks. I am particularly impressed because she was listening into the event from Amsterdam, and I gather the sound quality was not particularly good.

Penny’s four posts on the Headshift blog are as follows:

As well as the presentations, we had some great questions from the audience and an opportunity for offline social networking. I only wish we could have had longer to discuss all the issues that people raised. Many thanks to Penny for putting the event together, and to Lars Plougmann for hosting it. (By the way, I think the term “insight event” is a really good one.)

Now and then

A couple of days ago, Patrick Lambe posted a really thoughtful piece considering the implications of heightened awareness from the new generation of social software tools as opposed to the traditional virtues of long-term information storage and access. If you haven’t read it, do so now. (Come back when you have finished.)

[Image: Laid down]

The essence of Patrick’s piece is that when we focus our attention on the here and now (through Twitter or enterprise micro-blogging, for example), we forget to pay attention to the historically valuable information that has been archived away. This is not a problem with technology. He points to interesting research on academics’ use of electronic resources and their citation patterns.

How would online access influence knowledge discovery and use? One of the researcher’s hypotheses was that “online provision increases the distinct number of articles cited and decreases the citation concentration for recent articles, but hastens convergence to canonical classics in the more distant past.”

In fact, the opposite effect was observed.

As deeper backfiles became available, more recent articles were referenced; as more articles became available, fewer were cited and citations became more concentrated within fewer articles. These changes likely mean that the shift from browsing in print to searching online facilitates avoidance of older and less relevant literature. Moreover, hyperlinking through an online archive puts experts in touch with consensus about what is the most important prior work—what work is broadly discussed and referenced. … If online researchers can more easily find prevailing opinion, they are more likely to follow it, leading to more citations referencing fewer articles. … By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly.

Now this thinning out of long term memory (and the side effect of instant forgettability for recent work that does not attract fast consensus) is observed here in the relatively slow moving field of scholarly research. But I think there’s already evidence (and Scoble seems to sense this) that exactly the same effects occur when people and organisations in general get too-fast and too-easy access to other people’s views and ideas. It’s a psychosocial thing. We can see this in the fascination with ecologies of attention, from Tom Davenport to Chris Ward to Seth Godin. We can also see it in the poverty of attention that enterprise 2.0 pundits give to long term organisational memory and recordkeeping, in the longer term memory lapses in organisations that I have blogged about here in the past few weeks…

Jack Vinson adds another perspective on this behaviour in a post responding to Patrick’s.

I see another distinction here.  The “newer” technologies are generally about user-engagement and creation, whereas the “slower” methods are more focused on control and management activities much more so than the creation.  Seen in this light, these technologies and processes spring from the situation where writing things down was a time-consuming process.  You wanted to have it right, if you went to that much effort.  Unfortunately, the phrase “Document management is where knowledge goes to die” springs to mind.

In knowledge management, we are trying to combine the interesting knowledge that flows between people in natural conversation as well as the “hard knowledge” of documented and proven ideas and concepts.  KM has shown that technology just can’t do everything (yet?) that humans can do.  As Patrick says, technology has been a huge distraction to knowledge management.

I think Jack’s last comment is essential. What we do is a balance between the current flow and the frozen past. What I find fascinating is that until now we have had few tools to help us with the flow, whereas the databases, archives, taxonomies and repositories of traditional KM and information management have dominated the field. I think Patrick sounds an important warning bell. We should not ignore it. But our reaction shouldn’t be to reverse away from the interesting opportunities that new technologies offer.

It’s a question (yet again) of focus. Patrick opens his post with a complaint of Robert Scoble’s.

On April 19th, 2009 I asked about Mountain Bikes once on Twitter. Hundreds of people answered on both Twitter and FriendFeed. On Twitter? Try to bundle up all the answers and post them here in my comments. You can’t. They are effectively gone forever. All that knowledge is inaccessible. Yes, the FriendFeed thread remains, but it only contains answers that were done on FriendFeed and in that thread. There were others, but those other answers are now gone and can’t be found.

Yes, Twitter’s policy of deleting old tweets is poor, but even if they archived everything the value of that archive would be minimal. Much of what I see on Twitter is related to the here and now. It is the ideal place to ask the question, “I’m looking at buying a mountain bike. For $1,000 to $1,500 what would you recommend?” That was Scoble’s question, and it is time-bound. Cycle manufacturers change their offering on a seasonal and annual basis. The cost of those cycles also changes regularly. The answer to that question would be different in six months’ time. Why worry about storing that in an archive?

Knowledge in law firms is a curious blend of the old and the new. Sometimes the law that we deal with dates back hundreds of years. It is often essential to know how a concept has been developed over an extended period by the courts. The answer to the question “what is the current position on limitations of liability in long-term IT contracts?” is a combination of historic research going back to cases from previous centuries and up to the minute insight from last week’s negotiations on a major outsourcing project for a client. It is a real combination of archived information and current knowledge. We have databases and law books to help us with the archived information. What we have been lacking up until recently is an effective way of making sure that everyone has access to the current thinking. As firms become bigger and more scattered (across the globe, in some cases) making people aware of what is happening across the firm has become increasingly difficult.

Patrick’s conclusion is characteristically well expressed.

So while at the level of technology adoption and use, there is evidence that a rush toward the fast and easy end of the spectrum places heavy stresses on collective memory and reflection, at the same time, interstitial knowledge can also maintain and connect the knowledge that makes up memory. Bipolarity simply doesn’t work. We have to figure out how to see and manage our tools and our activities to satisfy a balance of knowledge needs across the entire spectrum, and take a debate about technology and turn it into a dialogue about practices. We need to return balance to the force.

That balance must be at the heart of all that we do. And the point of balance will depend very much on the demands of our businesses as well as our interest in shiny new toys. Patrick is right to draw our attention to the risks attendant on current awareness, but memory isn’t necessarily all it is cracked up to be. We should apply the same critical eye to everything that comes before us — how does this information (or class of information) help me with the problems that I need to solve? The answer will depend heavily on your organisational needs.

Making time

One of the things that can prevent us from getting things done is time, and how we manage it. Even without anyone else’s help (or hindrance), the average worker has to deal with procrastination and thinker’s block.


When those challenges are added to the need to work with colleagues and clients in a managed environment, things can get even more difficult. It is easy to get carried along with the flow of life and work without really thinking about how best to use one’s time. Clients have demands to which lawyers are keen to respond, and most firms have financial imperatives that require particular approaches to work management. One consequence is that it can be hard to find time to do other things. In fact, in many organisations, this is intentional. Tony Quinlan highlights the problem:

The drive for efficiency and perfect accounting for time is a constant anachronism — and far too much attention goes there, with added implications that activities like lunchbreaks and socialising were wasting time or somehow detrimental to the organisation. It’s often the implication that a work contract indicates a straight exchange of salary for workhours, and that any hours used at work for non-efficient work purposes is time stolen from the organisation. A very dangerous mindset to get into — and one that I’ve challenged more than a few times at conferences (typically, someone talking about email and spam and how many hours can be saved, with a spurious figure of what that means on the bottom line. Spare me.)

The contractual exchange of time for money is absolutely explicit in a law firm, where fee-earners record time in six-minute blocks, which then get converted into bills for clients. (I know many firms are moving away from the extreme version of that model, but very few of them have actually done away with the need to record time.) This can have a corrosive effect on any activities (including knowledge sharing) that are not “fee-earning” or which make it harder to reach time-related targets. Tony goes on to recall life in a more relaxed working environment.

I remember the tea trolley at Racal, back in the 1980s when I was testing radar systems.  It was actually a very useful social space — a specified point in the day when a bunch of people from different areas and specialisms met and talked as we waited to buy anything that I’d probably not allow my children to have today.

There’s a serious denigration of such social spaces these days, usually on efficiency or bottom-line grounds but (as in the case of smoking rooms) health ones too.  The value was in building cross-functional networks and communication channels and talking in non-formal environments.  And non-policed too, which made them more powerful for sharing problems or warnings of potential future issues.

Like Tony, I think the social aspect of work is crucial. If we make it harder for people to interact casually, we lose a real opportunity for creativity, change and insight. Gossip (of the non-malicious kind) almost always conveys more useful and actionable information than the formal corporate communications channels. (We need those too.)

[I]f the smoking room, the tea trolley, the staff canteen (and lunch hour) are all disappearing, where do we meet other parts of the organisation except in meetings?

A good question, Tony, and one which would frighten many people.

Do we have too many meetings? Possibly, and they may well be poorly focused as well. However, Paul Graham puts his finger on a more subtle issue. Different people are affected by meetings in different ways.

One reason programmers dislike meetings so much is that they’re on a different type of schedule from other people. Meetings cost them more.

There are two types of schedule, which I’ll call the manager’s schedule and the maker’s schedule. The manager’s schedule is for bosses. It’s embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour.

When you use time that way, it’s merely a practical problem to meet with someone. Find an open slot in your schedule, book them, and you’re done.

Most powerful people are on the manager’s schedule. It’s the schedule of command. But there’s another way of using time that’s common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started.

When you’re operating on the maker’s schedule, meetings are a disaster. A single meeting can blow a whole afternoon, by breaking it into two pieces each too small to do anything hard in. Plus you have to remember to go to the meeting. That’s no problem for someone on the manager’s schedule. There’s always something coming on the next hour; the only question is what. But when someone on the maker’s schedule has a meeting, they have to think about it.

Where do lawyers fit into this model? Are they makers or managers? And clients — where do they fit? I don’t think there is a simple answer. However, it is a question we should always ask. Will this meeting that feels innocuous to me actually disrupt another person’s day to such an extent that they feel unable to spare the time to do something that might deliver more value instead (like chatting to someone as they make a cup of coffee)? Or, alternatively, is this meeting actually the time when something critical gets done — like finding out from a client exactly what their commercial objectives are?

Back to basics

Recently I have caught up with two Ur-texts that I really should have read before. However, the lessons learned are two-fold: the content (in both cases) is still worthy of note, and one should not judge a work by the way it is used.

[Image: Recycling in Volterra]

In late 1991, the Harvard Business Review published an article by Ikujiro Nonaka containing some key concepts that would be used and abused in the name of knowledge management for the next 18 years (and probably beyond). In “The Knowledge-Creating Company” (reprinted in 2007) Nonaka described a number of practices used by Japanese companies to use their employees’ and others’ tacit knowledge to create new or improved products.

Nonaka starts where a number of KM vendors still are:

…despite all the talk about “brain-power” and “intellectual capital,” few managers grasp the true nature of the knowledge-creating company — let alone know how to manage it. The reason: they misunderstand what knowledge is and what companies must do to exploit it.

Deeply ingrained in the traditions of Western management, from Frederick Taylor to Herbert Simon, is a view of the organisation as a machine for “information processing.” According to this view, the only useful knowledge is formal and systematic — hard (read: quantifiable) data, codified procedures, universal principles. And the key metrics for measuring the value of new knowledge are similarly hard and quantifiable — increased efficiency, lower costs, improved return on investment.

Nonaka contrasts this with an approach that is exemplified by a number of Japanese companies, where managing the creation of new knowledge drives fast responses to customer needs, the creation of new markets and innovative products, and dominance in emergent technologies. In some respects, what he describes presages what we now call Enterprise 2.0 (although, tellingly, Nonaka never suggests that knowledge creation should involve technology):

Making personal knowledge available to others is the central activity of the knowledge-creating company. It takes place continuously and at all levels of the organization. And … sometimes it can take unexpected forms.

One of those unexpected forms is the development of a bread-making machine by the Matsushita Electric Company. This example of tacit knowledge being converted into explicit knowledge has become almost unrecognisable through repetition in numerous KM articles, fora, courses, and so on. Critically, there is no actual conversion — the tacit knowledge of how to knead bread dough is not captured as an instruction manual for bread making. What actually happened is that the insight gained by the software developer Ikuko Tanaka by observing the work of the head baker at the Osaka International Hotel was converted into a simple improvement in the way that an existing bread maker kneaded dough prior to baking. The expression of this observation was a piece of explicit knowledge — the design of a new bread maker, to be sold as an improved product.

That is where the critical difference is. To have any value at all in an organisation, people’s tacit knowledge must be able to inform new products, services, or ways of doing business. Until tacit knowledge finds such expression, it is worthless. However, that is not to say that all tacit knowledge must be documented to be useful. That interpretation is a travesty of what Nonaka has to say.

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult to communicate to others. Or, in the words of philosopher Michael Polanyi, “We know more than we can tell.” Tacit knowledge is also deeply rooted in action and in an individual’s commitment to a specific context — a craft or profession, a particular technology or product market, or the activities of a work group or team.

Nonaka then explores the interactions between the two aspects of knowledge: tacit-tacit, explicit-explicit, tacit-explicit, and explicit-tacit. From this he posits what is now known as the SECI model. In this original article, he describes four stages: socialisation, articulation, combination and internalisation. Later, “articulation” became “externalisation.” It is this stage where technology vendors and those who allowed themselves to be led by them decided that tacit knowledge could somehow be converted into explicit as a business or technology process divorced from context or commitment. This is in direct contrast to Nonaka’s original position.

Articulation (converting tacit knowledge into explicit knowledge) and internalization (using that explicit knowledge to extend one’s own tacit knowledge base) are the critical steps in this spiral of knowledge. The reason is that both require the active involvement of the self — that is, personal commitment. …

Indeed, because tacit knowledge includes mental models and beliefs in addition to know-how, moving from the tacit to the explicit is really a process of articulating one’s vision of the world — what it is and what it ought to be. When employees invent new knowledge, they are also reinventing themselves, the company, and even the world.

The rest of Nonaka’s article is rarely referred to in the literature. However, it contains some really powerful material about the use of metaphor, analogy and mental models to generate new insights and trigger valuable opportunities to articulate tacit knowledge. He then turns to organisational design and the ways in which one should manage the knowledge-creating company.

The fundamental principle of organizational design at the Japanese companies I have studied is redundancy — the conscious overlapping of company information, business activities, and managerial responsibilities. …

Redundancy is important because it encourages frequent dialogue and communication. This helps create a “common cognitive ground” among employees and thus facilitates the transfer of tacit knowledge. Since members of the organization share overlapping information, they can sense what others are struggling to articulate. Redundancy also spreads new explicit knowledge through the organization so it can be internalized by employees.

This silo-busting approach is also at the heart of what has now become known as Enterprise 2.0 — the use of social software within organisations. What Nonaka described as a natural form for Japanese organisations was difficult for Western companies to emulate. The legacy of Taylorism has proved too hard to shake off, and traditional enterprise technology has not helped.

Which is where we come to the second text: Andrew McAfee’s Spring 2006 article in the MIT Sloan Management Review: “Enterprise 2.0: The Dawn of Emergent Collaboration.” This is where the use of Web 2.0 technologies started to hit the mainstream. In reading this for the first time today — already having an understanding and experience of the use of blogs and wikis in the workplace — it was interesting to see a different, almost historical, perspective. One of the most important things, which we sometimes forget, is McAfee’s starting point. He refers to a study of knowledge workers’ practices by Thomas Davenport.

Most of the information technologies that knowledge workers currently use for communication fall into two categories. The first comprises channels — such as e-mail and person-to-person instant messaging — where digital information can be created and distributed by anyone, but the degree of commonality of this information is low (even if everyone’s e-mail sits on the same server, it’s only viewable by the few people who are part of the thread). The second category includes platforms like intranets, corporate Web sites and information portals. These are, in a way, the opposite of channels in that their content is generated, or at least approved, by a small group, but then is widely visible — production is centralized, and commonality is high.

So, what is the problem with this basic dichotomy?

[Davenport’s survey] shows that channels are used more than platforms, but this is to be expected. Knowledge workers are paid to produce, not to browse the intranet, so it makes sense for them to heavily use the tools that let them generate information. So what’s wrong with the status quo?

One problem is that many users aren’t happy with the channels and platforms available to them. Davenport found that while all knowledge workers surveyed used e-mail, 26% felt it was overused in their organizations, 21% felt overwhelmed by it and 15% felt that it actually diminished their productivity. In a survey by Forrester Research, only 44% of respondents agreed that it was easy to find what they were looking for on their intranet.

A second, more fundamental problem is that current technologies for knowledge workers aren’t doing a good job of capturing their knowledge.

In the practice of doing their jobs, knowledge workers use channels all the time and frequently visit both internal and external platforms (intranet and Internet). The channels, however, can’t be accessed or searched by anyone else, and visits to platforms leave no traces. Furthermore, only a small percentage of most people’s output winds up on a common platform.

So the promise of Enterprise 2.0 is to blend the channel with the platform: to use the content of the communication channel to create (almost without the users knowing it) a content-rich platform. McAfee goes on to describe in more detail how this was achieved within some exemplar organisations — notably Dresdner Kleinwort Wasserstein. He also derives a set of key features (Search, Links, Authorship, Tagging, Extensions and Signals, or SLATES) to describe the immanent nature of Enterprise 2.0 applications as distinct from traditional enterprise technology.

What interests me about McAfee’s original article is (a) how little has changed in the intervening three years (thereby undermining the call to the Harvard Business Press to rush his book to press earlier than scheduled), and (b) which of the SLATES elements still persist as critical issues in organisations. Effective search will always be a challenge for organisational information bases: the algorithms that underpin Google are effectively unavailable inside the firewall, so something else has to take their place. Tagging is still clearly at the heart of any worthwhile Enterprise 2.0 implementation, but my experience suggests that users do not understand its importance at the outset (or even at all). The bit that is often missing is “extensions”: few applications deliver the smartness that McAfee sought.

However, the real challenge is to work out the extent to which organisations have really blurred the channel/platform distinction by using Enterprise 2.0 tools. Two things suggest to me that this will be a slow process: e-mail overload is still a significant complaint, and the 90-9-1 rule of participation inequality seems not to be significantly diluted inside the firewall.

Coincidentally, McAfee has posted on his blog today, asking for suggestions for a new article on Enterprise 2.0, as well as explaining some of the delay with his book.

Between now and the publication date the first chapter of the book, which describes its genesis, goals, and structure, is available for download. I’m also going to write an article about Enterprise 2.0 in Harvard Business Review this fall. While I’ve got you here, let me ask a question: what would you like to have covered in the article?  Which topics related to Enterprise 2.0 should it discuss? Leave a comment, please, and let us know — I’d like to crowdsource the article a bit. And if you have any questions or comments about the book, I’d love to hear them.

I have made my suggestions above, Andy. I’ll comment on your blog as well.

First, think…

I wasn’t at the Reboot Britain conference today, but there were some valuable nuggets in the twitterstream for the #rebootbritain hashtag. Of these, Lee Bryant’s reference to Howard Rheingold’s closing keynote resonated most for me.

@hreingold triage skills vital to new world of flow

The most common challenge I see from people about social software, Enterprise 2.0, whatever you want to call it, is that it looks interesting, but they are busy enough as it is: can’t we do something about information overload? “Where do you find the time to do all this?” I can point to examples where these technologies can save them time (using a wiki instead of e-mail, for example), but these are often seen as problematic for some reason or another.

Wood stack

What Lee has spotted in Howard’s keynote is that people are being faced with a new challenge in life and work, and it probably frightens them.

Up until now, much of the information we need (as well as a huge amount that we don’t need) has been selected by someone else. Whether it is stories in a newspaper, TV programmes on the favourite channel or information within an organisation, someone has undertaken the task of choosing what the audience sees. As a result, we often have to live with things we don’t want. For example, I have little interest in most sports, so all newspapers have a sports section that is too long for my needs. Our tolerance for this redundancy is incredible. But we still resist changing it for a situation in which we can guarantee to see just what we want (and more of it).

According to Wikipedia (and this chimes with other accounts that I have read, so I trust it for now), triage was formalised as a means of dealing with large volumes of battlefield casualties in the First World War. One approach to medical emergencies might be to treat them as they arise, irrespective of their chances of survival. However, doing this is likely to lead to pointless treatment of hopeless cases and to a failure to treat those with a chance of survival in time. The result is a waste of resources and a higher than necessary death rate. Triage means that immediate treatment can be focused on those whose chances of survival are not negligible and where urgency is greatest. Triage in medical emergencies is now a highly developed technique, with incredibly effective results (however much it may be resented by the walking wounded who are inevitably kept waiting in hospital accident & emergency departments).

What would triage mean for information consumption? In the first place, it means no filtering before triage. One of the causes of information overload is that traditional selectors (the TV scheduler or news editor) inevitably pay no attention to the personal needs or interests of the audience. How could they? So, unlike the A&E department, we cannot rely on a triage nurse to make our choices for us. Rule zero, then, is that everyone does their own triage.

One of the key things about hospital or battlefield triage is that we don’t waste time with it if there is a clear life-saving need. So rule one of information triage is that anything life-threatening for the organisation or for ourselves needs immediate attention.

After that, we can sit down calmly to review and classify information as it comes in. Rule two: only two questions need to be asked. These are: “is this important to me in my role?” and “does this need attention now, or will its message still be fresh later?”

Taking the answers to these questions together, we should be able to assess the importance and timeliness of anything that comes up. Anything that is both time-bound and important needs attention now. Anything that is not relevant can be junked, however urgent it appears. What remains (important but able to wait) can be set aside for calm review later.

The final stage isn’t strictly triage, although it might correspond to a medical decision about who treats a patient. Having decided that a piece of information or an information flow is worthy of attention, we need to decide what to do with it. That is rule three: don’t just read it, do something with it. If information is important, it should prompt action, filing, or onward communication. What form each of those takes is not a question for now, but there is no point paying attention to something if you or your organisation immediately loses the benefit of that attention.
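The rules above can be sketched as a simple decision function. This is just an illustration of the logic; the dictionary keys and return labels are invented for the purpose.

```python
def triage(item):
    """Sketch of the information-triage rules as a decision function."""
    # Rule one: anything life-threatening (for us or the organisation)
    # gets immediate attention, with no further questions asked.
    if item.get("life_threatening"):
        return "immediate"
    # Rule two: ask only two questions --
    # "is this important to me in my role?" and
    # "does this need attention now, or will its message keep?"
    important = item.get("important", False)
    urgent = item.get("urgent", False)
    if not important:
        return "junk"      # not relevant: discard, however urgent it looks
    if urgent:
        return "act now"   # important and time-bound
    return "defer"         # important but can wait for calm review
```

Rule three (do something with whatever survives triage) then applies to everything classified as “immediate” or “act now”.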

Information triage is just like medical triage in that it insists on thought before action. That is potentially a huge change if people have been accustomed to taking in pre-digested information flows without any thought and either acting immediately or not acting at all.

That’s all off the top of my head. Have I missed anything?

We are all in this together

A couple of links to start with: John Stapp and “Has ‘IT’ Killed ‘KM’?”

Picture credit: Bill McIntyre on Flickr

I don’t have much truck with heroes. Many people do great things, in the public eye and otherwise, and it seems invidious to single certain individuals out mainly because they are better known than others who are equally worthy of credit. However, I make an exception for John Stapp.

Every time you get into a car and put on a seat belt (whether required to by law or not), you owe a debt to Dr Stapp. As a doctor in the US Air Force, he took part in experiments on human deceleration in the late 1940s. During the Second World War it had been assumed that the maximum tolerable human deceleration was 18G (that is, 18 times the force of gravity at sea level), and that death would occur above that level. The Air Force wanted to test whether this was really true, and so a research project was set up. In order to test the hypothesis, an anthropomorphic dummy was to be shot down a test track and abruptly brought to a halt. Measuring equipment would be used to gauge the effect of the deceleration on the dummy. An account of the project is provided in the Annals of Improbable Research. That account indicates that Stapp had little confidence in the dummy.

While the brass assigned a 185-pound, absolutely fearless, incredibly tough, and altogether brainless anthropomorphic dummy — known as Oscar Eightball — to ride the Gee Whiz, David Hill remembers Stapp had other ideas. On his first day on site he announced that he intended to ride the sled so that he could experience the effects of deceleration first-hand. It was a statement that Hill and everyone else found shocking. “We had a lot of experts come out and look at our situation,” he remembers. “And there was a person from M.I.T. who said, if anyone gets 18 Gs, they will break every bone in their body. That was kind of scary.”
But the young doctor had his own theories about the tests and how they ought to be run, and his nearest direct superiors were over 1000 miles away. Stapp’d done his own calculations, using a slide rule and his knowledge of physics and human anatomy, and concluded that the 18 G limit was sheer nonsense. The true figure he felt might be twice that if not more.

In the event, Oscar the dummy was used merely to test the efficacy of the test track and the ballistic sled on which his seat was first accelerated and then decelerated. Once that was done, testing could start.

Finally in December 1947 after 35 test runs, Stapp got strapped into the steel chariot and took a ride. Only one rocket bottle was fired, producing a mere 10 Gs of force. Stapp called the experience “exhilarating.” Slowly, patiently he increased the number of bottles and the stopping power of the brakes. The danger level grew with each passing test but Stapp was resolute, Hill says, even after suffering some bad injuries. And within a few months, Stapp had not only subjected himself to 18 Gs, but to nearly 35. That was a stunning figure, one that would forever change the design of airplanes and pilot restraints.

The initial tests were done with the subject (not always Stapp) facing backwards. Later on, forward-facing tests were done as well. Over the period of the research, Stapp was injured a number of times. Many of these injuries had never been seen before — nobody had been subjected to such extreme forces. Some were more mundane — he broke his wrist twice; on one occasion resetting the fracture himself as he walked back to his office. It is one thing to overcome danger that arises accidentally, quite another to put oneself directly in such extreme situations.

And he did it for the public good.

…while saving the lives of aviators was important, Kilanowski says Stapp realized from the outset that there were other, perhaps even more important aspects to his research. His experiments proved that human beings, if properly restrained and protected, could survive an incredible impact.

Cars at the time were incredibly dangerous places to be. All the padding, crumple zones and other safety features that we now take for granted had yet to be introduced.

Improving automobile safety was something no one in the Air Force was interested in, but Stapp gradually made it his personal crusade. Each and every time he was interviewed about the Gee Whiz, Kilanowski notes, he made sure to steer the conversation towards the less glamorous subject of auto safety and the need for seatbelts. Gradually Stapp began to make a difference. He invited auto makers and university researchers to view his experiments, and started a pioneering series of conferences. He even managed to stage, at Air Force expense, the first ever series of auto crash tests using dummies. When the Pentagon protested, Stapp sent them some statistics he’d managed to dig up. They showed that more Air Force pilots died each year in car wrecks than in plane crashes.

While Stapp didn’t invent the three point auto seatbelt, he helped test and perfect it. Along with a host of other auto safety appliances. And while Ralph Nader took the spotlight when Lyndon Johnson signed the 1966 law that made seatbelts mandatory, Stapp was in the room. It was one of his real moments of glory.

Ultimately, John Stapp is a hero to me because he was true to his convictions — he had a hypothesis and tested it on himself. In the modern business vernacular, he ate his own dogfood. Over and above that, he did it because he could see a real social benefit. His work, and (more importantly) the way he did it, has directly contributed to saving millions of lives over the last 60 years. Those of us who seek to change our environments, whether at work or home, or in wider society, should heed his example. If there are things that might make a difference, we shouldn’t advocate them for others (even dummies) without checking that they work for us.

Now, the other link. Greg Lambert at the 3 Geeks and a Law Blog has extended the critique of IT failing to spot and deal with the current financial crisis by suggesting that KM is equally to blame.

Knowledge Management was originally an idea that came forth in the library field as a way to catalog internal information in a similar way we where cataloging external information. However, because it would be nearly impossible for a librarian to catalog every piece of internal information, KM slowly moved over to the IT structure by attempting to make the creator of the information (that would be the attorney who wrote the document or made the contact) also be the “cataloger” of the information. Processes were created through the use of technology that were supposed to assist them in identifying the correct classification. In my opinion, this type of self-cataloging and attempt at creating a ultra-structured system creates a process that is:

  1. difficult to use;
  2. doesn’t fit the way that lawyers conduct their day-to-day work;
  3. gives a false sense of believing that the knowledge has been captured and can be easily recovered;
  4. leads to user frustration and “work around” methods; and
  5. results in expensive, underutilized software resources.

In a comment on that post, Doug Cornelius says:

I look at KM 1.0 as being centralized and KM 2.0 as being personalized. The mistake with first generation KM and why it failed was that people don’t want to contribute to a centralized system.

We have to be careful, as Bill Ives points out, not to throw out the baby in our enthusiasm to replace the 1.0 bathwater with nice fresh 2.0 bubbles. However, Greg and Doug do have a point. We made a mistake in trying to replicate, in single inanimate repositories, the hundreds or thousands of databases walking around our organisations.

The human being is an incredible thing. It comes with a motive system and an incredibly powerful (but probably unstructured) data storage, computation and retrieval apparatus. Most (probably all) examples of homo sapiens could not reproduce the contents of this apparatus, but they can produce answers to all sorts of questions. The key to successful knowledge activities in an organisation, surely, is to remember that each one of these components adds a bit of extra knowledge value to the whole.

Potentially, then, we are all knowledge heroes. When we experiment with knowledge, the more people who join in, the better the results. And the result here should be, as Greg points out, to “help us face future challenges.” We can only do that by taking advantage of the things that the people around us don’t realise that they know.

It’s mine and I will choose what to do with it

This isn’t a political blog, and it is a coincidence that I came across a couple of things that chime with each other on the same day that the UK government has started to retreat from its enthusiastic promotion of ID cards for all.

The first juicy nugget came from Anne Marie McEwan. In writing about social networking tools and KM, she linked some of the requirements for successful social software adoption (especially the need for open trusting cultures) to the use of technology for monitoring.

And therein lies a huge problem, in my strong view. Open, trusting, transparent cultures? How many of them have you experienced? That level of monitoring could be seen as a version of Bentham’s Panopticon. Although the research is now quite old, there was a little publicised (in my view) ESRC-funded research project in the UK, The Future of Work, involving 22 universities and carried out over six years. One of the publications from that research was a book, Managing to Change?. The authors note that:

“One area where ICT is rapidly expanding management choices is in monitoring and control systems … monitoring information could connect with other parts of the HRM agenda, if it is made accessible and entrusted to employees for personal feedback and learning. This has certainly not happened yet and the trend towards control without participation is deeply disquieting.

If ICT-based control continues to be seen as a management prerogative, and the monitoring information is not shared with employees, then this is likely to become a divisive and damaging issue.”

On the other hand, the technology in the right hands and cultures creates amazing potential for nurturing knowledge and innovation.

What struck me about this was that (pace Mary Abraham’s concerns about information disclosure), people quite freely disclose all sorts of information about themselves on public social networking sites, such as Facebook, LinkedIn, Twitter, and so on. The fact is that some of this sharing is excessive and ill-advised, but even people who have serious reservations about corporate or governmental use of personal information lose some of their inhibition.

Why do they do this? In part it may be naïveté, but I think sometimes this sharing is much more knowing than that. What do they know, then? The difference between this voluntary sharing and forced disclosure is the identification of the recipients and (as Anne Marie recognises) trust. Basically, we share with people, not with organisations.

The second thing I found today was much more worrying. The UK Government is developing a new strategy for sharing people’s personal information between different government departments. It starts from a reasonable position:

We have a simple aim. We want everyone who interacts with Government to be able to establish and use their identity in ways which protect them and make their lives easier. Our strategy seeks to deliver five fundamental benefits. In future, everyone should expect to be able to:

  • register their identity once and use it many times to make access to public services safe, easy and convenient;
  • know that public services will only ask them for the minimum necessary information and will do whatever is necessary to keep their identity information safe;
  • see the personal identity information held about them – and correct it if it is wrong;
  • give informed consent to public services using their personal identity information to provide services tailored to their needs; and
  • know that there is effective oversight of how their personal identity information is used.

All well and good so far, but then buried in the strategy document is this statement (on p.11):

When accessing services, individuals should need to provide only a small amount of information to prove that they are who they say they are. In some situations, an individual may only need to use their fingerprint (avoiding the need to provide information such as their address).

But I can change my address (albeit with difficulty). I can never change my fingerprints. And fingerprints are trivially easy to forge. Today alone, I must have left prints on thousands of surfaces. All it takes is for someone to lift one of those, and they would have immediate access to all sorts of services in my name. (An early scene in this video shows it being done.)

What I really want to be able to do is something like creating single-use public keys where the private key is in my control. And I want to be able to know and control where my information is being used and shared.
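The shape of that idea can be sketched with single-use credentials. This is a hypothetical illustration using hashing rather than real public-key cryptography, and every name in it is invented; a genuine scheme would use asymmetric keys so that the service never holds the secret at all.

```python
import hashlib
import secrets

def mint_single_use_credential():
    """Create a one-time credential: the individual keeps the secret,
    the service is given only a hash of it (illustrative sketch)."""
    secret = secrets.token_bytes(32)                  # stays under my control
    fingerprint = hashlib.sha256(secret).hexdigest()  # registered with the service
    return secret, fingerprint

def redeem(secret, registered, used):
    """The service checks the presented secret against its registered
    hashes and refuses any re-use, so each credential works exactly once."""
    fingerprint = hashlib.sha256(secret).hexdigest()
    if fingerprint in used or fingerprint not in registered:
        return False
    used.add(fingerprint)  # single-use: mark it spent
    return True
```

Unlike a fingerprint left on a coffee cup, a spent credential is worthless to anyone who lifts it, and I can mint a fresh one whenever I choose.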

Going back to KM, this identity crisis is what often concerns people about organisationally forced (or incentivised) knowledge sharing. Once they share, they lose control of the information they provided. They also run the risk that the information will be misused without reference back to them. It isn’t surprising that people react to this kind of KM in the same way that concerned citizens have reacted to identity cards in the UK: rather than No2ID, we have No2KM (stop the database organisation).

The conundrum focus

A discussion is currently taking place on the ActKM mailing list about the theoretical underpinnings of knowledge management. Joe Firestone, reaching into the language of philosophy, has consistently taken the view that KM only makes sense when related to the need to improve underlying knowledge processes:

I see [knowledge management] more as a field defined by a problem, with people entering it because they’re interested in some aspect of the problem that their specific knowledge seems to connect with.

Unfortunately, in more quotidian language, the word ‘problem’ suggests difficulties that need to be overcome, but sometimes KM is actually not dedicated to overcoming difficulties but to taking maximum advantage of opportunities. When Joe refers to a ‘problem’ I think he means it as a puzzle or conundrum: “how do we fill this knowledge gap?” Stated thus, I think this is a less objectionable aim for KM.

What about the nature of the conundrums that face organisations? Rightly, in linking to an earlier post of mine, Naysan Firoozmand at the Don’t Compromise blog suggested that there was a risk of vagueness in my suggestion (channelling David Weinberger) that KM might be about improving conversations in organisations.

Which is all true and good and inspiring, except I want to wave my arm about frantically like the child at the back of class and shout ‘But Sir, there’s more … !’. There’s a difference between smarter and wise that’s the same difference as the one between data and information: the former is a raw ingredient of the latter. And – when it comes to organisational performance and leadership (which is our focus here, rather than KM itself) – simply being smarter isn’t the whole story. Clever people still do stupid things, often on a regular (or worse, repeated) basis. Wise people, on the other hand, change their ways.

This is a fair challenge. Just improving the conditions for exchange of knowledge is not enough on its own. (Although I would argue that it is still an improvement on an organisation where conversations across established boundaries are rare.) There are additional tasks on top of enabling conversation or other knowledge interactions, such as selecting the participants (as Mary Abraham made clear in the post that started all this off), guiding the interaction and advising on possible outcomes.

Those additional tasks all help to bring some focus to knowledge-related interactions. The next issue relates to my last blog post. In doing what we do, we always need to ask where the most value can be generated. The answer to that question, in part, is driven by the needs expressed by others in the organisation — their problems or conundrums. However, not all problems can be resolved to generate equal value to the organisation.

The question, “what value?” is an important one, and reminds us that focus on outcomes is as important as avoiding vagueness in approach. How can we gauge how well our KM activities will turn out? Some help is provided, together with some scientific rigour, by Stephen Bounds (another ActKM regular) who has created a statistical model for KM interventions using a Monte Carlo analysis. His work produces an interesting outcome. It suggests that on average, the more general a KM programme, the less likely it is to succeed. In fact, that lack of success kicks in quite quickly.

To maximise the chance of a course of action that will lead to measurable success, knowledge managers should intervene in areas where one or more of the following conditions hold:

  • occurrences of knowledge failures are frequent
  • risks of compound knowledge failure are negligible or non-existent
  • substantial reductions in risk can be achieved through a KM intervention (typically by 50% or more)

Where possible, the costs of the intervention should be measured against the expected savings to determine the likelihood of benefits exceeding KM costs.

So: simple, narrowly defined KM activities are more likely to succeed, all other things being equal. Success here is defined as it should be: making a contribution to reductions in organisational costs (or, potentially, improvements in revenue). Stephen’s analysis is really instructive, and could be very useful in steering people away from “one size fits all” organisation-wide KM programmes.
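The intuition behind that result can be illustrated with a toy Monte Carlo run. To be clear, this is not Stephen’s actual model: the per-area success probability and the assumption of independence are invented purely to show why breadth compounds risk.

```python
import random

def simulate_programme(scope, trials=10_000, seed=42):
    """Toy Monte Carlo: a KM programme covering `scope` areas succeeds
    in a trial only if every area's intervention pays off independently.
    The per-area success probability (0.7) is invented for illustration."""
    rng = random.Random(seed)
    p_single = 0.7
    hits = sum(
        all(rng.random() < p_single for _ in range(scope))
        for _ in range(trials)
    )
    return hits / trials
```

With these invented numbers, a single-area intervention succeeds roughly 70% of the time, while a five-area programme succeeds well under 20% of the time. The broader the programme, the more things have to go right at once.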

In sum, then, our work requires us to identify the conundrums that need to be solved, together with the means by which they should be addressed, and to define the outcomes as clearly as possible for the individuals involved and for the organisation. We cannot hope to resolve all organisational conundrums by improving knowledge sharing. So how do we choose which ones to attack, and how do we conduct that attack? Those are questions we always need to keep in mind.