Back to basics

Recently I have caught up with two Ur-texts that I really should have read before now. The lessons learned are two-fold: the content (in both cases) is still worthy of note, and one should not judge a work by the way it has since been used.

[Image: Recycling in Volterra]

In late 1991, the Harvard Business Review published an article by Ikujiro Nonaka containing some key concepts that would be used and abused in the name of knowledge management for the next 18 years (and probably beyond). In “The Knowledge-Creating Company” (reprinted in 2007), Nonaka described a number of practices by which Japanese companies draw on their employees’ (and others’) tacit knowledge to create new or improved products.

Nonaka starts where a number of KM vendors still are:

…despite all the talk about “brain-power” and “intellectual capital,” few managers grasp the true nature of the knowledge-creating company — let alone know how to manage it. The reason: they misunderstand what knowledge is and what companies must do to exploit it.

Deeply ingrained in the traditions of Western management, from Frederick Taylor to Herbert Simon, is a view of the organization as a machine for “information processing.” According to this view, the only useful knowledge is formal and systematic — hard (read: quantifiable) data, codified procedures, universal principles. And the key metrics for measuring the value of new knowledge are similarly hard and quantifiable — increased efficiency, lower costs, improved return on investment.

Nonaka contrasts this with an approach that is exemplified by a number of Japanese companies, where managing the creation of new knowledge drives fast responses to customer needs, the creation of new markets and innovative products, and dominance in emergent technologies. In some respects, what he describes presages what we now call Enterprise 2.0 (although, tellingly, Nonaka never suggests that knowledge creation should involve technology):

Making personal knowledge available to others is the central activity of the knowledge-creating company. It takes place continuously and at all levels of the organization. And … sometimes it can take unexpected forms.

One of those unexpected forms is the development of a bread-making machine by the Matsushita Electric Company. This example of tacit knowledge converted into explicit has become unrecognisable through its repetition in numerous KM articles, fora, courses, and so on. Critically, there is no direct conversion: the tacit knowledge of how to knead bread dough is never captured as an instruction manual for bread making. What actually happened was that the insight gained by the software developer Ikuko Tanaka from observing the work of the head baker at the Osaka International Hotel was converted into a simple improvement in the way an existing bread maker kneaded dough prior to baking. The expression of this observation was a piece of explicit knowledge: the design of a new bread maker, to be sold as an improved product.

That is where the critical difference lies. To have any value at all in an organisation, people’s tacit knowledge must be able to inform new products, services, or ways of doing business. Until tacit knowledge finds such expression, it is worthless. However, that is not to say that all tacit knowledge must be documented to be useful. That interpretation is a travesty of what Nonaka has to say.

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult to communicate to others. Or, in the words of philosopher Michael Polanyi, “We know more than we can tell.” Tacit knowledge is also deeply rooted in action and in an individual’s commitment to a specific context — a craft or profession, a particular technology or product market, or the activities of a work group or team.

Nonaka then explores the interactions between the two aspects of knowledge: tacit-tacit, explicit-explicit, tacit-explicit, and explicit-tacit. From this he posits what is now known as the SECI model. In this original article, he describes four stages: socialisation, articulation, combination and internalisation. Later, “articulation” became “externalisation.” It is at this stage that technology vendors, and those who allowed themselves to be led by them, decided that tacit knowledge could somehow be converted into explicit as a business or technology process divorced from context or commitment. This is in direct contrast to Nonaka’s original position.

Articulation (converting tacit knowledge into explicit knowledge) and internalization (using that explicit knowledge to extend one’s own tacit knowledge base) are the critical steps in this spiral of knowledge. The reason is that both require the active involvement of the self — that is, personal commitment. …

Indeed, because tacit knowledge includes mental models and beliefs in addition to know-how, moving from the tacit to the explicit is really a process of articulating one’s vision of the world — what it is and what it ought to be. When employees invent new knowledge, they are also reinventing themselves, the company, and even the world.

The rest of Nonaka’s article is rarely referred to in the literature. However, it contains some really powerful material about the use of metaphor, analogy and mental models to generate new insights and trigger valuable opportunities to articulate tacit knowledge. He then turns to organisational design and the ways in which one should manage the knowledge-creating company.

The fundamental principle of organizational design at the Japanese companies I have studied is redundancy — the conscious overlapping of company information, business activities, and managerial responsibilities. …

Redundancy is important because it encourages frequent dialogue and communication. This helps create a “common cognitive ground” among employees and thus facilitates the transfer of tacit knowledge. Since members of the organization share overlapping information, they can sense what others are struggling to articulate. Redundancy also spreads new explicit knowledge through the organization so it can be internalized by employees.

This silo-busting approach is also at the heart of what has now become known as Enterprise 2.0 — the use of social software within organisations. What Nonaka described as a natural form for Japanese organisations was difficult for Western companies to emulate. The legacy of Taylorism has proved too hard to shake off, and traditional enterprise technology has not helped.

Which is where we come to the second text: Andrew McAfee’s Spring 2006 article in the MIT Sloan Management Review, “Enterprise 2.0: The Dawn of Emergent Collaboration.” This is where the use of Web 2.0 technologies started to hit the mainstream. Reading it for the first time today, already having some understanding and experience of the use of blogs and wikis in the workplace, I found it interesting to see a different, almost historical, perspective. One of the most important things, which we sometimes forget, is McAfee’s starting point. He refers to a study of knowledge workers’ practices by Thomas Davenport.

Most of the information technologies that knowledge workers currently use for communication fall into two categories. The first comprises channels — such as e-mail and person-to-person instant messaging — where digital information can be created and distributed by anyone, but the degree of commonality of this information is low (even if everyone’s e-mail sits on the same server, it’s only viewable by the few people who are part of the thread). The second category includes platforms like intranets, corporate Web sites and information portals. These are, in a way, the opposite of channels in that their content is generated, or at least approved, by a small group, but then is widely visible — production is centralized, and commonality is high.

So, what is the problem with this basic dichotomy?

[Davenport’s survey] shows that channels are used more than platforms, but this is to be expected. Knowledge workers are paid to produce, not to browse the intranet, so it makes sense for them to heavily use the tools that let them generate information. So what’s wrong with the status quo?

One problem is that many users aren’t happy with the channels and platforms available to them. Davenport found that while all knowledge workers surveyed used e-mail, 26% felt it was overused in their organizations, 21% felt overwhelmed by it and 15% felt that it actually diminished their productivity. In a survey by Forrester Research, only 44% of respondents agreed that it was easy to find what they were looking for on their intranet.

A second, more fundamental problem is that current technologies for knowledge workers aren’t doing a good job of capturing their knowledge.

In the practice of doing their jobs, knowledge workers use channels all the time and frequently visit both internal and external platforms (intranet and Internet). The channels, however, can’t be accessed or searched by anyone else, and visits to platforms leave no traces. Furthermore, only a small percentage of most people’s output winds up on a common platform.

So the promise of Enterprise 2.0 is to blend the channel with the platform: to use the content of the communication channel to create (almost without the users knowing it) a content-rich platform. McAfee goes on to describe in more detail how this was achieved within some exemplar organisations, notably Dresdner Kleinwort Wasserstein. He also derives a set of key features (Search, Links, Authoring, Tags, Extensions and Signals, or SLATES) to describe the immanent nature of Enterprise 2.0 applications as distinct from traditional enterprise technology.
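
To make that channel/platform blend concrete, here is a minimal sketch of my own (in Python, with invented names; it is emphatically not how DrKW or any vendor actually built it): a message store in which posting to the channel automatically indexes the content into a searchable common platform, so the platform accretes as a side effect of ordinary communication.

```python
from collections import defaultdict

class BlendedChannel:
    """Toy model of the channel/platform blend: every message sent down
    the 'channel' is also indexed into a shared, searchable store."""

    def __init__(self):
        self.messages = []             # the channel: a simple stream of posts
        self.index = defaultdict(set)  # the platform: term -> message ids

    def post(self, author, text, tags=()):
        msg_id = len(self.messages)
        self.messages.append({"author": author, "text": text, "tags": list(tags)})
        # Indexing happens automatically; the author does nothing extra.
        for term in list(text.lower().split()) + [t.lower() for t in tags]:
            self.index[term].add(msg_id)
        return msg_id

    def search(self, term):
        return [self.messages[i] for i in sorted(self.index[term.lower()])]

store = BlendedChannel()
store.post("analyst1", "Notes on the derivatives desk restructure", tags=("projects",))
print(store.search("derivatives"))  # found without the author filing it anywhere
```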

What interests me about McAfee’s original article is (a) how little has changed in the intervening three years (thereby undermining the call for the Harvard Business Press to rush his book out ahead of schedule), and (b) which of the SLATES elements still persist as critical issues in organisations. Effective search will always be a challenge for organisational information bases: the link-based algorithms that underpin Google are effectively unavailable inside the firewall, so their effect has to be simulated by other means. Tagging is still clearly at the heart of any worthwhile Enterprise 2.0 implementation, but even with experience it is not clear to me that users understand its importance at the outset (or even at all). The bit that is often missing is “extensions”: few applications deliver the smartness that McAfee sought.

However, the real challenge is to work out the extent to which organisations have really blurred the channel/platform distinction by using Enterprise 2.0 tools. Two things suggest to me that this will not be a quick process: e-mail overload is still a significant complaint, and the 90-9-1 rule of participation inequality seems not to be significantly diluted inside the firewall.
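
As a back-of-the-envelope illustration (my own arithmetic, not drawn from either article) of why undiluted participation inequality matters, applying the 90-9-1 proportions to an enterprise user base leaves very few active creators:

```python
def participation_split(headcount, shares=(0.90, 0.09, 0.01)):
    """Apply the 90-9-1 rule of thumb: lurkers, occasional contributors,
    and regular creators as rough fractions of a user population."""
    return tuple(round(headcount * share) for share in shares)

# Even in a 10,000-person organisation, the rule of thumb predicts only
# around 100 people regularly creating content on the common platform.
print(participation_split(10000))  # (9000, 900, 100)
```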

Coincidentally, McAfee has posted on his blog today, asking for suggestions for a new article on Enterprise 2.0, as well as explaining some of the delay with his book.

Between now and the publication date the first chapter of the book, which describes its genesis, goals, and structure, is available for download. I’m also going to write an article about Enterprise 2.0 in Harvard Business Review this fall. While I’ve got you here, let me ask a question: what would you like to have covered in the article?  Which topics related to Enterprise 2.0 should it discuss? Leave a comment, please, and let us know — I’d like to crowdsource the article a bit. And if you have any questions or comments about the book, I’d love to hear them.

I have made my suggestions above, Andy. I’ll comment on your blog as well.
