Social: location, listening, connection, reciprocation

As happens from time to time, there is a bit of a backlash against Twitter and other forms of social media at the moment. Jon Ronson is publicising the paperback edition of his book, so the headlines focus on his ‘disenchantment with social media’. Ed Sheeran has chosen to concentrate on some new experiences ahead of releasing his third album, so he has turned away from social media. In highlighting these examples, we run the risk of misunderstanding what being online can mean. Despite the stories we are told, it is more important that social media are social, rather than media.

Deansgate

A city street might have many purposes, and see many forms of human behaviour: teenage shopping, adult drunkenness, coupling, casual conversation, protest, police brutality, acts of charity, theft, commercial deliveries, commuting by car, walking, running, sports events… the list is potentially endless. But we rarely define the street by one of these activities, or even a small group of them. We are more likely to talk about the activity itself, with the location either ignored or sidelined.

We have yet to reach that level of maturity when talking about online interactions. Too often it is still the case that the medium in which something happens is identified as a cause of that something. Our understanding of these platforms is thereby impoverished.

I have been ‘online’ in some form or another for almost 25 years, starting with places like Usenet and CIX. Over this time, I have noticed some recurring patterns in the way people become social online.

Where to go?

As we become familiar with our own towns and cities, we learn quickly where the best places are for particular types of gathering. There is no point in holding a protest where we can’t be seen or heard. Likewise, an intimate dinner isn’t likely to be found in a casino. There is a huge range of online places, each of which supports different kinds of interaction. Some are also specialised as to the topics they cover. On the larger platforms, such as Facebook, Twitter and LinkedIn, everyone needs to create their own community.

When things start to go wrong online, the cause is often a lack of common understanding about the nature of place. If one person thinks they are in the right place for a contemplative discussion about life, but someone else treats it as the right place for aggressive responses about the government’s political choices, there is no common ground. Sadly, this kind of mismatch still happens too often, usually because people forget or don’t know about the next point.

Lurking/listening

This step is one that many people do instinctively, but is sometimes missed by those who don’t understand its importance. Euan Semple wrote about this very well today:

We’ve all had that situation of having agreed to link with someone on LinkedIn and then the second message they send is trying to sell us something. Or maybe we’ve been reading that influential industry blogger’s posts for years and, thanks to their easy-going style, feel like we know them – but how would they react if we reach out and try to connect with them?

This is why lurking matters. Finding the people you want to connect with, working out where they spend time and watching how they behave. You need to learn the ropes, get to understand the rules and the etiquette of people and situations. Think about the person you are about to connect with. What are their challenges and priorities? What sort of language do they use? What is your motivation for connecting with them and is it mutually beneficial?

For many people, it is enough to listen. Nearly a decade ago, Jakob Nielsen drew together a number of strands of research to suggest that as a rule of thumb, 90% of participants in online communities merely observed the discussion. (Of the rest, 9% contributed occasionally and 1% were responsible for most of the contributions.) This 90-9-1 rule has been challenged more recently by researchers at the BBC, but their data was gathered by survey rather than from monitoring actual community usage.
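Nielsen’s rule of thumb lends itself to a simple sketch. The numbers below are entirely hypothetical (not the original research data); they just show how a community’s members might be classified into lurkers, occasional contributors and heavy contributors from per-member contribution counts, with an arbitrary threshold for “heavy”:

```python
def participation_split(post_counts):
    """Given a list of per-member contribution counts, return the share of
    lurkers (0 posts), occasional contributors, and heavy contributors."""
    total = len(post_counts)
    lurkers = sum(1 for c in post_counts if c == 0)
    heavy = sum(1 for c in post_counts if c >= 50)  # arbitrary threshold
    occasional = total - lurkers - heavy
    return {
        "lurkers": lurkers / total,
        "occasional": occasional / total,
        "heavy": heavy / total,
    }

# A community of 1,000 members shaped roughly like Nielsen's rule of thumb:
counts = [0] * 900 + [3] * 90 + [120] * 10
print(participation_split(counts))
# → {'lurkers': 0.9, 'occasional': 0.09, 'heavy': 0.01}
```

The point of the sketch is only that the shape of the distribution is heavily skewed; the exact thresholds, like the exact percentages, will vary from community to community.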

Whatever the figures, lurking is a natural human behaviour. As we circulate round a drinks party, we listen to the conversations around us and familiarise ourselves with what is going on before joining any of them. And we only join in when we have something interesting to add. Listening skills are valued as a means of generating trust. The same should be true of lurking. Learning about a community by sitting respectfully and observing what it does and what its key norms are can only help when the time comes to join in.

Making connections and sharing

When the time comes to speak up, rather than listen, normal social convention requires that one adds some kind of value to the conversation. That is true online just as it is in the pub. Commenting on a blog post or joining a Twitter conversation is most meaningful when the original participants benefit and the remaining audience gets something they might not have had without the intervention.

This cycle of connection and reciprocation is common offline, and is reinforced by all sorts of social and implicit norms. It is often harder to express (let alone enforce) similar norms online, which is why trolling can become a problem. Online, it is also much more likely that there is no homogeneous audience. The troll’s audience is almost certainly completely different from that of the person he attacks.

I have no deep-seated aversion to ‘content marketing’ — after all, this blog is probably an example of the genre. However, there is a growing body of material that is pushed willy-nilly via various ‘channels’ with no real appreciation of the way other people interact in those fora, and with little engagement by way of conversation. I do have an aversion to that because it uses a social medium in an unsocial way, and thereby taints it.

[In January, I will be running a workshop aimed at PSLs, but possibly of wider interest, on good social media use. Sign up on the Ark Group website if you’re interested.]

A tale of two peelers: getting the tools right

Our household batterie de cuisine covers most normal eventualities, with plenty of pots, pans and utensils. We even have three corkscrews, which will be useful if there is ever a vinous emergency. One duplication is particularly interesting, and provides a metaphor for the knowledge and collaboration tools provided by law firms or other organisations.

We have two peelers.

I am sure this isn’t surprising in itself (after all, we have three corkscrews). However, the reason why we have two peelers is interesting. My wife and I have strongly-held and divergent views on the utility of each peeler. She hates the one I prefer, and I cannot use her favourite to peel effectively.

So we both use different tools to produce the same outcome — peeled vegetables. Such a clarity of outcome is not always possible in complex organisations, but I think it is worth striving for. Without it, one can easily be sidetracked into shiny new toys whose purpose is not really clear.

Having settled on a desired outcome, one needs to work out how best to achieve it. In our household there was no consensus on this. Fortunately, peelers are inexpensive enough that we could acquire different types to satisfy everyone.

Even in more expensive situations, I think it is important to do everything possible to meet different needs when adopting new organisational tools and processes. When I look at some firms who have invested significant amounts in knowledge or collaboration tools that are rarely used, the cause is usually either a poorly defined outcome (what is this thing for, and does the average employee care about that?) or a failure to understand how people work and how that might be enhanced by the new system.

This was highlighted (again) by a tweet from today’s Enterprise 2.0 Summit in London:

‘Small pieces loosely joined’ was at the heart of many early uses of social tools within organisations. It is an approach that allows people to choose the approach that fits them and their desired outcome best. When the organisation chooses which outcomes to favour, and implements a one-size-fits-all tool, it is almost inevitable that half or more of the people who would have used it are put off by something that doesn’t work for them. As a result, it is much less likely that the desired outcome can actually be delivered.

It is still possible for organisations to find the right tools for people to use — big platforms are not the only approach. If you are interested in giving your people the peelers that they will use, I can help — please get in touch.

What do we do with knowledge?

Every now and then, I discover a new way in which my assumptions about things are challenged. Today’s challenge comes in part from the excellent commentary on my last post (which has been so popular that yesterday quickly became the busiest day ever here). I am used to discussions about the definition or usage of ‘knowledge management’, but I thought ‘knowledge sharing’ was less controversial. How wrong can one be?

Table at Plas Mawr, Conwy

The first challenge comes from Richard Veryard. His comment pointed to a more expansive blog post, “When does Communication count as Knowledge Sharing?” Richard is concerned that the baggage carried by the word ‘sharing’ can be counter-productive in the knowledge context.

In many contexts, the word “sharing” has become an annoying and patronizing synonym for “disclosure”. In nursery school we are encouraged to share the biscuits and the paints; in therapy groups we are encouraged to “share our pain”, and in the touchy-feely enterprise we are supposed to “share” our expertise by registering our knowledge on some stupid knowledge management system.

But it’s not sharing (defined by Wikipedia as “the joint use of a resource or space”). It’s just communication.

I agree that if people construe sharing as a one-way process, it is communication. (Or, more accurately, ‘telling’, since effective communication requires a listener to do more than hear what is said.) In a discussion in the comments to Richard’s post, Patrick Lambe defends his use of ‘sharing’ and Richard suggests that knowledge ‘transfer’ more accurately describes what is happening. I also commented on the post, along the following lines.

I can see a distinction between ‘sharing’ and ‘transfer’, which might be relevant. To talk of transferring knowledge suggests to me (a) that there is a knower and an inquirer and that those roles are rarely swapped, and (b) that there needs to be a knowledge object to be transferred. (As Richard puts it, “a stupid knowledge management system” is probably the receptacle for that object.)

As Patrick’s blog post and longer article make clear, the idea of the knowledge object is seriously flawed. Equally, the direction in which knowledge flows probably varies from time to time. For me, this fluidity (combined with the intangible nature of what is conveyed in these knowledge generation processes) makes me comfortable with the notion of ‘sharing’ (even given Richard’s playgroup example).

In fact, I might put it more strongly. The kind of sharing and complex knowledge generation that Patrick describes should be an organisational aspiration (not at all like ‘sharing pain’), while exchange or transfer of knowledge objects into a largely lifeless repository should be deprecated.

I think Richard’s response to that comment suggests that we are on the point of reaching agreement:

I am very happy with the notion of shared knowledge generation – for example, sitting down and sharing the analysis and interpretation of something or other. I am also happy with the idea of some collaborative process in which each participant contributes some knowledge – like everyone bringing some food to a shared picnic. But that’s not the prevailing use of the word “sharing” in the KM world.

This was a really interesting conversation, and I felt that between us we reached some kind of consensus — if what is happening with knowledge is genuinely collaborative, jointly creating an outcome that advances the organisation, then some kind of sharing must be going on. If not, we probably have some kind of unequal transfer: producing little of lasting value.

Coincidentally, I was pointed to a really interesting discussion on LinkedIn today. (Generally, I have been deeply unimpressed with LinkedIn discussions, so this was a bit of a surprise.) The question at the start of the discussion was “If the term ‘KM’ could get a do-over, what would you call the discipline?” There are currently 218 responses, some of which range into other interesting areas. One of those areas was an exchange between Nick Milton and John Tropea.

Nick responded to another participant who mentioned that her organisation had started talking about ‘knowledge sharing’ rather than ‘knowledge management’.

Many people do this, but I would just like to point out that there is a real risk here – that sharing (“push”) is done at the expense of seeking (“pull”). The risk is you create supply, with no demand.

See here for more detail: http://www.nickmilton.com/2009/03/knowledge-sharing-and-knowledge-seeking.html

The blog post at the end of that link is probably even more emphatic (I will come back to it later on). John had a different view:

Nick you say “sharing (“push”) is done at the expense of seeking (“pull”). The risk is you create supply, with no demand.”

This is true if sharing is based on conscription, or not within an ecosystem (sorry can’t think of a more appropriate word)…this is the non-interactive document-centric warehousing approach.

But what about blogging experiences and asking questions in a social network, this is more on demand rather than just-in-case…I think this has more of an equilibrium or yin and yang of share and seek.

People blog an experience as it happens which has good content recall, and has no agenda but just sharing the raw experience. Others may learn, converse, share context, etc…and unintentionally new information can be created. This is a knowledge creation system, it’s alive and is more effective than a supply-side approach of shelving information objects…and then saying we are doing KM…to me KM is in the interactions. We must create an online environment that mimics how we naturally behave offline, and I think social computing is close to this.

Nick’s response was interesting:

John – “But what about blogging experiences and asking questions in a social network, this is more on demand rather than just-in-case”

Asking questions in a network, yes (though if I were after business answers, I would ask in a business network rather than a social network). That’s a clear example of Pull.

Blogging, no, I have to disagree with you here. I am sorry – blogging is classic Push. It’s classic “just in case” someone should want to read it. Nobody “demands” that you blog about something. You are not writing your blog because you know there is someone out there who is waiting to hear from you – you write your blog firstly for yourself, and secondly “just in case” others will be interested.

Blogging is supply-side, and it’s creating stuff to be stored. OK, it is stored somewhere it can be interacted with, and there is a motivation with blogging which is absent with (say) populating an Intranet, but it is still classic supply-side Push. Also it is voluntary push. The people who blog (and I include myself in this) are the ones who want to be heard, and that’s not always the same as “the ones who need to be heard”. Knowledge often resides in the quietest people.

This exchange puts me in a quandary. I respect both Nick and John, but they appear to be at loggerheads here. Can they both be right? On the one hand, Nick’s characterisation of supply-side knowledge pushing as something to be avoided is, I think, correct. However, as I have written before, in many organisations (such as law firms), it is not always possible to know what might be useful in the future. My experience with formal knowledge capture suggests that when they set out to think about it, many people (and firms) actually rate the wrong things as important for the future. They tend to concentrate on things that are already being stored by other people (copies of journal articles or case reports), or things that are intimately linked to a context that is ephemeral. Often the information stored is fairly sketchy. One of the justifications for these failings is the avoidance of ‘information overload’. This is the worst kind of just-in-case knowledge, as Nick puts it.

I think there is a difference, though, when one looks at social tools like blogging. As Nick and John probably agree, keeping a blog is an excellent tool for personal development. The question is whether it is more than that. I think it is. I don’t blog here, nor do I encourage the same kind of activity at work, because someone might find the content useful in the future. I do it, and encourage it, because the activity itself is useful in this moment. It is neither just-in-case nor just-in-time: it just is.

In the last couple of paragraphs, I was pretty careless with my use of the words ‘information’ and ‘knowledge’. That was deliberate. The fact is that much of what we call KM is, in fact, merely manipulation of information. What social tools bring us (along with a more faceted view of their users) are really interesting ways of exposing people’s working processes. As we learnt from Nonaka all those years ago, there is little better for learning and development of knowledge than close observation of people at work. (Joining in is certainly better, but not always possible.) What we may not know is where those observations might lead, or when they might become useful. Which brings me to Nick’s blog post.

We hear a lot about “knowledge sharing”. Many of the knowledge management strategies I am asked to review, for example, talk about “creating a culture of knowledge sharing”.

I think this misses the point. As I said in my post about Push and Pull, there is no point in creating a culture of sharing, if you have no culture of re-use. Pull is a far more powerful driver for Knowledge Management than Push, and I would always look to create a culture of knowledge seeking before creating a culture of knowledge sharing.

Nick’s point about knowledge seeking is well made, and chimes with Patrick Lambe’s words that I quoted last time:

We do have an evolved mechanism for achieving such deep knowledge results: this is the performance you can expect from a well-networked person who can sustain relatively close relationships with friends, colleagues and peers, and can perform as well as request deep knowledge services of this kind.

Requesting, seeking, performing: all these are aspects of sharing. Like Richard Veryard’s “traditional KM”, Nick characterises sharing as a one-way process, but that is not right — that is merely the way it has come to be interpreted. Sharing must be a two-way process: it needs someone to ask as well as someone who answers, and those roles might change from day to day. However, Nick’s point about re-use is a really interesting one.

I suggested above that some firms’ KM systems might contain material that was ultimately useless. More precisely, I think uselessness arises at the point where re-use becomes impossible because the material we need to use is more flawed than not. These flaws might arise because of the age of the material, combined with its precise linkage with a specific person, client, subject and so on. Lawyers understand this perfectly — it is the same process we use to decide whether a case is a useful precedent or not. Proximity in time, matter or context contributes significantly to this assessment. However, an old case on a very different question of law in a very different commercial context is not necessarily useless.

One of the areas of law I spent some time researching was the question of Crown privilege. A key case in that area involved the deportation of a Zairean national in 1990. In the arguments before the House of Lords, the law dating back to the English Civil War was challenged by reference to cases on subjects as varied as EC regulation of fisheries and potato marketing. That those cases might have been re-used in such a way could not have been predicted when they were decided or reported.

In many contexts, then, re-use is not as clear-cut an issue as it may appear at first. My suspicion is that organisations that rely especially heavily on personal, unique knowledge (or intellectual capital) should be a lot more relaxed about this than Nick suggests. His view may be more relevant in organisations where repetitive processes generate much more value.

On the just-in-case problem, I think social tools are significantly different from vast information repositories. As Clay Shirky has said, what we think is information overload is actually filter failure. Where we rely solely on controlled vocabularies and classification systems, our capability to filter and search effectively runs out much sooner than it does when we can add personalised tags, comments, trackbacks, knowledge about the author from other sources, and so on. Whereas repositories usually strip context from the information they contain, blogs and other social tools bring their context with them. And, crucially, that context keeps growing.
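To make that contrast concrete, here is a toy sketch. The documents, tags and names are hypothetical, not a real system: a fixed taxonomy gives each item one pigeonhole, while social metadata keeps accreting, so filters can combine facets the original classifier never anticipated.

```python
# A fixed classification: one pigeonhole per document.
taxonomy = {
    "doc1": "litigation",
    "doc2": "property",
}

# Accreted social metadata: tags and commenters keep growing over time.
social_metadata = {
    "doc1": {"tags": {"litigation", "crown-privilege", "deportation"},
             "commenters": {"alice"}},
    "doc2": {"tags": {"property", "fisheries", "ec-regulation"},
             "commenters": {"alice", "bob"}},
}

def filter_by_tags(metadata, wanted):
    """Return items whose accumulated tags include all the wanted ones."""
    return [item for item, meta in metadata.items()
            if wanted <= meta["tags"]]

# The taxonomy alone cannot answer "what touches EC regulation?",
# but the accreted tags can:
print(filter_by_tags(social_metadata, {"ec-regulation"}))
# → ['doc2']
```

The design point is that each new tag or comment adds a usable filter dimension, whereas the single taxonomy entry is fixed at the moment of classification.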

Which brings me, finally, back to my last post. One of the other trackbacks was from another blog asking the question “What is knowledge sharing?” It also picks up on Patrick’s article, and highlights the humanity of knowledge generation.

…we need to think laterally about what we consider to constitute knowledge sharing. This morning I met some friends in an art gallery and, over coffee, we swapped anecdotes, experiences, gripes, ideas and several instances of ‘did you hear about?’ or ‘have you seen?’… I’m not sure any of us would have described the encounter as knowledge exchange but I came away with answers to work-related questions, a personal introduction to a new contact and the germ of a new idea. The meet up was organised informally through several social networks.

The key thing in all of this, for me, is that whether we talk of knowledge sharing, transfer, or management, it only has value if it can result in action: new knowledge generation; new products; ideas; thoughts. But I think that action is more likely if we are open-minded about where it might arise. If we try to predict where it may be, and from which interactions it might come, I think it is most probable that no useful action or value will result in the long term.

How different is the social web (and why)?

When I spoke at the Headshift insight event back in September, one of the points I made was that new forms of interaction on the web might feel subtly different from older ones. A couple of recent blog posts have called this to mind again. So here are some of my thoughts.

Barn Owl in flight

Over at the 3 Geeks… blog, Lisa Salazar argues that there is nothing new about social media.

Social networking isn’t new. It has been around since the very first introduction to the internet. Just like Alexander Graham Bell, the first sign of life on the internet was a communication between UCLA and Stanford computers in 1969. And that certainly was social – the internet was built in response to the threat of the USSR dropping bombs onto the US. Not exactly friendly but certainly social.

Through the internet, I have met people from all around the world. As I like to say on my job, I have traveled “virtually” everywhere.

Like Lisa, I have been online in one way or another for many years, but I feel that things have changed significantly between my early experiences and now. (This may just be a function of who we are — it is quite possible that Lisa invested better in her early online community than I did — so we should just be regarded as two anecdotal data-points.)

It is true that, as Lisa points out, people have built communities using e-mail lists, IRC and Usenet (as well as the closed networks such as Prodigy, AOL, Compuserve, The WELL, CIX, and all the others) since the late 1980s or early 1990s.  Those networks have been used to create content and connect people, just as we do now with the plethora of Web2.0 tools. Where is the difference? I am still trying to work it out.

I used to have an interest in Internet governance. I was a member of the CYBERIA-L mailing list (now apparently defunct [Update April 2015: I have since learned that the list has been resuscitated.]); I spoke at conferences in the US and UK; I wrote journal articles. But I never felt a connection with the governance community in the way that I do with the KM community now. It was as if we were all operating in our own focused silos. (That may also have been a result of the academic ivory towers that many of us inhabited.) I also pursued personal interests in Usenet newsgroups and on CIX. Those activities rarely spilled over into my work interests. I think that partitioning of lives is a hint as to how the old online world differs from the new one.

By contrast, my Web2.0 journey has been more open and fruitful. Apart from a couple of abortive attempts at blogging (I had no real focus, so they withered away very quickly), I started as most people do — reading other blogs and then graduating to comments. Once the comments became longer I felt I had found my voice and it was time to start blogging. At the same time, Facebook and LinkedIn gave me connections with and insights into people on whose blogs I had commented. As people started reacting to my blog posts with their comments and on their own blogs, I found that I was part of a real community. That sense of community has only deepened over time and with more interactions via Twitter and the like. I have even met some people face to face.

The difference between then and now, for me, is that the variety of interactions and ‘places’ where I engage with this community has broken down the silos that I experienced in the past. Because it is impossible not to see more facets of someone’s life and personality in their blogs, comments, tweets and status updates, it becomes easier to see them as real people — not just participants in a mailing list discussion, a conference or a newsgroup. We talk of work-life balance, but as Orson Welles points out early in this interview (from 0:48), there isn’t really a distinction.

Interviewer: Would you say that you live to work or work to live?

Welles: I regard working as part of life. I don’t know how to distinguish between the two; I know that one can and people do. I honestly think the best answer to that question that I can give you is that the two things aren’t separated in my mind.

Interviewer: There are people who devote everything to their work and have no life at all, but you have lived in a big way and you have worked in a big way…

Welles: And I don’t separate them. To me they are all part of the… Work is an expression of life for me.

For many of us, I think this is now true of our interactions with each other via online networks. Earlier this week, John Bordeaux provided a magnificent example of this in his post, “A Year Ago.” This time last year, John was laid off. His reaction was unconventional, but may offer a taste of future convention.

Using online social media tools, I stitched together a loose network of future colleagues and relationships to be tended.  Rather than broadcasting my increasingly urgent need for income, I trusted the network effect would work in time.

And it did.

Today I find myself engaged in meaningful and rewarding work to redesign a failed education system; working alongside leading professionals in innovation, public policy, and social change.

A year ago, I could not predict where I would be today. Such is the nature of complexity and networks. The theory suggested I should place myself in conversations, expand my connections into new networks, and a vocation would emerge. (While I embrace the notion, I hope I never again have to conduct such experiments with my family’s financial health.) I saw the traditional reaction to job loss as creating one-to-one intense conversations trying to match my talents to a company’s need. Instead, I took this path. Which amounted to no path at all, certainly not one any could predict. To paraphrase Mr. Frost, that has made all the difference.

The thing is, we knew that John was going through this. Not from his blog, but from changes in his LinkedIn status, from clues in his tweets. I hope he felt that the support we were able to give (often from a distance) was enough.

Ultimately, I think John’s experience shows that effective participation in online networks allows one to see a more authentic picture of people. Perhaps it is becoming less true that “on the Internet nobody knows you’re a dog.”

Knowing together, better

I am a bit of an e-mail hoarder, so occasionally I go back into the store and find an apparently random message that strikes a new chord. So it was when I stumbled across a message from Kaye Vivian to the ActKM mailing list dating back to July 2008. Her e-mail simply drew attention to an article by Richard McDermott on communities of practice (CoPs). More significantly, Richard had listed six characteristics shared by CoPs that successfully matured into dynamic entities (rather than withering away).

Cloister, Canterbury Cathedral

To date, I have not explored the potential benefits of CoPs for knowledge purposes. Within law firms, self-organised or mandated groups are the norm. At one extreme, there is the practice group or client team, and at the other there may be groups of like-minded individuals with a common interest (such as trainee solicitors) who cluster together for support when necessary. Some of these groups work as CoPs by sharing knowledge and learning incidental to their main purpose. Reading Richard McDermott’s article, however, I thought his conclusion probably had wider resonance than just for CoPs.

So what are Richard’s six characteristics? Kaye’s e-mail referred to a post of Stan Garfield’s in which he summarised this part of the article, but Richard actually started by pointing to factors inhibiting flourishing CoPs:

When starting, communities often need to build momentum as they discover what knowledge is useful to share. Once they’ve been going for a few years, three other problems often inhibit communities’ ability to maintain the spark they had during their early years — loss of momentum, loss of attention and localism.

Once these problems are overcome, six factors are evident in successful CoPs:

Not all communities at mid-life suffer these limitations. Some are vital, full of energy and add value to both their members and the company. The most vital of the communities we reviewed shared six characteristics — clear purpose, active leadership, critical mass of engaged members, sense of accomplishment, high management expectations and real time.

Whilst I have no experience with CoPs, I think these characteristics also hold good for successful collaboration of many different types. For example, organisational wiki use works well and adds value when we see the factors manifested in the following ways.

  1. Clear purpose: A wiki which has a defined purpose (creating a resource, for example, or managing a project) flourishes where unfocussed efforts fail.
  2. Active leadership: As Stuart Mader points out in his book, Wikipatterns, a number of key roles have grown up around good wiki use. One of those is the wiki champion: “A passionate, enthusiastic champion is essential to the success of wiki…”
  3. Critical mass of engaged members: Because of the 90-9-1 principle, a significant number of people is necessary to generate valuable wiki contributions.
  4. Sense of accomplishment: One of the advantages of good wikis over traditional CoPs is that as they grow the contributions of members naturally accrete and can provide a real sense of accomplishment. By the same token, if nothing is happening with the wiki people will see it and are unlikely to be encouraged to turn it round.
  5. High management expectations: Whilst many wikis are established as grass-roots activities, they can still benefit from interest being shown by senior people in the organisation. Whilst there is an argument that Enterprise 2.0 might result in less hierarchical organisations, it is still the case that people respond to traditional management and leadership.
  6. Real time: This is where wikis can score over traditional CoPs. Whereas CoPs may require additional time (McDermott refers to one organisation where there was an expectation that 10% of people’s time was dedicated to community activities), wikis can be the place where some aspects of work actually take place (in preference to e-mail, for example). This success factor is probably better worded as real commitment.
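
The arithmetic behind the 90:9:1 principle mentioned in point 3 is worth making concrete. A minimal sketch in Python (the function name is my own; the ratios are the rule's conventional values, not measurements from any particular organisation):

```python
def participation_breakdown(members: int) -> dict:
    """Split a community into lurkers, occasional editors and active
    creators using the conventional 90:9:1 participation ratios."""
    return {
        "lurkers": round(members * 0.90),   # read but never contribute
        "editors": round(members * 0.09),   # contribute occasionally
        "creators": round(members * 0.01),  # generate most of the content
    }

# A 500-lawyer firm can expect only a handful of regular wiki authors:
print(participation_breakdown(500))
# → {'lurkers': 450, 'editors': 45, 'creators': 5}
```

Which is why a wiki seeded among a dozen enthusiasts rarely takes off: at these ratios there simply aren't enough creators to sustain it.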

And what does success look like? For Richard McDermott, CoPs are successful when they achieve a significant level of influence in the organisation.

But to play this role effectively, communities need to be more than informal discussion groups. They need to be empowered to be full-fledged elements of the organization, legitimately exercising influence without formal authority.

The same is probably true of wikis.

Speaking of social software and KM

Last week, Headshift hosted an “insight event” to showcase the report on social software for law firms written by Penny Edwards and Lee Bryant. I was honoured to be asked to present, along with Sam Dimond of Clifford Chance and Steve Perry of Freshfields.

Nick Holmes wrote a great summary of the event on his blog, Binary Law, and I intended to post the notes for my session here, but Penny has now done a really impressive job of transcribing our three presentations, together with Lee’s opening remarks. I am particularly impressed because she was listening into the event from Amsterdam, and I gather the sound quality was not particularly good.

Penny’s four posts on the Headshift blog are as follows:

As well as the presentations, we had some great questions from the audience and an opportunity for offline social networking. I only wish we could have had longer to discuss all the issues that people raised. Many thanks to Penny for putting the event together, and to Lars Plougmann for hosting it. (By the way, I think the term “insight event” is a really good one.)

Back to basics

Recently I have caught up with two Ur-texts that I really should have read before. The lessons learned are two-fold: the content (in both cases) is still worthy of note, and one should not judge a work by the way it is used.

Recycling in Volterra

In late 1991, the Harvard Business Review published an article by Ikujiro Nonaka containing some key concepts that would be used and abused in the name of knowledge management for the next 18 years (and probably beyond). In “The Knowledge-Creating Company” (reprinted in 2007) Nonaka described a number of practices used by Japanese companies to use their employees’ and others’ tacit knowledge to create new or improved products.

Nonaka starts where a number of KM vendors still are:

…despite all the talk about “brain-power” and “intellectual capital,” few managers grasp the true nature of the knowledge-creating company — let alone know how to manage it. The reason: they misunderstand what knowledge is and what companies must do to exploit it.

Deeply ingrained in the traditions of Western management, from Frederick Taylor to Herbert Simon, is a view of the organisation as a machine for “information processing.” According to this view, the only useful knowledge is formal and systematic — hard (read: quantifiable) data, codified procedures, universal principles. And the key metrics for measuring the value of new knowledge are similarly hard and quantifiable — increased efficiency, lower costs, improved return on investment.

Nonaka contrasts this with an approach that is exemplified by a number of Japanese companies, where managing the creation of new knowledge drives fast responses to customer needs, the creation of new markets and innovative products, and dominance in emergent technologies. In some respects, what he describes presages what we now call Enterprise 2.0 (although, tellingly, Nonaka never suggests that knowledge creation should involve technology):

Making personal knowledge available to others is the central activity of the knowledge-creating company. It takes place continuously and at all levels of the organization. And … sometimes it can take unexpected forms.

One of those unexpected forms is the development of a bread-making machine by the Matsushita Electric Company. This example of tacit knowledge converted into explicit has become unrecognisable through repetition in numerous KM articles, fora, courses, and so on. Critically, there is no actual conversion — the tacit knowledge of how to knead bread dough is not captured as an instruction manual for bread making. What actually happened is that Ikuko Tanaka, a software developer, observed the work of the head baker at the Osaka International Hotel, and her insight was converted into a simple improvement in the way that an existing bread maker kneaded dough prior to baking. The expression of this observation was a piece of explicit knowledge — the design of a new bread maker, to be sold as an improved product.

That is where the critical difference lies. To have any value at all in an organisation, people's tacit knowledge must be able to inform new products, services, or ways of doing business. Until tacit knowledge finds such expression, it is worthless. However, that is not to say that all tacit knowledge must be documented to be useful. That interpretation is a travesty of what Nonaka has to say.

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult to communicate to others. Or, in the words of philosopher Michael Polanyi, “We know more than we can tell.” Tacit knowledge is also deeply rooted in action and in an individual’s commitment to a specific context — a craft or profession, a particular technology or product market, or the activities of a work group or team.

Nonaka then explores the interactions between the two aspects of knowledge: tacit-tacit, explicit-explicit, tacit-explicit, and explicit-tacit. From this he posits what is now known as the SECI model. In this original article, he describes four stages: socialisation, articulation, combination and internalisation. Later, “articulation” became “externalisation.” It is at this stage that technology vendors, and those who allowed themselves to be led by them, decided that tacit knowledge could somehow be converted into explicit knowledge as a business or technology process divorced from context or commitment. This is in direct contrast to Nonaka’s original position.

Articulation (converting tacit knowledge into explicit knowledge) and internalization (using that explicit knowledge to extend one’s own tacit knowledge base) are the critical steps in this spiral of knowledge. The reason is that both require the active involvement of the self — that is, personal commitment. …

Indeed, because tacit knowledge includes mental models and beliefs in addition to know-how, moving from the tacit to the explicit is really a process of articulating one’s vision of the world — what it is and what it ought to be. When employees invent new knowledge, they are also reinventing themselves, the company, and even the world.

The rest of Nonaka’s article is rarely referred to in the literature. However, it contains some really powerful material about the use of metaphor, analogy and mental models to generate new insights and trigger valuable opportunities to articulate tacit knowledge. He then turns to organisational design and the ways in which one should manage the knowledge-creating company.

The fundamental principle of organizational design at the Japanese companies I have studied is redundancy — the conscious overlapping of company information, business activities, and managerial responsibilities. …

Redundancy is important because it encourages frequent dialogue and communication. This helps create a “common cognitive ground” among employees and thus facilitates the transfer of tacit knowledge. Since members of the organization share overlapping information, they can sense what others are struggling to articulate. Redundancy also spreads new explicit knowledge through the organization so it can be internalized by employees.

This silo-busting approach is also at the heart of what has now become known as Enterprise 2.0 — the use of social software within organisations. What Nonaka described as a natural form for Japanese organisations was difficult for Western companies to emulate. The legacy of Taylorism has proved too hard to shake off, and traditional enterprise technology has not helped.

Which is where we come to the second text: Andrew McAfee’s Spring 2006 article in the MIT Sloan Management Review, “Enterprise 2.0: The Dawn of Emergent Collaboration.” This is where the use of Web 2.0 technologies started to hit the mainstream. Reading this for the first time today — already having an understanding and experience of the use of blogs and wikis in the workplace — it was interesting to see a different, almost historical, perspective. One of the most important things, which we sometimes forget, is McAfee’s starting point. He refers to a study of knowledge workers’ practices by Thomas Davenport.

Most of the information technologies that knowledge workers currently use for communication fall into two categories. The first comprises channels — such as e-mail and person-to-person instant messaging — where digital information can be created and distributed by anyone, but the degree of commonality of this information is low (even if everyone’s e-mail sits on the same server, it’s only viewable by the few people who are part of the thread). The second category includes platforms like intranets, corporate Web sites and information portals. These are, in a way, the opposite of channels in that their content is generated, or at least approved, by a small group, but then is widely visible — production is centralized, and commonality is high.

So, what is the problem with this basic dichotomy?

[Davenport’s survey] shows that channels are used more than platforms, but this is to be expected. Knowledge workers are paid to produce, not to browse the intranet, so it makes sense for them to heavily use the tools that let them generate information. So what’s wrong with the status quo?

One problem is that many users aren’t happy with the channels and platforms available to them. Davenport found that while all knowledge workers surveyed used e-mail, 26% felt it was overused in their organizations, 21% felt overwhelmed by it and 15% felt that it actually diminished their productivity. In a survey by Forrester Research, only 44% of respondents agreed that it was easy to find what they were looking for on their intranet.

A second, more fundamental problem is that current technologies for knowledge workers aren’t doing a good job of capturing their knowledge.

In the practice of doing their jobs, knowledge workers use channels all the time and frequently visit both internal and external platforms (intranet and Internet). The channels, however, can’t be accessed or searched by anyone else, and visits to platforms leave no traces. Furthermore, only a small percentage of most people’s output winds up on a common platform.

So the promise of Enterprise 2.0 is to blend the channel with the platform: to use the content of the communication channel to create (almost without the users knowing it) a content-rich platform. McAfee goes on to describe in more detail how this was achieved within some exemplar organisations — notably Dresdner Kleinwort Wasserstein. He also derives a set of key features — Search, Links, Authorship, Tagging, Extensions and Signals (SLATES) — to describe the immanent nature of Enterprise 2.0 applications as distinct from traditional enterprise technology.

What interests me about McAfee’s original article is (a) how little has changed in the intervening three years (thereby undermining the call to the Harvard Business Press to rush his book to press earlier than scheduled), and (b) which of the SLATES elements still persist as critical issues in organisations. Effective search will always be a challenge for organisational information bases — the algorithms that underpin Google are effectively unavailable, so their effect has to be approximated by other means. Tagging is still clearly at the heart of any worthwhile Enterprise 2.0 implementation, but my experience does not convince me that users understand its importance at the outset (or even at all). The element most often missing is “extensions” — few applications deliver the smartness that McAfee sought.

However, the real challenge is to work out the extent to which organisations have really blurred the channel/platform distinction by using Enterprise 2.0 tools. Two things suggest to me that this will be a slow process: e-mail overload is still a significant complaint; and the 90-9-1 rule of participation inequality seems not to be significantly diluted inside the firewall.

Coincidentally, McAfee has posted on his blog today, asking for suggestions for a new article on Enterprise 2.0, as well as explaining some of the delay with his book.

Between now and the publication date the first chapter of the book, which describes its genesis, goals, and structure, is available for download. I’m also going to write an article about Enterprise 2.0 in Harvard Business Review this fall. While I’ve got you here, let me ask a question: what would you like to have covered in the article?  Which topics related to Enterprise 2.0 should it discuss? Leave a comment, please, and let us know — I’d like to crowdsource the article a bit. And if you have any questions or comments about the book, I’d love to hear them.

I have made my suggestions above, Andy. I’ll comment on your blog as well.

First, think…

I wasn’t at the Reboot Britain conference today, but there were some valuable nuggets in the twitterstream for the #rebootbritain hashtag. Of these, Lee Bryant’s reference to Howard Rheingold’s closing keynote resonated most for me.

@hreingold triage skills vital to new world of flow

The most common challenge I see from people about social software, Enterprise 2.0, or whatever you want to call it, is that it looks interesting, but they are busy enough as it is: can’t we do something about information overload? “Where do you find the time to do all this?” I can point to examples where these technologies can save them time (using a wiki instead of e-mail, for example), but these are often seen as problematic for one reason or another.

Wood stack

What Lee has spotted in Howard’s keynote is that people are being faced with a new challenge in life and work, and it probably frightens them.

Up until now, much of the information we need (as well as a huge amount that we don’t need) has been selected by someone else. Whether it is stories in a newspaper, TV programmes on the favourite channel or information within an organisation, someone has undertaken the task of choosing what the audience sees. As a result, we often have to live with things we don’t want. For example, I have little interest in most sports, so all newspapers have a sports section that is too long for my needs. Our tolerance for this redundancy is incredible. But we still resist changing it for a situation in which we can guarantee to see just what we want (and more of it).

According to Wikipedia (and this chimes with other accounts that I have read, so I trust it for now), triage was formalised as a means of dealing with large volumes of battlefield casualties in the First World War. One approach to medical emergencies might be to treat them as they arise, irrespective of their chances of survival. However, doing this is likely to lead to pointless treatment of hopeless cases and to a failure to treat those with a chance of survival in time. The result is a waste of resources and a higher than necessary death rate. Triage means that immediate treatment can be focused on those whose chances of survival are not negligible and where urgency is most important. Triage in medical emergencies is now a highly-developed technique, with incredibly effective results. (However much it may be resented by the walking wounded who are inevitably kept waiting in hospital accident & emergency departments.)

What would triage mean for information consumption? In the first place, it means no filtering before triage. One of the causes of information overload is that traditional selectors (the TV scheduler or news editor) inevitably pay no attention to the personal needs or interests of the audience. How could they? So, unlike the A&E department, we cannot rely on a triage nurse to make our choices for us. Rule zero, then, is that everyone does their own triage.

One of the key things about hospital or battlefield triage is that we don’t waste time with it if there is a clear life-saving need. So rule one of information triage is that anything life-threatening for the organisation or for ourselves needs immediate attention.

After that, we can sit down calmly to review and classify information as it comes in. Rule two: only two questions need to be asked. These are: “is this important to me in my role?” and “does this need attention now, or will its message still be fresh later?”

Taking the answers to these questions together, we should be able to assess the importance and timeliness of anything that comes up. Anything that is time-bound and important needs attention now. Anything that is important but can wait is set aside for a later review. Anything that is not relevant must be junked, however urgent it appears.

The final stage isn’t strictly triage, although it might correspond to a medical decision about who treats a patient. Having decided that a piece of information or an information flow is worthy of attention, we need to decide what to do with it. That is rule three: don’t just read it, do something with it. If information is important, it should prompt action, filing, or onward communication. What form each of those takes is not a question for now, but there is no point paying attention to something if you or your organisation immediately loses the benefit of that attention.
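
The rules above amount to a small decision procedure. A minimal sketch in Python (the `Action` categories and function name are my own illustration, not an established scheme); rule three — doing something with what survives — then applies to anything not junked:

```python
from enum import Enum

class Action(Enum):
    ESCALATE = "handle immediately (life-threatening)"
    ACT_NOW = "important and time-bound: attend to it now"
    DEFER = "important but not urgent: review later"
    JUNK = "not relevant: discard"

def triage(critical: bool, important: bool, time_bound: bool) -> Action:
    # Rule one: anything life-threatening skips triage entirely.
    if critical:
        return Action.ESCALATE
    # Rule two: only two questions — importance and timeliness.
    if important and time_bound:
        return Action.ACT_NOW
    if important:
        return Action.DEFER
    # Not important: junk it, however urgent it appears.
    return Action.JUNK
```

Note that, per rule zero, each person runs this procedure for themselves — there is no triage nurse making the choices on our behalf.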

Information triage is just like medical triage in that it puts thought before action. That is potentially a huge change for people who have been accustomed to taking in pre-digested information flows without any thought, and either acting immediately or not acting at all.

That’s all off the top of my head. Have I missed anything?

Book Review: Generation Blend

I have already voiced my scepticism about Generation Y, so it may seem odd that I chose to buy Rob Salkowitz’s book Generation Blend: Managing Across the Technology Age Gap. However, there is a lot in this book that does not depend on an uncritical acceptance of the “generations” thesis. It provides a sound practical basis for any business that wants to, in Salkowitz’s words, “develop practices and deploy technology to attract, motivate, and empower workers of all ages.”

As one might expect, underpinning Generation Blend is the thesis that there are clear generational (not age-related) differences that affect how people approach and use technology. In this, Salkowitz builds on Neil Howe and William Strauss’s book, Generations: The History of America’s Future, 1584 to 2069. However, generational differences are not the starting point for the book. Instead, Salkowitz begins by showing how technology itself has changed the working environment irrevocably. In doing so, he establishes the purpose of the book: to allow organisations to develop the most suitable strategy to help their people to cope with those changes (and the many more to come).

Organizations invest in succeeding waves of new technology — and thus subject their workers to waves of changes in their lives and workstyles — to increase their productivity and competitiveness. Historically, productivity has increased when new technology replaced labor-intensive processes, first with mechanical machinery, and now electronic information systems. (p. 24)

Dave Snowden has started an interesting analysis of these waves of change, and Andrew McAfee’s research shows that IT makes a difference for organisations. What Salkowitz does in Generation Blend is to provide real, practical, insights into the way in which organisations can make the most of the abilities of all generations when faced with new technology. When he does discuss the generations, it is important to remember that his perspective is entirely a US-centric one. That said, the rest of the book is generally applicable. This is Salkowitz’s strength — he recognises that there are real exceptions to the broad brush of generational study, and his guidance focuses on clear issues with which it is difficult to disagree. As one of the section headings puts it, “software complexity restricts the talent pool,” so the target is to accommodate different generational approaches in order to loosen that restriction. Chapter 3 of the book closes with a set of tables outlining different generational attributes. I found these very useful in that they focused the mind on the behaviours and attitudes affecting people’s approach to technology, rather than as a hard-and-fast description of the different generations.

Salkowitz’s approach can be illuminated by comparing three passages on blogging.

The open, unsupervised quality of blogs can be deeply unsettling to people who have internalized the notion that good information comes only from trusted institutions, credentialed individuals, or valid ideological perspectives. (p. 82)

On the other hand:

Blogs and wikis create an environment where unofficial and uncredentialed contributors stand at eye level with traditionally authoritative sources of knowledge. This is perfectly natural to GenXers, who believe that performance and competence should be the sole criteria for authority. (p. 147)

And, quoting Dave Pollard with approval:

“I’d always expected that the younger and more tech-savvy people in any organization would be able to show (not tell) the older and more tech-wary people how to use new tools easily and effectively. But in thirty years in business, I’ve almost never seen this happen. Generation Millennium will use IM, blogs, and personal web pages (internal or on public sites like LinkedIn, MySpace and FaceBook) whether they’re officially sanctioned or not, but they won’t be evangelists for these tools.” (p. 216)

There is here, I think, a sense of Salkowitz’s desire to engage older workers as well as his concern that unwarranted assumptions about younger people’s affinity with technology could lead businesses towards the wrong courses of action.

At the heart of Generation Blend is a critique of existing technology, in which Salkowitz points out that current business software has a number of common characteristics:

  • It tends to be complex and overladen with features
  • It focuses on efficiency
  • It is driven by the need to perform tasks
  • It supports a work/life balance that is “essentially a one-way flow of work into life” (p. 147)

These characteristics have come about, Salkowitz argues, because the technology has largely been produced by and for programmers whose values and culture:

…independence, obsession with efficiency as a way to save personal time and effort, low priority on interpersonal communication skills, focus on outcomes rather than process (such as meetings or showing up on a regular schedule), seeing risk in a positive light, desire to dominate through competence — sound like the thumbnail descriptions of Generation X tossed out by management analysts. (p. 149)

Since this group is clearly comfortable with technology, and is also increasingly moving into leadership and management roles, Salkowitz provides them with guidance on making technology accessible to older workers and on making the most of the skills and insights of younger workers. He does this in general terms throughout the book, but most convincingly in the final three chapters. Two of these use narrative to show how (a) the fear can be taken out of technology for older people and (b) the younger generation can be involved directly in defining organisational strategy.

In the first of these chapters, Salkowitz describes a non-profit New York initiative, OATS (Older Adults Technology Services), which trains older people in newer technologies, so that they can comfortably move into roles where those skills are needed. OATS has found that understanding the learning style of these people allows them to pick up software skills much more quickly than is commonly assumed.

While younger people learn technology by hands-on experimentation and trial and error, [Thomas] Kamber [OATS founder] and his team find that older learners prefer information in step-by-step instructions and value written documentation. (p. 167)

At the other end of the generational scale, Salkowitz starts with a statement that almost reads like a manifesto:

Millennials may be objects of study, but they are also, increasingly, participants in the dialogue, and it is silly (and rude) for organizations to talk about them as if they are not already in the room. (p. 190)

He goes on to illustrate the point with an account of Microsoft’s Information Worker Board of the Future, a “structured weeklong exercise around the future of work” that the company used to help it understand how its strategy should develop. It was judged a success, bringing new perspectives to the company as well as showing Microsoft to be a thought leader in this area.

…the organizational commitment to engage with Millennials as partners in the formation of a strategic vision can be as valuable as the direct knowledge gained from the engagement. Strategic planning is a crucial discipline for organizations operating in an uncertain world. When it is a closed process, conducted by experts and senior people (who inevitably bring their generational biases with them), it runs a greater risk of missing emergent trends or misjudging the potential for discontinuities that could disrupt the entire global environment. Opening up the planning process to younger perspectives as a matter of course rather than novelty hedges against the risks of generational myopia and also sends a strong positive signal to members of the rising generation. (p. 209)

Generation Blend ends with a clear exposition of the key issues that organisations need to address in order to make the most of their workers of all ages and the technology they use.

Organizations looking to effectively manage across the age gap in an increasingly sophisticated connected information workplace should ask themselves five questions:

  1. Are you clearly explaining the benefits of technology?
  2. Are you providing a business context for your technology policies?
  3. Are you making technology accessible to different workstyles?
  4. Does your organizational culture support your technology strategy?
  5. Are you building bridges instead of walls? (p. 212)

The last two of these are particularly interesting. In discussing organisational culture, Salkowitz includes careful consideration of knowledge management activities, especially using Web 2.0 tools. He is confident that workers of all generations will adapt to this approach to KM at a personal level, but points to real challenges: “[t]he real difficulties… are rooted in the business model and in the way that individual people see their jobs.” (p. 229) For Salkowitz, the solution is for the organisation to make a real and visible investment in knowledge activities — he points to the use of PSLs in UK law firms as one example of this approach. Given the tension between social and market norms that I commented on yesterday, I wonder how far this approach can be pushed successfully.

Running through Generation Blend is a thread of involvement and engagement. Salkowitz consistently advocates management approaches that accommodate different ways of extracting value from technology at work. This thread emerges in the final section of the book as an exhortation to use the best of all generations to work together for the organisation — building bridges rather than walls.

Left to themselves, workers of different ages will apply their own preconceptions and experiences of technology at work, sometimes leading to conflict and misunderstanding when generational priorities diverge. But when management demonstrates a commitment to respecting both the expectations of younger workers and the concerns of more experienced workers around technology, organizations can effectively combine the tech-savvy of the young with the knowledge and wisdom of the old in ways that make the organization more competitive, more resilient to external change, more efficient, and more open. (p. 231)

I think he is right in this, but it will be a challenge for many organizations to do this effectively, especially when they are distracted by seismic changes outside. My gut feeling is that those businesses that work hard at the internal stuff will find that their workforce is better able to deal with those external forces.

Cooking the books

One of the longest-established forms of knowledge activity in law firms is the creation and maintenance of standard or precedent documents. These usually cover the core activities of the firm, and allow lawyers to create the first drafts of client documents in much less time and (assuming they have been well-drafted in the first place) to a higher and more consistent standard than if they were to start with a blank sheet, or a fully-negotiated agreement from an earlier transaction.

When I spoke on Web 2.0 and KM at a conference last November, I likened a law firm’s precedent collection to the domestic KM system represented by a set of recipe books. We tend to collect recipes for dishes that we already like or that look interesting on the page. Whether we use the books religiously or not depends on a number of things:

  • How confident we are as cooks
  • Whether we cook according to what is in the cupboard, or shop to fit a recipe
  • How important it is to get something right (on a big occasion, for example)

In the picture above, one of the books is so well-used that it has lost its spine. That is our copy of the book known generally (and affectionately) in British households as “Delia”: the Complete Cookery Course, by Delia Smith. On the Learning to Fly mailing list this week, “Delia” was suggested as an example of a knowledge asset (defined as “a compilation of know-how, packaged in such a way as to provide valuable reference material that others can translate into tacit knowledge”).

There was some disagreement about this — perhaps it is a better example of an information asset. For me, it was a reminder of a comment of Dave Snowden’s, comparing a mere user of recipe books with a true chef:

There is a huge difference between a chef and a user of recipe books. The recipe book user (for which read the manufacturing model of consultancy) uses best practice to assemble the same ingredients in the same context to produce the same meal, time and time again. If they come into your kitchen, it will have to be re-engineered to conform with the requirements of the recipe before they start to work (and you will pay in many ways for that). The Chef in contrast can work with whatever ingredients and utensils you happen to have to hand and create a great meal.

In my presentation, I contrasted the traditional precedent/recipe book KM approach with the use of Web 2.0 tools to expose knowledge that the firm did not know it had or to create knowledge from interactions that would be impossible to create otherwise. I think this model is closer to Dave Snowden’s chef, in that it makes the most of what is in the cupboard. For a law firm, this approach means that it is possible to be more adaptable to what clients need, to changes in legal or market practice, or to the economy.

But… the chef needs to start somewhere. Recipes are necessary. We just need to be careful. As Matthew Fort put it:

Just as we have delegated most of our food decision-making to supermarkets, so we have bowed our heads to the recipe. We can’t get through cooking life without them. We’ve come to treat recipes like crutches, to help us limp through the process of cooking a dish, rather than relying on our own experience and judgement.

Nigel Slater is right when he writes in his introduction to February’s Observer Food Monthly that the purpose of a recipe is to instil confidence, to inspire and allow ideas to be shared.

A view of recipes as inviolate treats them as the culinary equivalent of chemical formulae. Tamper with the ingredients or the proportions and you tamper with something precise and ordered. Who knows what chaos and disaster lies on the other side of leaving out the celery?

It’s bollocks, of course. You’re just cooking something a little different. It’s not going to alter the course of the universe or cause disgrace at the dinner table.

The same can be said of precedents. They contain the essential ingredients to ensure that a basic agreement is sound, but a confident lawyer will have learnt over time what can be added and what left out to make the final product just what the client ordered. There are many ways of building that confidence. Experience, sound basic documents, mentoring, coaching, insights provided by colleagues through training and by intelligent use of blogs and wikis: these and others are all important tools in the development of confident, inspired, idea-sharing and creative lawyers.

That is why all of these are fundamental to KM in law firms. Our job is to blend the ingredients in just the right way to meet the needs of our clients, their markets, our lawyers, and the firm. This will inevitably be a constantly-changing recipe — the basic elements are all themselves changing.