Archive for the 'Technology' Category

Hiding behind technology: what kind of a job is that?

I think our relationship with technology is detracting from our capacity to work effectively. In order to change this, we need to reassert what it is that we actually do when we come to work.

One of the staples of TV drama is the workplace; another is espionage. The BBC is currently showing a short series, The Hour, in which those elements are combined with a touch of social/political comment in a not-so-distant historical setting — a BBC current affairs programme (The Hour of the title) in 1956, as Hungary is invaded by the Soviet Union and Egypt precipitates the Suez crisis. It isn’t the best thing that the BBC have done — Mad Men beats it for verisimilitude, nothing can touch Tinker, Tailor, Soldier, Spy for British cold war espionage drama, and at least one person who was in TV in the 1950s is adamant that its representation of news broadcasting is a long way from the reality. That said, it is relaxing summer viewing.

One of the things that struck me, watching the most recent episode, is that everyone is intimately engaged with the objects of their work. Cameramen wield cameras; editors cut film; reporters write words (with a pen or pencil) on paper. And they do one thing at once. During the episode, the producer of The Hour is admonished by her superior (for having an affair with the programme’s presenter). As she enters the room, he is making notes in a file. He closes it while berating her for putting the programme, and her career, at risk. When finished, he returns to his paperwork. There is no doubt at any point during this sequence as to his focus, his priorities or his wider role.

I think we have lost that clarity. As I look around me, in all sorts of workplaces, there is little or no distinction in the environments inhabited by people who actually do very varied jobs. Generally, it looks like we all work with computers. People sit with a large flat surface in front of them, which is dominated by a box filled with electronics, umbilically attached via (in my case) ten cables to a variety of other bits of electronics, to power and to a wider network. One or two of those other pieces of hardware are really intrusive. The screens we work at (I have two) are our windows into the material that we produce — documents, emails, spreadsheets — to the information we consume, and to our connections with other people. Physically, though, they do little to help us connect with the people around us. In my case, the screens sit between me and the rest of the occupants of my working room. We all sit in a group, facing each other, but our screens act as a barrier between our working environments. When we converse, we have to crane round the barriers, and we are easily distracted from the conversation by things that happen on the screens.

But if you asked the average law firm employee (whether a lawyer or not) what they do every day, very few would respond that they work with computers. They would speak in terms of managing teams, delivering quality advice to clients, supporting the wider business with training, information or know-how. Some of our IT colleagues might agree that they do work with computers, but some would claim instead that their role is to enhance the firm’s effectiveness and that computers are just the tools by which that is achieved. That is consistent with research conducted by Andrew McAfee, for example. At an organisational level, then, technology improves performance. However, it is also well-observed that many forms of technology, inappropriately used, can distract people and reduce their personal effectiveness. That is manifest in complaints about information overload, email management, social media at work, and so on.

The problem is that, through this box and its two big screens, I have access to absolutely everything I need — the software tools, the online information, the worldwide contacts — to do my job. Unfortunately, because everything is in the same place, it is hard to create clear boundaries between all these things. Outlook is open, so I see when email arrives even though I am working on a document. When I am focusing on an email on one project, it is sitting next to one on a different topic, so it is practically impossible not to skip to that topic before I am actually ready. We can discipline ourselves, but that discipline itself takes effort, and so we end up less effective.

In some organisations, the technology is configured to provide access just to the tools people need. This is typically the case in call centre environments, for example. I think this only really works when people are working through clearly defined processes. As soon as a degree of creativity is required, or where the information needs of a role are emergent, bounded technology starts to fail.

Instead, I think each of us needs to understand exactly what we need from the technology, to create a clear path to that, and to take steps to exclude the less relevant stuff.

My current role requires me to take responsibility for a group of people who have not previously thought of themselves as a single team. I shouldn’t do that from a desk which is at a significant distance from many of them. The technology may fool me into thinking that I am bridging that distance by sending emails and writing documents, but I am sure that isn’t really the case. We have technology to allow me to divest myself of the big box and its screens. I am seriously considering doing just that — doing my job, rather than working with computers.

 

The nature of the firm, and why it matters

Jordan Furlong‘s justified question, “Why do law firms exist?” is something that isn’t just relevant to partners (or potential investors in firms). Those who support the core functions of the firm need to be aware of its implications. I’ll come back to Jordan’s question, but first I want to reflect on something else.

Thanks to the generosity of Headshift, I was able to attend the Dachis Group’s London Social Business Summit at the end of March. One of the most interesting sessions that day was the presentation by Dave Gray of XPLANE. Dave outlined his current thinking about the nature of the company, which can be found summarised in the initial post on his new site, The Connected Company.

Dave is concerned about the short life span of the average company:

In a recent talk, John Hagel pointed out that the average life expectancy of a company in the S&P 500 has dropped precipitously, from 75 years (in 1937) to 15 years in a more recent study. Why is the life expectancy of a company so low? And why is it dropping?

He is also worried about their productivity:

A recent analysis in the CYBEA Journal looked at profit-per-employee at 475 of the S&P 500, and the results were astounding: As you triple the number of employees, their productivity drops by half (Chart here).

This “3/2 law” of employee productivity, along with the death rate for large companies, is pretty scary stuff. Surely we can do better?

I believe we can. The secret, I think, lies in understanding the nature of large, complex systems, and letting go of some of our traditional notions of how companies function.

The largest complex system that still seems to work is the city.

Cities aren’t just complex and difficult to control. They are also more productive than their corporate counterparts. In fact, the rules governing city productivity stand in stark contrast to the ominous “3/2 rule” that applies to companies. As companies add people, productivity shrinks. But as cities add people, productivity actually grows.

A study by the Federal Reserve Bank of Philadelphia found that as the working population in a given area doubles, productivity (measured in this case by the rate of invention) goes up by 20%. This finding is borne out by study after study. If you’re interested in going deeper, take a look at this recent New York Times article: A Physicist Solves the City.
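As a back-of-the-envelope check (my own arithmetic, not a model taken from the studies Dave cites), those two claims imply very different scaling exponents for productivity per person:

```python
import math

# The company "3/2 law": tripling headcount roughly halves productivity per employee.
company_exponent = math.log(0.5) / math.log(3)   # ~ -0.63

# The city finding: doubling the working population lifts productivity per head by ~20%.
city_exponent = math.log(1.2) / math.log(2)      # ~ +0.26

def relative_productivity(scale, exponent):
    """Productivity per person after the population grows by `scale` (1.0 = the baseline)."""
    return scale ** exponent

# Grow each tenfold and compare per-person productivity with the baseline:
print(relative_productivity(10, company_exponent))  # ~0.23: under a quarter of baseline
print(relative_productivity(10, city_exponent))     # ~1.83: nearly double the baseline
```

On that rough reading, the gap between the two regimes compounds very quickly as organisations grow.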

Drawing on a study of long-lived successful companies commissioned by Shell Oil, Dave spots three characteristics of those companies also shared by cities:

Ecosystems: Long-lived companies were decentralized. They tolerated “eccentric activities at the margins.” They were very active in partnerships and joint ventures. The boundaries of the company were less clearly delineated, and local groups had more autonomy over their decisions, than you would expect in the typical global corporation.

Strong identity: Although the organization was loosely controlled, long-lived companies were connected by a strong, shared culture. Everyone in the company understood the company’s values. These companies tended to promote from within in order to keep that culture strong. Cities also share this common identity: think of the difference between a New Yorker and a Los Angelino, or a Parisian, for example.

Active listening: Long-lived companies had their eyes and ears focused on the world around them and were constantly seeking opportunities. Because of their decentralized nature and strong shared culture, it was easier for them to spot opportunities in the changing world and act, proactively and decisively, to capitalize on them.

The whole post is worth reading and reflecting on. Dave’s prescription for success, for companies to be more like cities, is to shun divisional structures, and to build on networks and connections instead. This has been refined in a more recent post into a ‘podular’ system.

A pod is a small, autonomous unit that is enabled and empowered to deliver the things that customers value.

By value, I mean anything that’s a part of a service that delivers value, even though the customer may not see it. For example, in a construction firm, the activities valued by customers are those that are directly related to building. The accounting department of a construction firm is not part of the value delivery system, it’s a support team. But in an accounting firm, any activity related to accounting is part of the customer value delivery system.

There’s a reason that pods need to focus on value-creating activities rather than support activities. Support activities might need to be organized differently.

This idea appears to be closely related to Steve Denning’s notion of Radical Management, as described in his latest book. It also reflects the way that some professional service firms organise themselves. That’s what brings us back to Jordan Furlong’s question.

Why do law firms exist? Or, more properly, why should law firms continue to exist? (One important reason why they exist is that their history brought us to this point. What might happen to them in the future is actually more interesting.)

Jordan’s post starts with Ronald Coase, but also points to a number of ways in which law firms might not meet Coase’s standards.

Companies exist, therefore, because they:

  • reduce transaction costs,
  • build valuable culture,
  • organize production,
  • assemble collective knowledge, and
  • spur innovation.

So now let’s take a look at law firms. I don’t think it would be too huge a liberty to state that as a general rule, law firms:

  • develop relatively weak and fragmented cultures,
  • manage production and process indifferently,
  • assign and perform work inefficiently,
  • share knowledge haphazardly and grudgingly, and
  • display almost no interest in innovation.

That’s an inventory of defects that would make Ronald Coase wonder exactly what it is that keeps law firms together as commercial entities.

Worse than that, Jordan points to a range of recent commentaries suggesting that things aren’t getting any better. I think he is correct. In fact, it is interesting to note that John Roberts spotted the germ of the problem in his 2004 book, The Modern Firm.

Many authors, including Ronald Coase and Herbert Simon, have identified the essential nature of the firm as the reliance on hierarchic, authority relations to replace the inherent equality among participants that marks market dealings. When you join a firm, you accept the right of the executives and their delegates to direct your behaviour, at least over a more-or-less commonly understood range of activities. …

Others … have challenged this view. They argue that any appearance of authority in the firm is illusory. For them, the relationship between employer and employee is completely parallel to that between customer and butcher. In each case, the buyer (of labor services or meat) can tell the seller what is wanted on a particular day, and the seller can acquiesce and be paid, or refuse and be fired. For these scholars, the firm is simply “a nexus of contracts” — a particularly dense collection of the sort of arrangements that characterise markets.

While there are several objections to this argument, we focus on one. It is that, when a customer “fires” a butcher, the butcher keeps the inventory, tools, shop, and other customers she had previously. When an employee leaves a firm, in contrast, she is typically denied access to the firm’s resources. The employee cannot conduct business using the firm’s name; she cannot use its machinery or patents; and she probably has limited access to the people and networks in the firm, certainly for commercial purposes and perhaps even socially. (The Modern Firm, pp.103-4)

The benefits Roberts identifies are almost always missing in a law firm. The firm’s name may be less significant than the lawyer’s, and there is little in the way of machinery or patents. In the seven years since the book was published, access to networks and people has become far more straightforward, thanks to developments in social software and similar technologies.

Joining Roberts’s insights with those of Dave Gray and Jordan Furlong, I think it is likely that we will see much more fluid structures in law firms in coming years. Dave Gray’s podular arrangement need not be restricted to one organisation — what is to stop clients creating their own pods for specific projects, drawing together the good lawyers from a variety of firms? Could the panel arrangement now commonly in use by larger companies be a Trojan horse to allow them to pick off key lawyers whenever they need them? Technology is only going to make that easier.

So that leaves the support functions. In Dave Gray’s podular model, support is provided by a backbone, or platform.

Podular platform

For a podular system to work, cultural and technical standards are imperative. This means that a pod’s autonomy does not extend to choices in shared standards and protocols. This kind of system needs a strong platform that clearly articulates those standards and provides a mechanism for evolving them when necessary.

For small and large companies alike, the most advantageous standards are those that are most widely adopted, because those standards will allow you to plug in more easily to the big wide world – and the big wide world always offers more functionality, better and more cheaply than you can build it yourself. Platform architecture is about coordination and consistency, so the best way to organize it may not be podular. When it comes to language, protocols, culture and values, you don’t want variability, you want consistency. Shared values is one of the best ways to ensure consistent behavior when you lack a formal hierarchy. Consistency in standards is an absolute requirement if you want to enable autonomous units.

Interestingly, there is often little variation between different law firms in terms of their technical standards. In some practice areas, these are dictated by external agencies (courts, industry associations, etc.), whilst in others they converge because of intervention by common suppliers (in the UK, many firms use know-how and precedents provided by PLC) or simply the fact that in order to do their job lawyers have to share their basic knowledge (first-draft documents often effectively disclose a firm’s precedents to their competitors). It is a small step to a more generally accepted foundation for legal work.

Will clients push for this? Would they benefit from some form of crowd-sourced backbone to support lawyers working for them in a podular fashion? Time will tell, but don’t wait for the train to leave the station before you decide to board it.

Now and then

A couple of days ago, Patrick Lambe posted a really thoughtful piece considering the implications of heightened awareness from the new generation of social software tools as opposed to the traditional virtues of long-term information storage and access. If you haven’t read it, do so now. (Come back when you have finished.)

Laid down

The essence of Patrick’s piece is that when we focus our attention on the here and now (through Twitter or enterprise micro-blogging, for example), we forget to pay attention to the historically valuable information that has been archived away. This is not a problem with technology. He points to interesting research on academics’ use of electronic resources and their citation patterns.

How would online access influence knowledge discovery and use? One of the hypotheses in that research was that “online provision increases the distinct number of articles cited and decreases the citation concentration for recent articles, but hastens convergence to canonical classics in the more distant past.”

In fact, the opposite effect was observed.

As deeper backfiles became available, more recent articles were referenced; as more articles became available, fewer were cited and citations became more concentrated within fewer articles. These changes likely mean that the shift from browsing in print to searching online facilitates avoidance of older and less relevant literature. Moreover, hyperlinking through an online archive puts experts in touch with consensus about what is the most important prior work—what work is broadly discussed and referenced. … If online researchers can more easily find prevailing opinion, they are more likely to follow it, leading to more citations referencing fewer articles. … By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly.

Now this thinning out of long term memory (and the side effect of instant forgettability for recent work that does not attract fast consensus) is observed here in the relatively slow moving field of scholarly research. But I think there’s already evidence (and Scoble seems to sense this) that exactly the same effects occur when people and organisations in general get too-fast and too-easy access to other people’s views and ideas. It’s a psychosocial thing. We can see this in the fascination with ecologies of attention, from Tom Davenport to Chris Ward to Seth Godin. We can also see it in the poverty of attention that enterprise 2.0 pundits give to long term organisational memory and recordkeeping, in the longer term memory lapses in organisations that I have blogged about here in the past few weeks…

Jack Vinson adds another perspective on this behaviour in a post responding to Patrick’s.

I see another distinction here.  The “newer” technologies are generally about user-engagement and creation, whereas the “slower” methods are more focused on control and management activities much more so than the creation.  Seen in this light, these technologies and processes spring from the situation where writing things down was a time-consuming process.  You wanted to have it right, if you went to that much effort.  Unfortunately, the phrase “Document management is where knowledge goes to die” springs to mind.

In knowledge management, we are trying to combine the interesting knowledge that flows between people in natural conversation as well as the “hard knowledge” of documented and proven ideas and concepts.  KM has shown that technology just can’t do everything (yet?) that humans can do.  As Patrick says, technology has been a huge distraction to knowledge management.

I think Jack’s last comment is essential. What we do is a balance between the current flow and the frozen past. What I find fascinating is that until now we have had few tools to help us with the flow, whereas the databases, archives, taxonomies and repositories of traditional KM and information management have dominated the field. I think Patrick sounds an important warning bell. We should not ignore it. But our reaction shouldn’t be to reverse away from the interesting opportunities that new technologies offer.

It’s a question (yet again) of focus. Patrick opens his post with a complaint of Robert Scoble’s.

On April 19th, 2009 I asked about Mountain Bikes once on Twitter. Hundreds of people answered on both Twitter and FriendFeed. On Twitter? Try to bundle up all the answers and post them here in my comments. You can’t. They are effectively gone forever. All that knowledge is inaccessible. Yes, the FriendFeed thread remains, but it only contains answers that were done on FriendFeed and in that thread. There were others, but those other answers are now gone and can’t be found.

Yes, Twitter’s policy of deleting old tweets is poor, but even if they archived everything, the value of that archive would be minimal. Much of what I see on Twitter is related to the here and now. It is the ideal place to ask the question, “I’m looking at buying a mountain bike. For $1,000 to $1,500 what would you recommend?” That was Scoble’s question, and it is time-bound. Cycle manufacturers change their offering on a seasonal and annual basis. The cost of those cycles also changes regularly. The answer to that question would be different in six months’ time. Why worry about storing that in an archive?

Knowledge in law firms is a curious blend of the old and the new. Sometimes the law that we deal with dates back hundreds of years. It is often essential to know how a concept has been developed over an extended period by the courts. The answer to the question “what is the current position on limitations of liability in long-term IT contracts?” is a combination of historic research going back to cases from previous centuries and up to the minute insight from last week’s negotiations on a major outsourcing project for a client. It is a real combination of archived information and current knowledge. We have databases and law books to help us with the archived information. What we have been lacking up until recently is an effective way of making sure that everyone has access to the current thinking. As firms become bigger and more scattered (across the globe, in some cases) making people aware of what is happening across the firm has become increasingly difficult.

Patrick’s conclusion is characteristically well expressed.

So while at the level of technology adoption and use, there is evidence that a rush toward the fast and easy end of the spectrum places heavy stresses on collective memory and reflection, at the same time, interstitial knowledge can also maintain and connect the knowledge that makes up memory. Bipolarity simply doesn’t work. We have to figure out how to see and manage our tools and our activities to satisfy a balance of knowledge needs across the entire spectrum, and take a debate about technology and turn it into a dialogue about practices. We need to return balance to the force.

That balance must be at the heart of all that we do. And the point of balance will depend very much on the demands of our businesses as well as our interest in shiny new toys. Patrick is right to draw our attention to the risks attendant on current awareness, but memory isn’t necessarily all it is cracked up to be. We should apply the same critical eye to everything that comes before us — how does this information (or class of information) help me with the problems that I need to solve? The answer will depend heavily on your organisational needs.

Back to basics

Recently I have caught up with two Ur-texts that I really should have read before. The lessons learned are two-fold: the content (in both cases) is still worthy of note, and one should not judge a work by the way it is used.

Recycling in Volterra

In late 1991, the Harvard Business Review published an article by Ikujiro Nonaka containing some key concepts that would be used and abused in the name of knowledge management for the next 18 years (and probably beyond). In “The Knowledge-Creating Company” (reprinted in 2007) Nonaka described a number of practices used by Japanese companies to use their employees’ and others’ tacit knowledge to create new or improved products.

Nonaka starts where a number of KM vendors still are:

…despite all the talk about “brain-power” and “intellectual capital,” few managers grasp the true nature of the knowledge-creating company — let alone know how to manage it. The reason: they misunderstand what knowledge is and what companies must do to exploit it.

Deeply ingrained in the traditions of Western management, from Frederick Taylor to Herbert Simon, is a view of the organisation as a machine for “information processing.” According to this view, the only useful knowledge is formal and systematic — hard (read: quantifiable) data, codified procedures, universal principles. And the key metrics for measuring the value of new knowledge are similarly hard and quantifiable — increased efficiency, lower costs, improved return on investment.

Nonaka contrasts this with an approach that is exemplified by a number of Japanese companies, where managing the creation of new knowledge drives fast responses to customer needs, the creation of new markets and innovative products, and dominance in emergent technologies. In some respects, what he describes presages what we now call Enterprise 2.0 (although, tellingly, Nonaka never suggests that knowledge creation should involve technology):

Making personal knowledge available to others is the central activity of the knowledge-creating company. It takes place continuously and at all levels of the organization. And … sometimes it can take unexpected forms.

One of those unexpected forms is the development of a bread-making machine by the Matsushita Electric Company. This example of tacit knowledge converted into explicit has become unrecognisable in its repetition in numerous KM articles, fora, courses, and so on. Critically, there is no actual conversion — the tacit knowledge of how to knead bread dough is not captured as an instruction manual for bread making. What actually happens is that the insight gained by the software developer Ikuko Tanaka by observing the work of the head baker at the Osaka International Hotel was converted into a simple improvement in the way that an existing bread maker kneaded dough prior to baking. The expression of this observation was a piece of explicit knowledge — the design of a new bread maker, to be sold as an improved product.

That is where the critical difference is. To have any value at all in an organisation, people’s tacit knowledge must be able to inform new products, services, or ways of doing business. Until tacit knowledge finds such expression, it is worthless. However, that is not to say that all tacit knowledge must be documented to be useful. That interpretation is a travesty of what Nonaka has to say.

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult to communicate to others. Or, in the words of philosopher Michael Polanyi, “We know more than we can tell.” Tacit knowledge is also deeply rooted in action and in an individual’s commitment to a specific context — a craft or profession, a particular technology or product market, or the activities of a work group or team.

Nonaka then explores the interactions between the two aspects of knowledge: tacit-tacit, explicit-explicit, tacit-explicit, and explicit-tacit. From this he posits what is now known as the SECI model. In this original article, he describes four stages: socialisation, articulation, combination and internalisation. Later, “articulation” became “externalisation.” It is at this stage that technology vendors, and those who allowed themselves to be led by them, decided that tacit knowledge could somehow be converted into explicit as a business or technology process divorced from context or commitment. This is in direct contrast to Nonaka’s original position.

Articulation (converting tacit knowledge into explicit knowledge) and internalization (using that explicit knowledge to extend one’s own tacit knowledge base) are the critical steps in this spiral of knowledge. The reason is that both require the active involvement of the self — that is, personal commitment. …

Indeed, because tacit knowledge includes mental models and beliefs in addition to know-how, moving from the tacit to the explicit is really a process of articulating one’s vision of the world — what it is and what it ought to be. When employees invent new knowledge, they are also reinventing themselves, the company, and even the world.

The rest of Nonaka’s article is rarely referred to in the literature. However, it contains some really powerful material about the use of metaphor, analogy and mental models to generate new insights and trigger valuable opportunities to articulate tacit knowledge. He then turns to organisational design and the ways in which one should manage the knowledge-creating company.

The fundamental principle of organizational design at the Japanese companies I have studied is redundancy — the conscious overlapping of company information, business activities, and managerial responsibilities. …

Redundancy is important because it encourages frequent dialogue and communication. This helps create a “common cognitive ground” among employees and thus facilitates the transfer of tacit knowledge. Since members of the organization share overlapping information, they can sense what others are struggling to articulate. Redundancy also spreads new explicit knowledge through the organization so it can be internalized by employees.

This silo-busting approach is also at the heart of what has now become known as Enterprise 2.0 — the use of social software within organisations. What Nonaka described as a natural form for Japanese organisations was difficult for Western companies to emulate. The legacy of Taylorism has proved too hard to shake off, and traditional enterprise technology has not helped.

Which is where we come to the second text: Andrew McAfee’s Spring 2006 article in the MIT Sloan Management Review: “Enterprise 2.0: The Dawn of Emergent Collaboration.” This is where the use of Web 2.0 technologies started to hit the mainstream. In reading this for the first time today — already having an understanding and experience of the use of blogs and wikis in the workplace — it was interesting to see a different, almost historical, perspective. One of the most important things, which we sometimes forget, is McAfee’s starting point. He refers to a study of knowledge workers’ practices by Thomas Davenport.

Most of the information technologies that knowledge workers currently use for communication fall into two categories. The first comprises channels — such as e-mail and person-to-person instant messaging — where digital information can be created and distributed by anyone, but the degree of commonality of this information is low (even if everyone’s e-mail sits on the same server, it’s only viewable by the few people who are part of the thread). The second category includes platforms like intranets, corporate Web sites and information portals. These are, in a way, the opposite of channels in that their content is generated, or at least approved, by a small group, but then is widely visible — production is centralized, and commonality is high.

So, what is the problem with this basic dichotomy?

[Davenport's survey] shows that channels are used more than platforms, but this is to be expected. Knowledge workers are paid to produce, not to browse the intranet, so it makes sense for them to heavily use the tools that let them generate information. So what’s wrong with the status quo?

One problem is that many users aren’t happy with the channels and platforms available to them. Davenport found that while all knowledge workers surveyed used e-mail, 26% felt it was overused in their organizations, 21% felt overwhelmed by it and 15% felt that it actually diminished their productivity. In a survey by Forrester Research, only 44% of respondents agreed that it was easy to find what they were looking for on their intranet.

A second, more fundamental problem is that current technologies for knowledge workers aren’t doing a good job of capturing their knowledge.

In the practice of doing their jobs, knowledge workers use channels all the time and frequently visit both internal and external platforms (intranet and Internet). The channels, however, can’t be accessed or searched by anyone else, and visits to platforms leave no traces. Furthermore, only a small percentage of most people’s output winds up on a common platform.

So the promise of Enterprise 2.0 is to blend the channel with the platform: to use the content of the communication channel to create (almost without the users knowing it) a content-rich platform. McAfee goes on to describe in more detail how this was achieved within some exemplar organisations — notably Dresdner Kleinwort Wasserstein. He also derives a set of key features (Search, Links, Authorship, Tagging, Extensions and Signals, or SLATES) to describe the immanent nature of Enterprise 2.0 applications as distinct from traditional enterprise technology.

What interests me about McAfee’s original article is (a) how little has changed in the intervening three years (thereby undermining the call to the Harvard Business Press to rush his book to press earlier than scheduled), and (b) which of the SLATES elements still persist as critical issues in organisations. Effective search will always be a challenge for organisational information bases — the algorithms that underpin Google are effectively unavailable, and so their effect has to be approximated by other means. Tagging is still clearly at the heart of any worthwhile Enterprise 2.0 implementation, but in my experience it is not clear that users understand the importance of this at the outset (or even at all). The bit that is often missing is “extensions” — few applications deliver the smartness that McAfee sought.
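For illustration, the Search, Links and Tagging corners of SLATES can be sketched in a few lines. This is my own toy illustration, not McAfee’s code or any particular product’s design: documents carry freely chosen tags, inbound links act as a crude ranking signal, and search combines the two.

```python
from collections import defaultdict

class TinyEnterpriseIndex:
    """Toy sketch of the Search/Links/Tagging parts of SLATES (not a real product)."""

    def __init__(self):
        self.tags = defaultdict(set)      # tag -> set of document ids
        self.inbound = defaultdict(int)   # document id -> number of inbound links

    def add_document(self, doc_id, tags, links_to=()):
        for tag in tags:
            self.tags[tag.lower()].add(doc_id)
        for target in links_to:
            self.inbound[target] += 1     # each link acts as a vote for the target

    def search(self, *query_tags):
        """Documents matching any query tag, ranked by tag overlap, then inbound links."""
        matches = defaultdict(int)
        for tag in query_tags:
            for doc_id in self.tags[tag.lower()]:
                matches[doc_id] += 1
        return sorted(matches, key=lambda d: (matches[d], self.inbound[d]), reverse=True)

index = TinyEnterpriseIndex()
index.add_document("wiki/outsourcing-checklist", ["outsourcing", "IT", "contracts"])
index.add_document("blog/limitation-of-liability", ["contracts", "liability"],
                   links_to=["wiki/outsourcing-checklist"])
print(index.search("contracts", "outsourcing"))
```

Even in this toy form, the point stands: the ranking is only as good as the tagging and linking behaviour of the people using it, which is exactly where adoption tends to falter.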

However, the real challenge is to work out the extent to which organisations have really blurred the channel/platform distinction by using Enterprise 2.0 tools. Two things suggest to me that this will not be a quick process: e-mail overload is still a significant complaint; and the 90-9-1 rule of participation inequality seems not to be significantly diluted inside the firewall.

Coincidentally, McAfee has posted on his blog today, asking for suggestions for a new article on Enterprise 2.0, as well as explaining some of the delay with his book.

Between now and the publication date the first chapter of the book, which describes its genesis, goals, and structure, is available for download. I’m also going to write an article about Enterprise 2.0 in Harvard Business Review this fall. While I’ve got you here, let me ask a question: what would you like to have covered in the article?  Which topics related to Enterprise 2.0 should it discuss? Leave a comment, please, and let us know — I’d like to crowdsource the article a bit. And if you have any questions or comments about the book, I’d love to hear them.

I have made my suggestions above, Andy. I’ll comment on your blog as well.

We are all in this together

A couple of links to start with: John Stapp and “Has ‘IT’ Killed ‘KM’?”


I don’t have much truck with heroes. Many people do great things, in the public eye and otherwise, and it seems invidious to single certain individuals out mainly because they are better known than others who are equally worthy of credit. However, I make an exception for John Stapp.

Every time you get into a car and put on a seat belt (whether required to by law or not), you owe a debt to Dr Stapp. As a doctor in the US Air Force, he took part in experiments on human deceleration in the late 1940s. During the Second World War it had been assumed that the maximum tolerable human deceleration was 18G (that is, 18 times the force of gravity at sea level), and that death would occur above that level. The Air Force wanted to test whether this was really true, and so a research project was set up. In order to test the hypothesis, an anthropomorphic dummy was to be shot down a test track and abruptly brought to a halt. Measuring equipment would be used to gauge the effect of the deceleration on the dummy. An account of the project is provided in the Annals of Improbable Research. That account indicates that Stapp had little confidence in the dummy.

While the brass assigned a 185-pound, absolutely fearless, incredibly tough, and altogether brainless anthropomorphic dummy — known as Oscar Eightball — to ride the Gee Whiz, David Hill remembers Stapp had other ideas. On his first day on site he announced that he intended to ride the sled so that he could experience the effects of deceleration first-hand. It was a statement that Hill and everyone else found shocking. “We had a lot of experts come out and look at our situation,” he remembers. “And there was a person from M.I.T. who said, if anyone gets 18 Gs, they will break every bone in their body. That was kind of scary.”
But the young doctor had his own theories about the tests and how they ought to be run, and his nearest direct superiors were over 1000 miles away. Stapp’d done his own calculations, using a slide rule and his knowledge of physics and human anatomy, and concluded that the 18 G limit was sheer nonsense. The true figure he felt might be twice that if not more.

In the event, Oscar the dummy was used merely to test the efficacy of the test track and the ballistic sled on which his seat was first accelerated and then decelerated. Once that was done, testing could start.

Finally in December 1947 after 35 test runs, Stapp got strapped into the steel chariot and took a ride. Only one rocket bottle was fired, producing a mere 10 Gs of force. Stapp called the experience “exhilarating.” Slowly, patiently he increased the number of bottles and the stopping power of the brakes. The danger level grew with each passing test but Stapp was resolute, Hill says, even after suffering some bad injuries. And within a few months, Stapp had not only subjected himself to 18 Gs, but to nearly 35. That was a stunning figure, one that would forever change the design of airplanes and pilot restraints.

The initial tests were done with the subject (not always Stapp) facing backwards. Later on, forward-facing tests were done as well. Over the period of the research, Stapp was injured a number of times. Many of these injuries had never been seen before — nobody had been subjected to such extreme forces. Some were more mundane — he broke his wrist twice; on one occasion resetting the fracture himself as he walked back to his office. It is one thing to overcome danger that arises accidentally, quite another to put oneself directly in such extreme situations.

And he did it for the public good.

…while saving the lives of aviators was important, Kilanowski says Stapp realized from the outset that there were other, perhaps even more important aspects to his research. His experiments proved that human beings, if properly restrained and protected, could survive an incredible impact.

Cars at the time were incredibly dangerous places to be. All the padding, crumple zones and other safety features that we now take for granted had yet to be introduced.

Improving automobile safety was something no one in the Air Force was interested in, but Stapp gradually made it his personal crusade. Each and every time he was interviewed about the Gee Whiz, Kilanowski notes, he made sure to steer the conversation towards the less glamorous subject of auto safety and the need for seatbelts. Gradually Stapp began to make a difference. He invited auto makers and university researchers to view his experiments, and started a pioneering series of conferences. He even managed to stage, at Air Force expense, the first ever series of auto crash tests using dummies. When the Pentagon protested, Stapp sent them some statistics he’d managed to dig up. They showed that more Air Force pilots died each year in car wrecks than in plane crashes.

While Stapp didn’t invent the three point auto seatbelt, he helped test and perfect it. Along with a host of other auto safety appliances. And while Ralph Nader took the spotlight when Lyndon Johnson signed the 1966 law that made seatbelts mandatory, Stapp was in the room. It was one of his real moments of glory.

Ultimately, John Stapp is a hero to me because he was true to his convictions — he had a hypothesis and tested it on himself. In the modern business vernacular, he ate his own dogfood. Over and above that, he did it because he could see a real social benefit. His work, and (more importantly) the way he did it, has directly contributed to saving millions of lives over the last 60 years. Those of us who seek to change our environments, whether at work or home, or in wider society, should heed his example. If there are things that might make a difference, we shouldn’t advocate them for others (even dummies) without checking that they work for us.

Now, the other link. Greg Lambert at the 3 Geeks and a Law Blog has extended the critique of IT failing to spot and deal with the current financial crisis by suggesting that KM is equally to blame.

Knowledge Management was originally an idea that came forth in the library field as a way to catalog internal information in a similar way we were cataloging external information. However, because it would be nearly impossible for a librarian to catalog every piece of internal information, KM slowly moved over to the IT structure by attempting to make the creator of the information (that would be the attorney who wrote the document or made the contact) also be the “cataloger” of the information. Processes were created through the use of technology that were supposed to assist them in identifying the correct classification. In my opinion, this type of self-cataloging and attempt at creating an ultra-structured system creates a process that is:

  1. difficult to use;
  2. doesn’t fit the way that lawyers conduct their day-to-day work;
  3. gives a false sense of believing that the knowledge has been captured and can be easily recovered;
  4. leads to user frustration and “work around” methods; and
  5. results in expensive, underutilized software resources.

In a comment on that post, Doug Cornelius says:

I look at KM 1.0 as being centralized and KM 2.0 as being personalized. The mistake with first generation KM and why it failed was that people don’t want to contribute to a centralized system.

We have to be careful, as Bill Ives points out, not to throw out the baby in our enthusiasm to replace the 1.0 bathwater with nice fresh 2.0 bubbles. However, Greg and Doug do have a point. We made a mistake in trying to replicate the hundreds or thousands of databases walking round our organisations with single inanimate repositories.

The human being is an incredible thing. It comes with a motive system and an incredibly powerful (but probably unstructured) data storage, computation and retrieval apparatus. Most (probably all) examples of homo sapiens could not reproduce the contents of this apparatus, but they can produce answers to all sorts of questions. The key to successful knowledge activities in an organisation, surely, is to remember that each one of these components adds a bit of extra knowledge value to the whole.

Potentially, then, we are all knowledge heroes. When we experiment with knowledge, the more people who join in, the better the results. And the result here should be, as Greg points out, to “help us face future challenges.” We can only do that by taking advantage of the things that the people around us don’t realise that they know.

It’s mine and I will choose what to do with it

This isn’t a political blog, and it is a coincidence that I came across a couple of things that chime with each other on the same day that the UK government has started to back away from its enthusiastic promotion of ID cards for all.

The first juicy nugget came from Anne Marie McEwan. In writing about social networking tools and KM, she linked some of the requirements for successful social software adoption (especially the need for open trusting cultures) to the use of technology for monitoring.

And therein lies a huge problem, in my strong view. Open, trusting, transparent cultures? How many of them have you experienced? That level of monitoring could be seen as a version of Bentham’s Panopticon. Although the research is now quite old, there was a little publicised (in my view) ESRC-funded research project in the UK, The Future of Work, involving 22 universities and carried out over six years. One of the publications from that research was a book, Managing to Change?. The authors note that:

“One area where ICT is rapidly expanding management choices is in monitoring and control systems … monitoring information could connect with other parts of the HRM agenda, if it is made accessible and entrusted to employees for personal feedback and learning. This has certainly not happened yet and the trend towards control without participation is deeply disquieting.

If ICT-based control continues to be seen as a management prerogative, and the monitoring information is not shared with employees, then this is likely to become a divisive and damaging issue.”

On the other hand, the technology in the right hands and cultures creates amazing potential for nurturing knowledge and innovation.

What struck me about this was that (pace Mary Abraham’s concerns about information disclosure) people quite freely disclose all sorts of information about themselves on public social networking sites, such as Facebook, LinkedIn, Twitter, and so on. Some of this sharing is excessive and ill-advised, but even people who have serious reservations about corporate or governmental use of personal information lose some of their inhibitions.

Why do they do this? In part it may be naïveté, but I think sometimes this sharing is much more knowing than that. What do they know, then? The difference between this voluntary sharing and forced disclosure is the identification of the recipients and (as Anne Marie recognises) trust. Basically, we share with people, not with organisations.

The second thing I found today was much more worrying. The UK Government is developing a new strategy for sharing people’s personal information between different government departments. It starts from a reasonable position:

We have a simple aim. We want everyone who interacts with Government to be able to establish and use their identity in ways which protect them and make their lives easier. Our strategy seeks to deliver five fundamental benefits. In future, everyone should expect to be able to:

  • register their identity once and use it many times to make access to public services safe, easy and convenient;
  • know that public services will only ask them for the minimum necessary information and will do whatever is necessary to keep their identity information safe;
  • see the personal identity information held about them – and correct it if it is wrong;
  • give informed consent to public services using their personal identity information to provide services tailored to their needs; and
  • know that there is effective oversight of how their personal identity information is used.

All well and good so far, but then buried in the strategy document is this statement (on p.11):

When accessing services, individuals should need to provide only a small amount of information to prove that they are who they say they are. In some situations, an individual may only need to use their fingerprint (avoiding the need to provide information such as their address).

But I can change my address (albeit with difficulty). I can never change my fingerprints. And fingerprints are trivially easy to forge. Today alone, I must have left prints on thousands of surfaces. All it takes is for someone to lift one of those, and they would have immediate access to all sorts of services in my name. (An early scene in this video shows it being done.)

What I really want to be able to do is something like creating single-use public keys where the private key is in my control. And I want to be able to know and control where my information is being used and shared.
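To make that a little more concrete, here is a rough sketch of the sort of thing I mean, using Python’s third-party cryptography package. This is my own illustration of the idea, not anything proposed in the strategy document: a fresh key pair is generated for a single interaction, only the public key is handed over, and identity is proved by signing a one-off challenge, after which the pair can be thrown away.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Citizen's side: generate a fresh key pair for this one interaction; keep the private half.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # only this is handed to the service

# Service's side: issue a random one-off challenge for this transaction.
challenge = os.urandom(32)

# Citizen's side: prove control of the key by signing the challenge.
signature = private_key.sign(challenge)

# Service's side: verify the signature against the registered public key.
try:
    public_key.verify(signature, challenge)
    print("Identity confirmed for this transaction only")
except InvalidSignature:
    print("Verification failed")

# The key pair can now be discarded; unlike a fingerprint, it never has to be reused.
```

Unlike a fingerprint, a compromised key of this sort costs me nothing to revoke and replace, and I decide who holds it and for how long.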

Going back to KM, this identity crisis is what often concerns people about organisationally forced (or incentivised) knowledge sharing. Once they share, they lose control of the information they provided. They also run the risk that the information will be misused without reference back to them. It isn’t surprising that people react to this kind of KM in the same way that concerned citizens have reacted to identity cards in the UK: rather than No2ID, we have No2KM (stop the database organisation).

From bureaucracy to agility

Last year, I referred to a post by Olivier Amprimo, who was then at Headshift. He is now working at the National Library Board in Singapore, and is still sharing really interesting thoughts. The latest is a presentation he gave to the Information and Knowledge Management Society in Singapore on “The Adaptation of Organisations to a Knowledge Economy and the Contribution of Social Computing“. I have embedded it below.

For me, the interesting facet of what Olivier describes is the transition from bureaucratic organisations to agile ones, and what that means for KM. Traditional KM reflects what Olivier isolates in the bureaucratic organisation, especially the problem he describes as the confusion between administrative work and intellectual work. In doing traditional KM (repositories of knowledge, backed up with metrics based on volume) we run the risk that administrative work is enshrined as the only work of value. However, it is the intellectual work where agility can be generated, and where real value resides.

Olivier describes the agile organisation as one where the focus is on rationalisation of design.

What is important is how the individual forms and is conditioned by work. The work is the facilitator. This is the first time that the individual has been in this position. This is where the knowledge economy really starts.

I found an example of the kind of agility that Olivier refers to in an unexpected place: a short account of the work of Jeff Jonas, who is the chief scientist of IBM’s Entity Analytics group. His work with data means that he is an expert in manipulating it and getting answers to security-related questions for governmental agencies and Las Vegas casinos. For example, he describes how he discussed data needs with a US intelligence analyst:

“What do you wish you could have if you could have anything?” Jonas asked her. Answers to my questions faster, she said. “It sounds reasonable,” Jonas told the audience, “but then I realized it was insane.” Insane, because “What if the question was not a smart question today, but it’s a smart question on Thursday?” Jonas says.

The point is, we cannot assume that the data needed to answer the query existed and had been recorded before the query was asked. In other words, it’s a timing problem.

Jonas works with data and technology, but what he says resonates for people too. When we store documents and information in big repositories and point search engines at them, we don’t create the possibility of intelligent knowledge use. The only thing we get is faster access to old (and possibly dead) information.

According to Jonas, organizations need to be asking questions constantly if they want to get smarter. If you don’t query your data and test your previous assumptions with each new piece of data that you get, then you’re not getting smarter.

Jonas related an example of a financial scam at a bank. An outside perpetrator is arrested, but investigators suspect he may have been working with somebody inside the bank. Six months later, one of the employees changes their home address in the payroll system to the same address as in the case. How would they know that occurred, Jonas asked. “They wouldn’t know. There’s not a company out there that would have known, unless they’re playing the game of data finds data and the relevance finds the user.”

This led Jonas to expound his first principle. “If you do not treat new data in your enterprise as part of a question, you will never know the patterns, unless someone asks.”

Constantly asking questions and evaluating new pieces of data can help an organization overcome what Jonas calls enterprise amnesia. “The smartest your organization can be is the net sum of its perceptions,” Jonas told COMMON attendees.

And:

Getting smarter by asking questions with every new piece of data is the same as putting a picture puzzle together, Jonas said. This is something that Jonas calls persistent context. “You find one piece that’s simply blades of grass, but this is the piece that connects the windmill scene to the alligator scene,” he says. “Without this one piece that you asked about, you’d have no way of knowing these two scenes are connected.”

Sometimes, new pieces reverse earlier assertions. “The moment you process a new transaction (a new puzzle piece) it has the chance of changing the shape of the puzzle, and right before you go to the next piece, you ask yourself, ‘Did I learn something that matters?’” he asks. “The smartest your organization is going to be is considering the importance right when the data is being stitched together.”

Very like humans, then? A characteristic of what we do in making sense of the world around us is drawing analogies between events and situations: finding matching patterns. This can only be done if we have a constant awareness of what we already know coupled with a desire to use new information to create a new perspective on that. That sounds like an intellectual exercise to me.
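Translated into a toy example (my own sketch, not Jonas’s software), the bank scenario works something like this: every new record is also treated as a query against what is already known, so the payroll change surfaces the connection the moment it arrives rather than waiting for someone to think of the question.

```python
from collections import defaultdict

class ContextStore:
    """Toy 'data finds data' store: each new record is also a query against existing records."""

    def __init__(self):
        self.by_address = defaultdict(list)   # address -> records already seen at that address

    def ingest(self, record):
        address = record["address"].lower()
        matches = list(self.by_address[address])   # does this new piece connect to anything?
        self.by_address[address].append(record)
        return matches

store = ContextStore()
store.ingest({"source": "fraud case", "name": "outside perpetrator", "address": "12 Harbour Lane"})

# Six months later, an employee updates their payroll record...
alerts = store.ingest({"source": "payroll", "name": "employee 4711", "address": "12 Harbour Lane"})
if alerts:
    print("New record matches earlier data:", alerts)
```

The names and addresses here are invented, of course; the point is only that the match is made at the moment the new piece of the puzzle arrives.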

It’s just this thing, you know?

We are on our way towards a place where some of the technologies that currently astound us will be so commonplace as to be boring. This is a truism. It was true of the spinning mule in the 1780s, and it is true of Web 2.0 software today. The longer we are astounded, the less likely we are to prepare for this inevitability, and therefore the worse prepared we will be.

James Dellow makes this point in his blog post, “Time for an upgrade? Wiki 2.0” and Luis Suarez drives it home with a pointer to a really engaging video on the impact of these technologies on learning (and therefore on business).

One of the interesting people speaking in the video is Stephen Heppell, who has been an educational innovator in the UK for what seems like decades (I certainly first encountered him in the early 1990s).

Children are living now in a different space. They are living in what I call a “nearly now”. Nearly now is that space that they text in, the space that they update their Facebook entries in, the space that they twitter in, you know, the space that is not quite synchronous. It’s a really interesting space because it’s not adversarial, it’s not pressured. It’s a space where people can — it’s all the R-words — they can reflect, and retract, and research, and repeat. It’s a very gentle world. I tell you what: it’s a great world for learning. (1’14″-1’45″)

Now we’re looking at a whole different range of schools. We are looking at schools that produce ingenious, collaborative, gregarious and brave children who care about stuff — like their culture. To build schools that do that is a whole other challenge. And around the world, you know, people are testing out the ingredients of what makes that work. Those ingredients are being assembled into some just stunning recipes in different places. It’s a very exciting time for learning. It’s the death of education, but it’s the dawn of learning. That makes me very happy! (4’31″-5’00″)

This idea of the pervasive “nearly now” is implicit in James Dellow’s post, and some of the things he links to. One of those things is an article by Matthew C. Clarke, “Control and Community: A Case Study of Enterprise Wiki Usage”. He concludes as follows:

I predict that Wikis will disappear over the next 5 to 10 years. This is not because they will fail but precisely because they will succeed. The best technologies disappear from view because they become so common-place that nobody notices them. Wiki-style functionality will become embedded within other software – within portals, web design tools, word processors, and content management systems. Our children may not learn the word “Wiki,” but they will be surprised when we tell them that there was a time when you couldn’t just edit a web page to build the content collaboratively.

As James Dellow puts it: the wiki will become more of a verb than a noun. This is the future that Stephen Heppell sees, and it will arrive more quickly than the mechanisation of the textile industry did. We need to be prepared for it, not by resisting it like the destroyers of the spinning mule, but by being open to the opportunities it offers. As Clarke puts it in his penultimate paragraph:

By putting minimal central control in place an enterprise can gain significant benefit from this simple technology, including improved knowledge capture, reduced time to build complex knowledge-based web sites, and increased collaboration. Although enterprise Wiki use requires a greater degree of centralized control than public Wikis, this need not impinge on the freedom to contribute that is the hallmark of a Wiki approach. The balance of power is different in an enterprise context, but fear of anarchy should not prohibit Wiki adoption.

James Dellow is not quite so starry-eyed, but his note of caution is not a Luddite one.

I’m not sure its good enough to add wiki-like page editing functionality to an information tool and expect it to behave like a social computing tool suddenly (if that’s your intent). I think what’s more interesting is the evolution of enterprise wikis, as they add other types of social computing features. Other social computing platforms may also threaten these wiki-based solutions by adding the capability to manage pages and documents.

The key thing here is that we need to blend our corporate demands with the opportunities that working and collaborating in the “nearly now” will bring. That blend will inevitably mean that the technologies develop in slightly different ways. Modern textile machinery is very different from Crompton’s mule, if only because a modern health and safety regime requires it. Similarly, the openness of some of our current social networking and collaboration tools will need to be toned down in a corporate environment, to allow for a level of knowledge and information sharing that is consistent with regulatory and ethical compliance.

As we tread the path that will lead us towards that future, I agree with David Gurteen that it is our responsibility to engage with the new technologies to help work out what the future will look like. As David puts it in the introduction to his latest Knowledge Letter, “I am surprised at just how many people especially knowledge managers are not using social tools (not necessarily internally but on the web for personal use) and consequently do not really understand their power as knowledge sharing and informal learning tools.” It surprises me too. David drives home the link with learning.

…when I ask people why they do not do the same the answer is always “Oh I’d love to but I am too busy. I just do not have the time.” But I think in reality the truth is that in our busy lives we never have enough time to do all the things we would like to do. So we prioritise things and taking the time to learn tends to fall off the bottom of the list.

I think that many people are so busy they have got out of the habit of informal learning – maybe they never got into it. Its not seen as a priority. So can I make a suggestion – if you are one of those people who are not keeping up with new developments and thinking in your field of endeavour then take a few minutes to think about how important is it to you compared with everything else that you do. And if you decide it is important then commit to doing it.

As the video above makes clear, the world of learning is changing fast. Our world of work will change to follow it. We owe it to ourselves, our colleagues and our organisations not to sit back and wait for the changes to overwhelm us. The tide is coming in — swim out to meet it.

Don’t overdo it

When we think about and plan our KM activities, it can be tempting to imagine a marvellous future wherein all our firm’s know-how is carefully nurtured, categorised, exposed for all to see, tagged, analysed, or whatever it is we think would be the best outcome. However, as the Bard of Ayrshire put it: “The best laid schemes o’ mice an’ men/ Gang aft agley.” Why is this?


One good reason is pointed out in Gall’s Law:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

I am indebted to John Gruber for the pointer to this formulation. He uses it to explain how to understand Apple’s strategy with regard to the iPhone.

If there’s a formula to Apple’s success over the past 10 years, that’s it. Start with something simple and build it, grow it, improve it, steadily over time. Evolve it.

The iPhone exemplifies this strategy. There’s a long list of features many experts and pundits claimed the original 1.0 iPhone needed but lacked. Ends up it didn’t need any of them. Nice to have is not the same thing as necessary. But things the iPhone did have, which other phones lacked, truly were necessary in terms of providing the sort of great leap forward in the overall experience that Apple was shooting for.

At this point, it is worth noting an essential qualifier to Gall’s Law: “A simple system may or may not work.” In the case of the iPhone it clearly did work. In other cases, Apple decided that it did not work.

What Gruber brings over and above a simple assertion of Gall’s Law is an insight about how to choose the original simple system: “It’s not enough just to start simple, you have to start simple with a framework designed for future evolution and growth.” When the iPhone was first launched, it was not particularly full-featured as a phone: it did not support 3G; it did not support MMS. It even fell short on the music front, as it cost significantly more per gigabyte than any of the iPod range. However, as Gruber points out:

Apple started instead with the idea of a general-purpose pocket-sized networked computer. It no more has a single main purpose than a desktop PC has a single main purpose. Telephony is simply one feature among many, whereas on most other phones, the features are attached to the side of the telephone. They sold 30 million iPhone OS devices in the first 18 months after 29 June 2007, but 13 million of those were non-phone iPod Touches — proving that the platform is clearly appealing even when the “phone” is entirely removed. (Consider too that the iPhone’s two strongest competitors are BlackBerry and Android, neither of which started as phones.)

The iPhone was not conceived merely as a single device or a one-time creation. It’s a platform. A framework engineered for the long-run. The iPhone didn’t and doesn’t need MMS or a better camera or a video camera or more storage or cut/copy/paste or GPS mapping or note syncing, because the framework was in place so that Apple could add these things, and much more, later — either through software updates or through new hardware designs. The way to build a complex device with all the features you want is not to start by trying to build a device with all those features, but rather to start with the fundamentals, and then iterate and evolve.

We should learn the same lesson with our knowledge systems: not to try to predict all the features that might be useful in the future — that way lies excessive complexity coupled with early obsolescence and failure. Instead, we should imagine and create the best platform for future possibilities — as simple as possible, but as open to development as necessary.
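By way of illustration only (a minimal sketch of my own, not anything drawn from Gruber or from a real knowledge system), the same principle can be expressed in code: keep the core of a knowledge store as small as possible, but design in an explicit extension point so that tagging, search, analytics or whatever comes next can be added later without rebuilding the core.

```python
# Hypothetical sketch of "as simple as possible, but open to development":
# a tiny knowledge store whose only extension mechanism is a listener hook.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Note:
    """The simplest useful unit of know-how: an identifier and some text."""
    note_id: str
    text: str
    metadata: Dict[str, str] = field(default_factory=dict)


class KnowledgeStore:
    """A deliberately small core: add, get, and a hook for future features."""

    def __init__(self) -> None:
        self._notes: Dict[str, Note] = {}
        self._on_add: List[Callable[[Note], None]] = []

    def register_listener(self, listener: Callable[[Note], None]) -> None:
        # Extension point: taggers, indexers or analytics can plug in later
        # without the core needing to anticipate them.
        self._on_add.append(listener)

    def add(self, note: Note) -> None:
        self._notes[note.note_id] = note
        for listener in self._on_add:
            listener(note)

    def get(self, note_id: str) -> Note:
        return self._notes[note_id]


store = KnowledgeStore()
# A later "feature" arrives as a listener rather than as a change to the core.
store.register_listener(lambda n: n.metadata.setdefault("words", str(len(n.text.split()))))
store.add(Note("gall", "Start with a simple system that works."))
print(store.get("gall").metadata)  # {'words': '7'}
```

The particular mechanism matters less than the fact that the capacity for growth is there from the first version.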

Book Review: Generation Blend

I have already voiced my scepticism about Generation Y, so it may seem odd that I chose to buy Rob Salkowitz’s book Generation Blend: Managing Across the Technology Age Gap. However, there is a lot in this book that does not depend on an uncritical acceptance of the “generations” thesis. It provides a sound practical basis for any business that wants to, in Salkowitz’s words, “develop practices and deploy technology to attract, motivate, and empower workers of all ages.”

As one might expect, underpinning Generation Blend is the thesis that there are clear generational (not age-related) differences that affect how people approach and use technology. In this, Salkowitz builds on Neil Howe and William Strauss’s book, Generations: The History of America’s Future, 1584 to 2069. However, generational differences are not the starting point for the book. Instead, Salkowitz begins by showing how technology itself has changed the working environment irrevocably. In doing so, he establishes the purpose of the book: to allow organisations to develop the most suitable strategy to help their people to cope with those changes (and the many more to come).

Organizations invest in succeeding waves of new technology — and thus subject their workers to waves of changes in their lives and workstyles — to increase their productivity and competitiveness. Historically, productivity has increased when new technology replaced labor-intensive processes, first with mechanical machinery, and now electronic information systems. (p. 24)

Dave Snowden has started an interesting analysis of these waves of change, and Andrew McAfee’s research shows that IT makes a difference for organisations. What Salkowitz does in Generation Blend is to provide real, practical insights into the way in which organisations can make the most of the abilities of all generations when faced with new technology. When he does discuss the generations, it is important to remember that his perspective is an entirely US-centric one. That said, the rest of the book is generally applicable. This is Salkowitz’s strength — he recognises that there are real exceptions to the broad brush of generational study, and his guidance focuses on clear issues with which it is difficult to disagree. As one of the section headings puts it, “software complexity restricts the talent pool,” so the target is to accommodate different generational approaches in order to loosen that restriction. Chapter 3 of the book closes with a set of tables outlining different generational attributes. I found these very useful in that they focused the mind on the behaviours and attitudes affecting people’s approach to technology, rather than serving as a hard-and-fast description of the different generations.

Salkowitz’s approach can be illuminated by comparing three passages on blogging.

The open, unsupervised quality of blogs can be deeply unsettling to people who have internalized the notion that good information comes only from trusted institutions, credentialed individuals, or valid ideological perspectives. (p. 82)

On the other hand:

Blogs and wikis create an environment where unofficial and uncredentialed contributors stand at eye level with traditionally authoritative sources of knowledge. This is perfectly natural to GenXers, who believe that performance and competence should be the sole criteria for authority. (p. 147)

And, quoting Dave Pollard with approval:

“I’d always expected that the younger and more tech-savvy people in any organization would be able to show (not tell) the older and more tech-wary people how to use new tools easily and effectively. But in thirty years in business, I’ve almost never seen this happen. Generation Millennium will use IM, blogs, and personal web pages (internal or on public sites like LinkedIn, MySpace and FaceBook) whether they’re officially sanctioned or not, but they won’t be evangelists for these tools.” (p. 216)

 There is here, I think, a sense of Salkowitz’s desire to engage older workers as well as his concern that unwarranted assumptions about younger people’s affinity with technology could lead businesses towards the wrong courses of action.

At the heart of Generation Blend is a critique of existing technology, in which Salkowitz points out that current business software has a number of common characteristics:

  • It tends to be complex and overladen with features
  • It focuses on efficiency
  • It is driven by the need to perform tasks
  • It supports a work/life balance that is “essentially a one-way flow of work into life” (p. 147)

These characteristics have come about, Salkowitz argues, because the technology has largely been produced by and for programmers whose values and culture:

…independence, obsession with efficiency as a way to save personal time and effort, low priority on interpersonal communication skills, focus on outcomes rather than process (such as meetings or showing up on a regular schedule), seeing risk in a positive light, desire to dominate through competence — sound like the thumbnail descriptions of Generation X tossed out by management analysts. (p. 149)

Since this group is clearly comfortable with technology, and is also increasingly moving into leadership and management roles, Salkowitz provides them with guidance on making technology accessible to older workers and on making the most of the skills and insights of younger workers. He does this in general terms throughout the book, but most convincingly in the final three chapters. Two of these use narrative to show how (a) the fear can be taken out of technology for older people and (b) the younger generation can be involved directly in defining organisational strategy.

In the first of these chapters, Salkowitz describes a non-profit New York initiative, OATS (Older Adults Technology Services), which trains older people in newer technologies, so that they can comfortably move into roles where those skills are needed. OATS has found that understanding the learning style of these people allows them to pick up software skills much more quickly than is commonly assumed.

While younger people learn technology by hands-on experimentation and trial and error, [Thomas] Kamber [OATS founder] and his team find that older learners prefer information in step-by-step instructions and value written documentation. (p. 167)

At the other end of the generational scale, Salkowitz starts with a statement that almost reads like a manifesto:

Millennials may be objects of study, but they are also, increasingly, participants in the dialogue, and it is silly (and rude) for organizations to talk about them as if they are not already in the room. (p. 190)

He goes on to illustrate the point with an account of Microsoft’s Information Worker Board of the Future, a “structured weeklong exercise around the future of work” that the company used to help it understand how its strategy should develop. It was judged a success: it brought new perspectives to the company and showed Microsoft to be a thought leader in this area.

…the organizational commitment to engage with Millennials as partners in the formation of a strategic vision can be as valuable as the direct knowledge gained from the engagement. Strategic planning is a crucial discipline for organizations operating in an uncertain world. When it is a closed process, conducted by experts and senior people (who inevitably bring their generational biases with them), it runs a greater risk of missing emergent trends or misjudging the potential for discontinuities that could disrupt the entire global environment. Opening up the planning process to younger perspectives as a matter of course rather than novelty hedges against the risks of generational myopia and also sends a strong positive signal to members of the rising generation. (p. 209)

Generation Blend ends with a clear exposition of the key issues that organisations need to address in order to make the most of their workers of all ages and the technology they use.

Organizations looking to effectively manage across the age gap in an increasingly sophisticated connected information workplace should ask themselves five questions:

  1. Are you clearly explaining the benefits of technology?
  2. Are you providing a business context for your technology policies?
  3. Are you making technology accessible to different workstyles?
  4. Does your organizational culture support your technology strategy?
  5. Are you building bridges instead of walls? (p. 212)

The last two of these are particularly interesting. In discussing organisational culture, Salkowitz includes careful consideration of knowledge management activities, especially using Web 2.0 tools. He is confident that workers of all generations will adapt to this approach to KM at a personal level, but points to real challenges: “[t]he real difficulties… are rooted in the business model and in the way that individual people see their jobs.” (p. 229) For Salkowitz, the solution is for the organisation to make a real and visible investment in knowledge activities — he points to the use of PSLs in UK law firms as one example of this approach. Given the tension between social and market norms that I commented on yesterday, I wonder how far this approach can be pushed successfully.

Running through Generation Blend is a thread of involvement and engagement. Salkowitz consistently advocates management approaches that accommodate different ways of extracting value from technology at work. This thread emerges in the final section of the book as an exhortation to use the best of all generations to work together for the organisation — building bridges rather than walls.

Left to themselves, workers of different ages will apply their own preconceptions and experiences of technology at work, sometimes leading to conflict and misunderstanding when generational priorities diverge. But when management demonstrates a commitment to respecting both the expectations of younger workers and the concerns of more experienced workers around technology, organizations can effectively combine the tech-savvy of the young with the knowledge and wisdom of the old in ways that make the organization more competitive, more resilient to external change, more efficient, and more open. (p. 231)

I think he is right in this, but it will be a challenge for many organisations to do well, especially when they are distracted by seismic changes outside. My gut feeling is that those businesses that work hard at the internal stuff will find that their workforce is better able to deal with those external forces.

