Improving work with better relationships

Roosevelt and Churchill at Chatsworth House

There is a growing body of people bringing new perspectives on the way organisations are structured and how work gets done. Amongst these, I have recently found a podcast, Reimagining Work, presented by John Wenger and Rogier Noort. It has now been running for 14 episodes, and I listened to a few of these on my walk this morning.

The presenters have very different backgrounds, which makes their conversations more interesting than if they came from the same direction. Rogier’s experiences are more rooted in technology, whilst John’s come from helping businesses with people issues. In the first episode they talked a little about their previous work, and I was struck by John’s description of something he offers as part of his consulting work — sociometry. John described this in the podcast as follows:

Sociometry is a word that literally means ‘measure of social relationships’ — connections between people. One of the themes of sociometry — teachings of sociometry — is that the quality of an outcome is directly related to the quality of relationships between the people who are trying to generate that outcome. Therefore, if you have better relationships, what you try and do together will be more productive, more satisfying, more life-giving.

There is also a later podcast dedicated to the topic (which I have yet to listen to), and John has a blog post on the topic. But John’s description made me think about teams and crews.

I have mentioned crews before. For now, the important point is that unlike a team, which has an existence of its own, a crew is a temporary group of people brought together for a particular job or task and then disbanded. In sociometric terms, the members of a crew may not have a relationship of any sort prior to coming together. There is value in both approaches to work, but does the lack of a pre-existing relationship mean that a crew-based approach is at a disadvantage against a team focus?

I am not sure that it does, as long as the organisation is not bound to traditional models of work and management. The classic crew might be a group of fire-fighters, police officers, paramedics and road managers brought together to deal with a serious motor collision. Each member of the crew brings their own professional expertise, which is respected by the others and which the others have no interest in challenging. As a result, they all do their work as a group and achieve most effectively what needs to be done — often without a lot of command and instruction. Discipline and practice take the place of strong relationships.

By contrast, an organisation that depends heavily on hierarchy and command-and-control management probably could not use crews to get work done. Instead, teams arise and are managed more or less well (depending largely on the quality of the relationships within and between them). Such organisations therefore miss out on the possibility that a crew might bring a better outcome by introducing new expertise and experience. On the other hand, I think that an organisation which uses social technology to build relationships where there isn’t necessarily an existing working connection (along the lines of Mark S. Granovetter’s “The Strength of Weak Ties”) can use those relationships as the basis for task- or activity-based crews. The outcome would be a much lower dependence on managed structures, more autonomy for people working in groups, and improved value for the business.
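Granovetter’s insight is that weak ties often act as bridges between otherwise closed clusters. As a purely illustrative sketch (the people and the network are invented), a toy sociometric graph makes it easy to see which connections play that bridging role:

```python
# Toy "sociometric" network: two tight clusters joined by a single weak tie.
edges = {
    ("ann", "bob"), ("bob", "cat"), ("ann", "cat"),   # cluster 1
    ("dan", "eve"), ("eve", "fay"), ("dan", "fay"),   # cluster 2
    ("cat", "dan"),                                   # the weak tie
}

def reachable(start, edges):
    """Return the set of nodes reachable from `start` over `edges`."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for a, b in edges:
            for nxt in (b,) if a == node else (a,) if b == node else ():
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

# A "bridge" is an edge whose removal disconnects its endpoints --
# in Granovetter's terms, weak ties often play exactly this role.
bridges = [e for e in edges if e[1] not in reachable(e[0], edges - {e})]
print(bridges)  # [('cat', 'dan')]
```

In this invented network the single tie between the two clusters is the only bridge: remove it and the clusters fall apart. That is exactly the kind of connection that social technology can help an organisation build before a crew is needed.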

Building on the relationships theme, John and Rogier spend some time in a later podcast discussing empathy and its power to improve the way people work together. This leads to a conversation about the way organisations tend to dehumanise people when they think of them as assets or resources. Coincidentally, I have been reading a paper summarising critiques of the resource-based view of the firm. I was led to that research by my concern that organisations were treating knowledge as an asset and that this didn’t reflect the reality of how knowledge flows and how it is valued.

In the paper, the resource-based view of the firm is described thus (I have removed the references for ease of reading):

The resource-based view (RBV) has become one of the most influential and cited theories in the history of management theorizing. It aspires to explain the internal sources of a firm’s sustained competitive advantage (SCA). Its central proposition is that if a firm is to achieve a state of SCA, it must acquire and control valuable, rare, inimitable, and nonsubstitutable resources and capabilities, plus have the organization in place that can absorb and apply them. This proposition is shared by several related analyses: core competences, dynamic capabilities, and the knowledge-based view.

The paper summarises eight ways in which this view has been challenged, but doesn’t offer an alternative approach (reasonably, as it is a literature review). In listening to John and Rogier, I wondered whether a better way of understanding the strength of a business might be an evaluation of the strength of its relationships (internally — within teams and more generally across the organisation — and externally — with clients/customers, suppliers, competitors, and the wider market) and the merit of the work it does (in terms of social value outside the organisation and personal value within it). A business that was well-connected and whose people enjoyed their work and knew that it added value to the world would be in a stronger position than one which demotivated its employees by keeping them in unjustifiable silos producing things of no particular worth.

This combined relationship/activity-based value probably wouldn’t appeal to traditional economists, because it has the benefit of being additive — a business that is successful by this measure does not succeed at the expense of another. Good relationships and social value are like knowledge in this respect — if they are shared they grow rather than being diminished. As the authors of the RBV review put it:

Another characteristic of knowledge, hardly taken into account in the RBV, is its nonrivalrousness—meaning that its deployment by one firm, or for one purpose, does not prevent its redeployment by the same or another firm, or for another purpose. On the contrary, deploying knowledge may increase it.

That sounds like a powerful reason for organisations and individuals using knowledge well, in addition to building relationships and generating real value for society. In short — doing great work.

Legal technology, practice, theory and justice

Like all other areas of life and work, the law has been changed immeasurably by technology. This will doubtless continue, but I am unconvinced by the most excited advocates of legal technology.

The impact of technology has been felt at a variety of levels. Over the last 35-40 years it has changed the way practitioners approach all aspects of their work. Likewise, the changes wrought by technology on personal, social and commercial behaviour and activities have driven changes in the law itself.

Clouds round the tower of law

These trends will doubtless continue, but predicting the actual changes that they will bring is a fool’s errand.

I recently wrote an article in Legal IT Today, arguing that the most extreme predictions of the capability of legal artificial intelligence would struggle to match the abductive reasoning inherent in creative legal work. In addition to that argument, I am less confident than some about the limits of technological development, I suspect that the economics of legal IT are not straightforward, and I have a deeper concern that there is little engagement between the legal IT community and generations of legal philosophy.

Limits of technology

One of the touchstones of any technology future-gazing (in any field, not just the law) is a reference to Moore’s Law. I am less certain than the futurologists that we should expect capacity to go on doubling indefinitely. If nothing else, exponential growth cannot continue for ever.

…in the real world, any simple model that shows a continuing increase will run into a real physical limit. And if it is an exponentially increasing curve that we are forecasting, that limit is going to come sooner rather than later.

What could stop computing power from increasing exponentially? A range of things — the size of the components on a chip may have a natural limit, or the materials that are used could start to become scarce.
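The gap between naive extrapolation and growth against a physical limit is easy to see with a little arithmetic. The sketch below is purely illustrative (the ceiling and the units are arbitrary assumptions): an exponential curve and a logistic curve that start out doubling identically diverge sharply once the limit comes into view:

```python
# Naive Moore's-Law extrapolation vs. growth with a hard physical ceiling.
# The ceiling is an arbitrary, hypothetical limit chosen for illustration.

CEILING = 1_000_000  # assumed physical limit (arbitrary units)

def exponential(t, start=1.0):
    # Doubles every period, for ever -- the futurologist's model.
    return start * 2 ** t

def logistic(t, start=1.0):
    # Same early doubling, but saturating as it approaches CEILING.
    return CEILING / (1 + (CEILING / start - 1) * 2 ** (-t))

for t in (0, 10, 20, 30):
    print(t, exponential(t), round(logistic(t)))
```

For the first ten or so doublings the two curves are near-indistinguishable; by period 20 the naive curve has already overshot the ceiling, and by period 30 it exceeds it a thousandfold while the capped curve has flattened just below the limit. The forecast fails "sooner rather than later", exactly as the quotation above suggests.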

More interestingly, from the perspective of legal business, the undoubted growth of technology over recent years has not necessarily produced efficiencies in the law, if we use lawyer busyness as a proxy for efficiency. There are far more people employed in the law now than 40 years ago, and they appear to work longer hours. Improved computing capability has produced all sorts of new problems that demand novel business practices to resolve them (knowledge management being one of these).

Nonetheless, it is still possible that future developments will actually be capable of taking on significant aspects of work that is currently done by people. The past is not necessarily a good predictor of the future.

The business challenge

There is currently a lot of interest in the possibility that IBM’s Watson will introduce a new era of legal expert systems. Earlier this month Paul Lippe and Daniel Martin Katz provided “10 predictions about how IBM’s Watson will impact the legal profession” in the ABA Journal. Bruce MacEwen has also asked “Watson, I Presume?” However, one thing that marks out any reference to Watson in the law is a complete absence of hard data.

The Watson team have helpfully provided a press release summarising the systems currently available or under development. Looking at these, a couple of things strike me. The most obvious is that there are none in the law. There are medical and veterinary applications, and some in retail and travel planning. There are applications that enhance existing IT capability (typically in the area of search and retrieval). But there are none in the law. The generic applications could certainly be used to enhance legal IT, but there is no indication of how effective they might be compared to existing tools. And, most crucially, it is unclear how costly Watson solutions might be. That is where legal IT often struggles.

The business economics of legal technology can be difficult. Medical and veterinary systems have a huge scale advantage — human or animal physiology changes little across the globe, and pharmaceutical effectiveness does not depend significantly on where drugs are administered. By contrast, legal and political systems differ hugely, so that ready-made legal technology often needs to be tailored to fit different jurisdictions. Law firms tend to be small compared to some other areas of professional services and the demands of ethical and professional rules often restrict sharing of information. Those constraints can mean that it is hard for all but the largest firms with considerable volumes of appropriate types of work to justify investment in the most highly-developed forms of technology. As a consequence, I suspect few legal IT providers will be tempted to pursue Watson or similar developments until they can be convinced that a market exists for them.

Technology, justice and legal theory

My Legal IT piece was a response to an article by David Halliwell. His piece started with a reference to an aspect of Ronald Dworkin’s legal philosophy. Mine was similarly rooted in theory. This marks them out from most of the articles I have read on the future of legal IT. Given the long history of association between legal theory and academic study of IT in the law (exemplified by Richard Susskind’s early work on the use of expert systems in the law), it is disappointing to see so little critical thought about the impact of technology in the law.

As I read them, most disquisitions on legal IT are based on simple legal positivism — the law is presented as a set of rules that can be manipulated in an almost mechanical way to produce a result. By contrast, there is a deeper critique of concepts like big data in wider social discourse. A good example is provided in an essay by Moritz Hardt, “How big data is unfair”:

I’d like to refute the claim that “machine learning is fair by default”. I don’t mean to suggest that machine learning is inevitably unfair, but rather that there are powerful forces that can render decision making that depends on learning algorithms unfair. Any claim of fair decision making that does not address the technical issues that I’m about to discuss should strike you as dubious.

Hardt focuses on machine learning, but his point is true of any algorithm and probably more generally of any technology tending towards artificial intelligence. Any data set, any process defined to be applied to that data, any apparently neutral ‘thinking’ system will have inherent prejudices. Those prejudices may be innocuous or trivial, but they may not be. Ignoring the possibility that they exist runs a risk of unfairness, as Hardt puts it. In the law, unfairness manifests itself as injustice.
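Hardt’s point can be made concrete with a deliberately artificial example. Every number below is fabricated for illustration: a single “neutral” threshold, tuned on a pool dominated by one group, produces very different error rates for a minority group whose scores run lower on the chosen proxy measure:

```python
# Toy illustration of Hardt's point: one apparently neutral decision rule,
# fitted to pooled data, yields unequal error rates per group.
# All data here is invented for illustration.

# (group, score, qualified?) -- the minority group's scores run 0.15 lower
# even when the person is equally qualified, e.g. because the score proxy
# was built around majority-group behaviour.
applicants = (
    [("majority", s / 100, s >= 50) for s in range(0, 100, 2)] +
    [("minority", s / 100 - 0.15, s >= 50) for s in range(0, 100, 10)]
)

THRESHOLD = 0.5   # one global cut-off, tuned on the majority-heavy pool

def false_negative_rate(group):
    """Share of qualified people in `group` wrongly rejected."""
    qualified = [sc for g, sc, q in applicants if q and g == group]
    rejected = [sc for sc in qualified if sc < THRESHOLD]
    return len(rejected) / len(qualified)

print(false_negative_rate("majority"))  # 0.0
print(false_negative_rate("minority"))  # 0.4
```

The rule itself contains no reference to group membership, yet it rejects 40% of qualified minority applicants and none of the qualified majority. The prejudice lives in the data and the proxy, not in any explicit clause of the algorithm — which is precisely why, in the law, it would surface as injustice rather than as a visible defect.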

What concerns me is that there doesn’t appear to be a lively debate about the risk of injustice in the way legal IT might develop in the future (not to mention the use of technology with a legal impact in other areas of society). Do we have a modern equivalent of the debate between Lon Fuller and H.L.A. Hart? I am not as close to legal theory as I used to be, so it may already have taken place. If not, are we happy for the legal positivists to win this one by default? (I am not sure that I am.)