Legal technology, practice, theory and justice

Like all other areas of life and work, the law has been changed immeasurably by technology. This will doubtless continue, but I am unconvinced by the most excited advocates of legal technology.

The impact of technology has been felt at a variety of levels. Over the last 35-40 years it has changed the way practitioners approach all aspects of their work. Likewise, the changes wrought by technology on personal, social and commercial behaviour and activities have driven changes in the law itself.

These trends will doubtless continue, but predicting the actual changes that they will bring is a fool’s errand.

I recently wrote an article in Legal IT Today, arguing that the most extreme predictions of the capability of legal artificial intelligence would struggle to match the abductive reasoning inherent in creative legal work. In addition to that argument, I am less confident than some that technological development is boundless, I suspect that the economics of legal IT are not straightforward, and I have a deeper concern that there is little engagement between the legal IT community and generations of legal philosophy.

Limits of technology

One of the touchstones of any technology future-gazing (in any field, not just the law) is a reference to Moore’s Law. I am less certain than the futurologists that we should expect capacity to keep doubling for ever. If nothing else, exponential growth cannot continue indefinitely.

…in the real world, any simple model that shows a continuing increase will run into a real physical limit. And if it is an exponentially increasing curve that we are forecasting, that limit is going to come sooner rather than later.

What could stop computing power from increasing exponentially? A range of things — the size of the components on a chip may have a natural limit, or the materials that are used could start to become scarce.
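This argument can be made concrete with a toy model (the numbers are purely illustrative): unconstrained doubling compared with the same growth damped by a hard physical limit, a simple discrete logistic curve. The two track each other closely in the early periods and then diverge sharply once the limit bites.

```python
# A toy model of the argument above (illustrative numbers only): pure
# doubling against the same growth damped by a hard physical limit.

LIMIT = 1_000_000  # an arbitrary stand-in for any physical ceiling

def exponential(periods, start=1):
    """Capacity that doubles every period, without constraint."""
    x = start
    for _ in range(periods):
        x *= 2
    return x

def logistic(periods, start=1.0, limit=LIMIT):
    """Doubling damped by the remaining headroom below the limit."""
    x = start
    for _ in range(periods):
        x += x * (1 - x / limit)  # growth shrinks as x approaches the limit
    return x
```

After 30 periods the unconstrained curve has passed a billion while the constrained one is still pressed against the million ceiling, yet for the first handful of periods the two are almost indistinguishable. That is what makes exponential forecasts so seductive, and so fragile.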

More interestingly, from the perspective of legal business, the undoubted growth of technology over recent years has not necessarily produced efficiencies in the law, if we use lawyer busyness as a proxy for efficiency. There are far more people employed in the law now than 40 years ago, and they appear to work longer hours. Improved computing capability has produced all sorts of new problems that demand novel business practices to resolve them (knowledge management being one of these).

Nonetheless, it is still possible that future developments will actually be capable of taking on significant aspects of work that is currently done by people. The past is not necessarily a good predictor of the future.

The business challenge

There is currently a lot of interest in the possibility that IBM’s Watson will introduce a new era of legal expert systems. Earlier this month Paul Lippe and Daniel Martin Katz provided “10 predictions about how IBM’s Watson will impact the legal profession” in the ABA Journal. Bruce MacEwen has also asked “Watson, I Presume?” However, one thing that marks out any reference to Watson in the law is a complete absence of hard data.

The Watson team have helpfully provided a press release summarising the systems currently available or under development. Looking at these, a couple of things strike me. The most obvious is that there are none in the law. There are medical and veterinary applications, and some in retail and travel planning. There are applications that enhance existing IT capability (typically in the area of search and retrieval). But there are none in the law. The generic applications could certainly be used to enhance legal IT, but there is no indication of how effective they might be compared to existing tools. And, most crucially, it is unclear how costly Watson solutions might be. That is where legal IT often struggles.

The business economics of legal technology can be difficult. Medical and veterinary systems have a huge scale advantage — human or animal physiology changes little across the globe, and pharmaceutical effectiveness does not depend significantly on where drugs are administered. By contrast, legal and political systems differ hugely, so that ready-made legal technology often needs to be tailored to fit different jurisdictions. Law firms tend to be small compared to some other areas of professional services and the demands of ethical and professional rules often restrict sharing of information. Those constraints can mean that it is hard for all but the largest firms with considerable volumes of appropriate types of work to justify investment in the most highly-developed forms of technology. As a consequence, I suspect few legal IT providers will be tempted to pursue Watson or similar developments until they can be convinced that a market exists for them.

Technology, justice and legal theory

My Legal IT piece was a response to an article by David Halliwell. His piece started with a reference to an aspect of Ronald Dworkin’s legal philosophy. Mine was similarly rooted in theory. This marks them out from most of the articles I have read on the future of legal IT. Given the long history of association between legal theory and academic study of IT in the law (exemplified by Richard Susskind’s early work on the use of expert systems in the law), it is disappointing to see so little critical thought about the impact of technology in the law.

As I read them, most disquisitions on legal IT are based on simple legal positivism — the law is presented as a set of rules that can be manipulated in an almost mechanical way to produce a result. By contrast, there is a deeper critique of concepts like big data in wider social discourse. A good example is provided in an essay by Moritz Hardt, “How big data is unfair”:

I’d like to refute the claim that “machine learning is fair by default”. I don’t mean to suggest that machine learning is inevitably unfair, but rather that there are powerful forces that can render decision making that depends on learning algorithms unfair. Any claim of fair decision making that does not address the technical issues that I’m about to discuss should strike you as dubious.

Hardt focuses on machine learning, but his point is true of any algorithm and probably more generally of any technology tending towards artificial intelligence. Any data set, any process defined to be applied to that data, any apparently neutral ‘thinking’ system will have inherent prejudices. Those prejudices may be innocuous or trivial, but they may not be. Ignoring the possibility that they exist runs a risk of unfairness, as Hardt puts it. In the law, unfairness manifests itself as injustice.
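Hardt’s technical point can be seen in a toy example. The sketch below uses entirely synthetic data and a deliberately simple “classifier” (a single score threshold) of my own devising; nothing here comes from a real system. Because one group dominates the data, the rule that minimises overall error fits that group perfectly while systematically misclassifying part of the minority.

```python
# A toy version of Hardt's point: a rule that minimises overall error
# can still treat a minority group unfairly. All data is synthetic and
# the "classifier" is a single threshold on a score.

def error_rate(data, threshold):
    """Fraction of examples the rule 'True if score >= threshold' gets wrong."""
    wrong = sum(1 for score, label, _ in data if (score >= threshold) != label)
    return wrong / len(data)

def best_threshold(data):
    """The threshold with the lowest overall error rate."""
    return min(sorted({score for score, _, _ in data}),
               key=lambda t: error_rate(data, t))

# (score, true_label, group): the majority group separates cleanly at a
# score of 5, the minority at 8, and the majority supplies 90% of the data.
majority = [(s, s >= 5, "maj") for s in range(10)] * 9
minority = [(s, s >= 8, "min") for s in range(10)]
data = majority + minority

t = best_threshold(data)
maj_err = error_rate([d for d in data if d[2] == "maj"], t)
min_err = error_rate([d for d in data if d[2] == "min"], t)
# The globally "optimal" rule (t = 5) is perfect for the majority but
# wrongly flags every minority member scoring 5-7.
```

No malice is needed: the unfairness emerges purely from the composition of the data and the choice of what to optimise.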

What concerns me is that there doesn’t appear to be a lively debate about the risk of injustice in the way legal IT might develop in the future (not to mention the use of technology with a legal impact in other areas of society). Do we have a modern equivalent of the debate between Lon Fuller and H.L.A. Hart? I am not as close to legal theory as I used to be, so it may already have taken place. If not, are we happy for the legal positivists to win this one by default? (I am not sure that I am.)

Complexity, drivers and modulators in life and the law

This video is fascinating for a host of reasons. In particular, it illustrates a concept that is critical in my continuing series of posts about the legal ecosystem.


The video shows an aggregation of anchovies — a shoal of fish obeying simple rules, but creating a constantly changing unpredictable pattern in the sea (just as a murmuration of starlings does in the air). What I find especially interesting is the way the fish react to humans in their midst.

Studies of fish show that they observe very simple rules when they shoal (as starlings do when they flock):

  1. Move in the same direction as your neighbour;
  2. Remain close to your neighbours;
  3. Avoid collisions with your neighbours.

Armed with these rules, it is possible to simulate the behaviour of shoals. It is even possible to predict large-scale changes in behaviour (such as migration). What is impossible is predicting the precise pattern traced by the shoal itself as it moves through the water. There are simply too many variables to allow this — for all intents and purposes each pattern is unique for the fleeting moment that it exists. This can be demonstrated mathematically.

For any number of items (N), the number of links (L) between pairs of items can be expressed thus:

L = N(N−1)/2
Likewise, the number of patterns (P) that can be generated by connecting items can be calculated thus:

P = 2^L
A simple table shows how the number of patterns grows exponentially as items are added:

Dots Links Patterns
N=4 L=6 P=64
N=10 L=45 P=35 trillion
N=12 L=66 P=73.8 quintillion
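The figures in the table can be checked directly from the two formulas:

```python
# A direct check of the figures above: L = N(N-1)/2 links between pairs
# of items, and P = 2^L patterns (each link either present or absent).

def links(n):
    """Number of links between pairs of n items."""
    return n * (n - 1) // 2

def patterns(n):
    """Number of distinct patterns those links can form."""
    return 2 ** links(n)
```

With twelve items there are already more patterns than there are grains of sand on Earth; the growth from there is vertiginous.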

The number of possible patterns generated by hundreds of fish is inconceivable.

Drivers and modulators

We often refer to actions driving change in life and work. The word ‘driver’ suggests a direct and predictable link between cause and effect: if I do this, the outcome will be that — every time. Drivers of this type do exist, even in some quite complicated systems. When I turn the steering wheel in my car, I need the result to be predictable. It usually is, unless the system is broken or some other factor (ice or gravel, perhaps) has been introduced without my knowledge.

As the number of components in a system increases and the connections between them are loosened, the behaviour of the system becomes less predictable. However, when we see the outcome it often looks inevitable; we are persuaded by hindsight that it should have been predictable.

This can be seen in the video when people swim towards the anchovies. Sometimes the shoal just moves away from the swimmer. Sometimes it parts and rejoins beyond the swimmer. Sometimes it forms a ring around the swimmer. None of these outcomes was predictable, but they all appear to follow the same rules — the fish maintain a constant distance from each other and follow the course of their neighbours, but they keep a greater distance from the alien body (whilst not fleeing from it altogether).

Similar things happen in organisations. Patterns of behaviour might look fairly constant and predictable, but can be disturbed by significant events (a change of leadership, for example, or some external pressure). The consequence of that disturbance is unpredictable before it happens, but may look obvious afterwards — hindsight makes us think that it was inevitable.

It is at this point that the ‘driver’ fallacy comes into being. If we see a number of events that appear to form an inevitable sequence, it is natural to think that repeating the initial cause will drive the same outcome. If the outcome looks good, then we are likely to take the same initial action expecting it to result in the same outcome. This is the thought process that underpins so-called ‘best practice’ and books like Good to Great. These hold out a promise that success will follow emulation of others.

Dave Snowden has suggested that it is more appropriate to think of change in complex systems being effected by modulators rather than drivers.

Imagine that you have a round flat table and around that table are a series of electro-magnets. They can vary in strength and also polarity. Some you control, some are controlled by people you know and some appear to change at random. In the middle of the table are a lot of iron filings. Now as long as the magnets don’t change, the iron filings will form a complex stable pattern. However as the magnets fluctuate in strength the pattern changes. if some of them change polarity then change is sudden and drastic before a new stability emerges. At the same time some of the iron filings get magnetised in turn as they pass through electric currents, making the situation even more complex. I may not even be aware of some modulators until they suddenly come into play and their impact is seen.

The magnets in this case modulate the system. They interact with each other and with the system as a whole, they make it inherently unpredictable. Understanding what modulators are in play will help us understand emergent behaviour of the system, but not to predict its future state. Attributing cause to a limited number of dominant modulators (that is what I think people mean by drivers) is a mistake as the level of interaction is too much. I can build models to simulate the behaviour of the system, however simulation does not lead to prediction.

Modulating change in law

A common type of ‘driver error’ arises when two systems appear to have the same objectives and basic structure. If one of the systems appears to have a good way of achieving a particular outcome, it is natural to consider transposing it into the other system. This sometimes occurs in legal and political systems, when adoption of different approaches to similar problems from foreign jurisdictions might be proposed. It also happens in law firms and other organisations that are apparently similar in scope and purpose. Otto Kahn-Freund skewered the notion of transplanting between legal cultures in his 1973 Chorley Lecture, “On Uses and Misuses of Comparative Law”. His view was that there is a continuum of actions ranging from the organic (rooted in unique cultural, social and political soil) to the mechanical. The closer a particular process is to the mechanical end of the continuum the more likely it is that a transplant will be successful.

I prefer the Cynefin framework to Kahn-Freund’s continuum. It is better rooted in theory, as well as being more subtle: the linearity of a continuum is an inherent flaw. I intend to explore that further in a future blog post.

Whatever explanatory tool one uses, it should be clear that some practices translate better between organisations than others. At the moment, there are a few practical changes that are fairly universally recommended to law firms as panaceas to help them ride out changes in the legal market. These include legal project management, process mapping, fixed-fees, and so on. Some of these will work for some firms, but how can we know at the outset which and why? The answer is that we cannot do so reliably. Instead, it is important to test things out. There should be clarity about how the experiment can be evaluated — what does success or failure look like? — and there should be a safe fall-back position in the case of failure. Anything else is wishful thinking.

An experiment might be fairly small-scale, but it can also be quite audacious, as shown in this video explaining what happened when wolves were reintroduced to Yellowstone National Park.

The sequence of events described here — leading eventually to changes in the physical environment — is unique. It is not possible to say that reintroducing wolves in other places would have the same effect. A number of other factors also played their part:

  • The patterns of grazing behaviour of deer
  • The topography of the park itself
  • The availability of other species (beaver, coyotes, bears, birds, etc)
  • The time taken for plant species to regenerate
  • and so on…

So, for a law firm, introducing new ways of working or doing business might be a really good idea. The success of such changes depends on a host of components responding in particular ways. Beneficial outcomes are neither inevitable nor predictable.

Better outcomes arise from a process like this:

  • An understanding of why particular changes might work (and knowing what ‘working’ means for your firm);
  • Testing the change;
  • Evaluating if it is working or not (by reference to the first step);
  • If it works, continue;
  • If it doesn’t, revert to the previous safe state.
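The loop above can be sketched in code-like terms. This is only a sketch: the firm’s own definitions of ‘apply’, ‘working’ and ‘revert’ are assumptions passed in as functions, and nothing here prescribes what they should be.

```python
# A sketch of the safe-to-fail loop above. What counts as "apply",
# "working" and "revert" is left to the firm; these are assumptions.

def run_experiment(apply_change, is_working, revert):
    """Safe-to-fail experiment: keep a change only if it passes its own test."""
    snapshot = apply_change()   # test the change, keeping hold of the prior state
    if is_working():            # evaluate against the agreed measure of success
        return "adopted"
    revert(snapshot)            # otherwise fall back to the previous safe state
    return "reverted"

# A hypothetical example: a fixed-fee trial that fails its evaluation.
state = {"billing": "hourly"}

def try_fixed_fees():
    previous = dict(state)
    state["billing"] = "fixed"
    return previous

def fees_working():
    return False  # suppose the evaluation shows no improvement

def restore(previous):
    state.clear()
    state.update(previous)

outcome = run_experiment(try_fixed_fees, fees_working, restore)
```

The essential features are the ones the prose insists on: the success measure is agreed before the experiment starts, and the fall-back to the previous safe state is always available.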

The limits of technology and law

One of the first law lectures I attended, over 30 years ago, was given by Avrom Sherr. As we all settled ourselves, full of our importance as future lawyers, Avrom walked into the lecture theatre and lay on his back on the desk at the front of the hall. The hubbub subsided and there was a moment of uncertainty (embarrassment even) before he got to his feet to start the lecture.

The point of this act of theatre, we were informed, was as follows.

In previous centuries, medical students were taught from cadavers. As a consequence, everything they learned was pathology. More recently, medical science had caught up with the idea that most people were actually healthy and that there was probably more to be understood about the workings of healthy bodies than diseased and dead ones; certainly as much that would be useful to those charged with the care of the living.

Legal studies, Avrom argued, had a similar problem. By studying the pathology of the law (as found in centuries of case law), the real life of the law was lost. His impersonation of a cadaver was intended to remind us that although dissection of cases (like anatomy lessons) had a purpose in learning about law, we should not forget that the vast majority of legal actions (making contracts, administrative decision-making, etc) would never even be the subject of litigation, let alone a reported case.

I was reminded of this experience, and the valuable lesson, by a short article in The Lawyer by Peter Kalis (chairman and global managing partner, K&L Gates), “Lawyers as robotic bores? It’s not the English way”. Mr Kalis will be writing a series of articles, and this one sets out his stall.

In future columns I’ll supply some thoughts on our evolving industry. In this inaugural venture, however, I wish to acknowledge my debt to the English legal tradition. In other words, I come in peace.

He singles out three legal academics whose work influenced him whilst at Oxford 40 years ago: HLA Hart, Sir Otto Kahn-Freund, and Mark Freedland.

Why do Professors Hart, Kahn-Freund and Freedland matter here? Their careers nicely illustrate that law is about ideas and the ability to express them, whether in service to clients or to scholarship.

In future columns you’ll see me challenge those who regard lawyers and their firms as anachronistic and those who would reduce us to automata and algorithms. It will be my way of saying thanks to Professors Hart, Kahn-Freund and Freedland, among so many others on your side of the pond.

That description of the purpose of law — “ideas and the ability to express them” — resonates with my experience as a raw undergraduate. After four years of study at Warwick, it was clearer than ever to me that law doesn’t exist to give opportunities to judges and law reporters. As an academic discipline it can be a kind of applied social science — a combination of psychology, ethnology, economics, politics — that may help to describe how social and individual relationships might work out in the presence or absence of power. Unlike many of those other disciplines, law also has a practical life outside the academy. Its practitioners have the privilege of being able to help mediate in those relationships — supporting or opposing power as necessary.

Over the past few years, I have kept coming back to this point about relationships in my work and on this blog. I am more sure than ever that good law, sensitively practised, depends on an understanding of the people involved. That understanding requires the kind of insight into human relationships, desires and needs, power structures, that I suspect most people develop unconsciously.

Critically, though, technology struggles with this aspect of law as lived. It sometimes appears that the most vocal technology advocates forget this. As news this week about the Turing test shows, it is too easy to be blinded by overblown claims of what computers can do. The reality is usually much more limited. In this context, also, we need to know whether a piece of legal technology deals with a pathological legal problem or the real human issue that underlies the call to law. If it doesn’t address the latter, then it will be of severely limited use. That is not to say it will be useless, just that it will only be good for some things.


Transplanting practices between organisations

It is time to revisit the best practices meme. Over the past few months I have been struck by the way the term is sometimes used in an all-encompassing way, without necessarily clarifying its scope.


One relatively recent post of this type “Innovation Builds on Best Practice” was written by Tom Young of Knoco, and refers to their intriguing Bird Island exercise. Over the last ten years, Knoco have been running workshops in which the participants build a tower with a given set of materials, then improve their designs following a number of KM interventions. The decade of experience has been documented in a set of ‘best practices’ which are used as part of the exercise. As the exercise progresses, tower heights increase significantly, and the maximum heights have also grown over the ten year period. (There is a longer account of the exercise in the April 2009 issue of Inside Knowledge magazine.)

Tom defines ‘best practice’ by reference to work done with BP:

A recognised way of [raising productivity or quality level across the board] is to identify a good example of how to do it and replicate that in other locations. We used the term ‘good practice’ in the BP Operations Excellence programme. After we had identified several ‘good practices’, we developed from them, the ‘best practice’. It was only after the ‘best practice’ was identified (and agreed by the practitioners) that it was rolled out and all plants encouraged to implement that method. After all if there was an agreed ‘best practice’ to do an activity, why would you not want to use it? Learning was captured on an ongoing basis and the ‘best practice’ updated periodically.

If I understand him correctly, Tom is comparing performance in an activity, process or task in one part of an organisation with the same activity, process or task elsewhere in the same organisation. In this context, I can see that practices may well be comparable and replicable across silos. (Although, to answer his rhetorical question, I can easily envisage situations where the context may well require a ‘best practice’ to be ignored. Offshore oil extraction will be very different in the different climatic conditions of the Gulf of Mexico and the North Sea.)

However, greater problems arise in attempts to transfer ‘best practice’ between organisations, or even within organisations where more processes or activities are at stake.

More years ago than it is comfortable to recall, I studied Comparative Law. (I even taught it briefly at a later stage.) One of the key readings was an article by Otto Kahn-Freund, “On Uses and Misuses of Comparative Law” (1974) 37 Modern Law Review 1. (The article is not online, but I found a very good summary of its key points, together with a later piece by Gunther Teubner.) Kahn-Freund’s argument is that a law or legal principle cannot be separated from the culture or society that created it, and so even when there is a common objective, transplanting the law from elsewhere will rarely work. There is a useful example in the criminal law. The way in which criminal investigations and prosecutions proceed varies wildly between countries. It would make little sense to take a rule of evidence from the adversarial system used in England and transplant it into the French inquisitorial system. William Twining has elaborated considerably on this argument in an interesting lecture given in 2000 (PDF).

The problem that I have with much of the ‘best practice’ discourse is that it often strays into assertions or assumptions that such practices can readily be transplanted. However, like the law, such transplants will often be rejected.

The other aspect of Tom Young’s post that, frankly, confuses me is his treatment of innovation. Here’s an extended quote.

Now I hear some mention the words like ‘innovation’ and ‘creativity’. Perhaps you are thinking that the use of best practice will inhibit innovation and creativity. For me this is where context is vital.

In some situations, you don’t want innovation or creativity, you just want it done in a standard, consistant fashion.

If you are running a chemical plant, you don’t want the operator to innovate. If you are manufacturing microchips, you don’t want the technicians to innovate. If you are launching a new product into a target market, you perhaps don’t want innovation but standardisation. If you are decommissioning a nuclear power plant, perhaps you don’t want innovation during the work phase.

I am comfortable with this so far. Where things are working well, we should carry on. However, there is always room for improvement, even in simple systems.

Innovation should be built on current best practice. One of the key lessons from the Knoco Bird Island exercise is that if you ask people to do something, they will frequently start based on their own experience. When you illustrate the current best practice that has been achieved by several hundred people before them, they are frequently overwhelmed as to how poor they achievement was compared to what has already been established. 

Where appropriate give them the best practice and ask them to innovate from there. For example if by the introduction of AAR’s the time to change filters has been reduced from 240 hours per screen to 75 hours and a best practice created illustrating how this is achieved, innovate from the best practice figure of 75 hours, not the previous figure of 240 hours but only if it is safe to do so. In some instances innovation must be done in test area, ideas thought out, prototypes created and tested before the agreed modification is installed in the main plant.

My problem here is that I don’t think Tom is describing innovation. These are improvements in existing processes, rather than adaptations to new scenarios where adherence to the current way of doing things would be counter-productive. In a comment to Tom’s post, Rex Lee refers to kaizen. This is something that is often associated with Toyota. To be sure, the lean production processes in Toyota’s main, automotive, division are partly responsible for its continuing viability. However, another critical aspect is the way in which the company has diversified into other areas such as prefabricated housing, which it has been building since the mid-1970s. This response to crisis is an innovation, and goes beyond process improvement. Toyota encourages both through its well-documented suggestion system.

Going back to the Bird Island exercise, it is certainly correct that no sensible business would expect people to embark on tasks or activities without guidance as to the ways in which they have successfully been done before. However, if the business needs a different way to achieve the same outcome, or a different outcome altogether, getting better at doing the same thing isn’t going to cut it.

Tenses and legal dominance

In an Easter-flavoured post on Language Log, Geoff Pullum summarised the argument that the English language has no future tense.

The claim I’m making is not that reference to future time cannot be made in English; of course it can. And the claim is not that will cannot be thus used: probably over 80 percent of its occurrences involve some kind of future time reference. My claim — Huddleston’s claim — is simply that the varied ways we have of referring to future time in English are not part of the tense system; they involve a significant-sized array of idioms and periphrastic work-arounds — and the modal verb will has no particularly privileged place in that array.

Geoff’s article prompted a tangential thought: Is this linguistic anomaly — the extensive use of the verb ‘will’ to denote future time and its potential for confusion with ‘will’ as an expression of volition — one of the factors that has contributed to the dominance of Anglo-American law in commercial practice?

Let’s get the dominance question out of the way first. Bruce MacEwen, in his analysis of last year’s Global 100 list of law firms, points out “the continuing domination of the lists by firms headquartered in the former British Empire.” His explanation?

I believe it’s fairly obvious:  Anglo-Saxon common law has a particular genius for innovation.  Imagine trying to structure a complex multi-jurisdictional project financing vehicle under French Civil Law.  I’m no expert, but I don’t think it could be done.  Not only does the common law presume that the wishes of voluntarily transacting private parties should be honored, every time such a transaction is challenged and either enforced or overturned, we have future guidance for our behavior.

(I agree with Bruce that imperial hegemony is not enough to justify this dominance. Britain’s historical geo-political power is long-gone and was not several orders of magnitude greater than the French, Spanish or Dutch empires: each of these modern nations can claim one firm in the Global 100 list. The USA’s current commercial power is fickle and does not necessarily support the global spread of its law firms.)

But where does the genius for innovation come from? Bruce’s example of structuring a complex transaction suggests the ingenuity of the Anglo-American draftsman. I wonder if the ingenuity originally rests with litigators. The common law tradition depends heavily on oral argumentation. In a language where there is an inherent confusion in the statements people make (when I say that I will do something, can you be certain that I am making a promise rather than a prediction?), the resolution of disputes is almost certain to involve the most imaginative propositions. Those propositions are then reflected in an immense juristic corpus (the common law itself) which is at the heart of the most imaginative contractual drafting, even if only implicitly.

Arguments about alleged promises are at the heart of all legal systems. There are undoubtedly many other elements that contribute to the current Anglo-American dominance in the law, but the special privilege of English-speaking lawyers, whose language captures that argument in its very grammar, surely plays a part.