Like all other areas of life and work, the law has been changed immeasurably by technology. This will doubtless continue, but I am unconvinced by the most excited advocates of legal technology.
The impact of technology has been felt at a variety of levels. Over the last 35-40 years it has changed the way practitioners approach all aspects of their work. Likewise, the changes wrought by technology on personal, social and commercial behaviour and activities have driven changes in the law itself.
These trends will doubtless continue, but predicting the actual changes that they will bring is a fool’s errand.
I recently wrote an article in Legal IT Today, arguing that the most extreme predictions of the capability of legal artificial intelligence would struggle to match the abductive reasoning inherent in creative legal work. In addition to that argument, I am less confident than some that technological development knows no limits; I suspect that the economics of legal IT are not straightforward; and I have a deeper concern that there is little engagement between the legal IT community and generations of legal philosophy.
Limits of technology
One of the touchstones of any technology future-gazing (in any field, not just the law) is a reference to Moore’s Law. I am less certain than the futurologists that we should expect capacity to go on doubling for ever. If nothing else, exponential growth cannot continue for ever.
…in the real world, any simple model that shows a continuing increase will run into a real physical limit. And if it is an exponentially increasing curve that we are forecasting, that limit is going to come sooner rather than later.
What could stop computing power from increasing exponentially? A range of things — the size of the components on a chip may have a natural limit, or the materials that are used could start to become scarce.
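The difference between unchecked doubling and growth that respects a physical ceiling can be made vivid with a toy calculation. This is a sketch with invented numbers (the doubling period and the ceiling are arbitrary, not real chip data): it simply contrasts a naive exponential with a logistic curve that starts at the same rate but flattens as it approaches a hard limit.

```python
import math

def exponential(t, start=1.0, doubling_period=2.0):
    """Capacity if doubling simply continues unchecked."""
    return start * 2 ** (t / doubling_period)

def logistic(t, start=1.0, doubling_period=2.0, ceiling=1000.0):
    """Same early growth rate, but flattening towards a hard limit."""
    r = math.log(2) / doubling_period  # growth rate matching the doubling period
    return ceiling / (1 + (ceiling / start - 1) * math.exp(-r * t))

for years in (0, 10, 20, 40):
    print(years, round(exponential(years)), round(logistic(years)))
```

For the first decade or so the two curves are barely distinguishable, which is why extrapolating from past doubling feels so safe; by year 40 the exponential has reached over a million while the capped curve sits just under its ceiling.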
More interestingly, from the perspective of legal business, the undoubted growth of technology over recent years has not necessarily produced efficiencies in the law, if we use lawyer busyness as a proxy for efficiency. There are far more people employed in the law now than 40 years ago, and they appear to work longer hours. Improved computing capability has produced all sorts of new problems that demand novel business practices to resolve them (knowledge management being one of these).
Nonetheless, it is still possible that future developments will actually be capable of taking on significant aspects of work that is currently done by people. The past is not necessarily a good predictor of the future.
The business challenge
There is currently a lot of interest in the possibility that IBM’s Watson will introduce a new era of legal expert systems. Earlier this month Paul Lippe and Daniel Martin Katz provided “10 predictions about how IBM’s Watson will impact the legal profession” in the ABA Journal. Bruce MacEwen has also asked “Watson, I Presume?” However, one thing that marks out any reference to Watson in the law is a complete absence of hard data.
The Watson team have helpfully provided a press release summarising the systems currently available or under development. Looking at these, a few things strike me. The most obvious is that there are none in the law. There are medical and veterinary applications, and some in retail and travel planning. There are applications that enhance existing IT capability (typically in the area of search and retrieval). But there are none in the law. The generic applications could certainly be used to enhance legal IT, but there is no indication of how effective they might be compared to existing tools. And, most crucially, it is unclear how costly Watson solutions might be. That is where legal IT often struggles.
The business economics of legal technology can be difficult. Medical and veterinary systems have a huge scale advantage: human or animal physiology changes little across the globe, and pharmaceutical effectiveness does not depend significantly on where drugs are administered. By contrast, legal and political systems differ hugely, so that ready-made legal technology often needs to be tailored to fit different jurisdictions. Law firms tend to be small compared to businesses in other areas of professional services, and the demands of ethical and professional rules often restrict the sharing of information. Those constraints can mean that it is hard for all but the largest firms, with considerable volumes of appropriate types of work, to justify investment in the most highly-developed forms of technology. As a consequence, I suspect few legal IT providers will be tempted to pursue Watson or similar developments until they can be convinced that a market exists for them.
Technology, justice and legal theory
My Legal IT piece was a response to an article by David Halliwell. His piece started with a reference to an aspect of Ronald Dworkin’s legal philosophy. Mine was similarly rooted in theory. This marks them out from most of the articles I have read on the future of legal IT. Given the long history of association between legal theory and academic study of IT in the law (exemplified by Richard Susskind’s early work on the use of expert systems in the law), it is disappointing to see so little critical thought about the impact of technology in the law.
As I read them, most disquisitions on legal IT are based on simple legal positivism — the law is presented as a set of rules that can be manipulated in an almost mechanical way to produce a result. By contrast, there is a deeper critique of concepts like big data in wider social discourse. A good example is provided in an essay by Moritz Hardt, “How big data is unfair”:
I’d like to refute the claim that “machine learning is fair by default”. I don’t mean to suggest that machine learning is inevitably unfair, but rather that there are powerful forces that can render decision making that depends on learning algorithms unfair. Any claim of fair decision making that does not address the technical issues that I’m about to discuss should strike you as dubious.
Hardt focuses on machine learning, but his point is true of any algorithm and probably more generally of any technology tending towards artificial intelligence. Any data set, any process defined to be applied to that data, any apparently neutral ‘thinking’ system will have inherent prejudices. Those prejudices may be innocuous or trivial, but they may not be. Ignoring the possibility that they exist runs a risk of unfairness, as Hardt puts it. In the law, unfairness manifests itself as injustice.
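Hardt’s point can be made concrete with a deliberately simplified sketch. All the data below are invented: a decision rule (here, a single score threshold) is tuned to minimise overall error on pooled data, and because the majority group dominates that pool, the fitted rule can be perfectly accurate for the majority while systematically misjudging a minority group whose circumstances follow a different pattern.

```python
# Invented example: a threshold fitted to minimise OVERALL error
# can still treat a minority group much worse, because the majority
# dominates the data the rule is fitted to.

# (score, qualified?) pairs; group A is the majority, group B the minority.
group_a = [(s, s >= 50) for s in range(0, 100)]       # 100 people, qualified from 50 up
group_b = [(s, s >= 30) for s in range(0, 100, 10)]   # 10 people, qualified from 30 up

data = group_a + group_b

def error_rate(threshold, rows):
    """Fraction of rows where 'score >= threshold' disagrees with reality."""
    wrong = sum((score >= threshold) != qualified for score, qualified in rows)
    return wrong / len(rows)

# Choose the threshold that minimises error on the pooled data.
best = min(range(101), key=lambda t: error_rate(t, data))

print("chosen threshold:", best)
print("error on majority:", error_rate(best, group_a))   # 0.0
print("error on minority:", error_rate(best, group_b))   # 0.2
```

The fitted threshold lands exactly where the majority group’s pattern dictates, so the rule makes no mistakes about group A at all, while one in five members of group B is misclassified. No one designed the rule to be unfair; the unfairness is inherited from the data and the optimisation target.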
What concerns me is that there doesn’t appear to be a lively debate about the risk of injustice in the way legal IT might develop in the future (not to mention the use of technology with a legal impact in other areas of society). Do we have a modern equivalent of the debate between Lon Fuller and H.L.A. Hart? I am not as close to legal theory as I used to be, so it may already have taken place. If not, are we happy for the legal positivists to win this one by default? (I am not sure that I am.)