Warning: Lengthy Moral Philosophy Discussion Ahead. Worse luck for you, it’s from an English major who took exactly two Philosophy courses in undergrad and was entirely unsuccessful in trying to penetrate Kant’s Critique of Pure Reason, so govern yourselves accordingly. But it’ll take us a few paragraphs before we get there. First, we talk technology.
In Silicon Valley earlier this month, a conference took place that crystallized many of the current trends and topics in the rapid re-engineering of the legal marketplace. ReInvent Law is a laboratory based at Michigan State University College of Law and sponsored by the Kauffman Foundation that seeks to combine innovations in law, technology, design and delivery to create a new and better legal system. The primary Reinventors are MSU law professors Daniel Martin Katz and Renee Knake, and if you’re not following them on Twitter, you should be.
ReInvent conferences had already been held in Dubai and London (the latter under the Law Tech Camp banner), but the Silicon Valley meeting was a breakout event that deeply connected with many people in the legal market and is still generating conversations. Here’s a roundup of commentary on the event: I especially recommend Ron Friedmann’s live-blog posts, while the report by The American Lawyer’s Aric Press demonstrates that the issues #reinventlaw is exploring are of interest to some of the largest legal enterprises in the world.
I was seriously sorry to miss ReInvent Silicon Valley, and I hope to make it to a future iteration of the event closer to home. I’m a fan of what ReInvent Law is aiming to do and the methods by which it’s doing it (Dan Katz’s work with data and the law is particularly noteworthy). Technology offers us tremendous potential to improve the quality, delivery and accessibility of legal services, partly because technological disruptions are being applied from the bottom up and from the outside in (rather than top-down from within the legal profession, as previous reform efforts have been), and because the application of internet-based technology can provide benefits well beyond its costs.
This is not a unanimous view, of course, and ReInvent Silicon Valley had its share of critics, including Scott Greenfield of Simple Justice. Scott’s post on the subject expresses deep skepticism about the conference’s focus on technology, especially as it relates to the criminal justice system. Scott’s take on these issues will be familiar to readers of his blog, but I’d like to single out one part of his post for further consideration:
The fear is that much of what is being promoted as the future of law will actually come to pass. We will have those paperless offices where we sell virtual legal services unbundled like the widgets they can be. And the prisons will still be filled with people whose computer programs told them they should be free.
It’s not that the people involved in all of this aren’t smart. Indeed, these are some very smart, very dedicated people, but they don’t see the law. Dreams of technological change may be very exciting, but to what end?
That last question is an interesting one, and it will do all of us in the legal marketplace reform movement some good to think it over for a while. What are we aiming to achieve with the growing integration of technology into the legal system? I think Scott may underestimate both the purpose and the impact of these new legal technologies: to reduce costly inefficiencies and improve effectiveness throughout the legal service process; to provide more avenues for people to access legal services; to break the monopolistic tendencies of the legal profession that have served the market so poorly.
But when Scott talks about “making the law actually work better for the sake of human beings, rather than make it point and click,” he reminds us that the end, rather than the means, is what we need to focus on here. And although I don’t think this is a mistake that the ReInvent people are making, nonetheless we are vulnerable to the risk that our newest tools — and some of them promise to be very powerful indeed — may cause us to value the tool more than the task. Automation is meant to serve a purpose, not to be a purpose in and of itself.
This brings me to the central issue I want to examine, and to the philosophical part of our program. Peter Thiel recently delivered a guest lecture at Stanford Law School’s Legal Technology course. You might know Thiel as the co-founder of PayPal, the first outside investor in Facebook, and a generally brilliant fellow worth roughly $1.5 billion. Blake Masters took notes on Thiel’s lecture and the Q-and-A that followed, resulting in an extremely thought-provoking and (for me) unsettling read, because Thiel essentially advocates a greater role for automation and technology in the justice system.
You should read the whole article, but it’s quite long, so here are some key excerpts for present purposes.
Computerizing the legal system could make it much less arbitrary while still avoiding totalitarianism. There is no reason to think that automization is inherently draconian.
Of course, automating systems has consequences. Perhaps the biggest impact that computer tech and the information revolution have had over the last few decades has been increased transparency. More things today are brought to the surface than ever before in history. A fully transparent world is one where everyone gets arrested for the same crimes. As a purely descriptive matter, our trajectory certainly points in that direction. Normatively, there’s always the question of whether this trajectory is good or bad. …
In some sense, computers are inherently transparent. Almost invariably, codifying and automating things makes them more transparent. … Things become more transparent in a deeper, structural sense if and when code determines how they must happen. One considerable benefit of this kind of transparency is that it can bring to light the injustices of existing legal or quasi-legal systems. … If you’re skeptical, ask yourself which is safer: being a prisoner at Guantanamo or being a suspected cop killer in New York City. Authorities in the latter case are pretty careful not to formalize rules of procedure. …
The overarching, more philosophical question is how well a more transparent legal system would work. Transparency makes some systems work better, but it can also make some systems worse. So which kind of system is the legal system? … [Is it] pretty just already, and perfectible like a market? Or is it more arbitrary and unjust, like a psychosocial phenomenon that breaks down when illuminated?
The standard view is the former, but the better view is the latter. Our legal system is probably more parts crazed psychosocial phenomenon. The naïve rationalistic view of transparency is the market view; small changes move things toward perfectibility. But transparency can be stronger and more destructive than that. … Truly understanding our legal system probably has this same effect; once you throw more light on it, you’re able to fully appreciate just how bad things are underneath the surface.
Once you start to suspect that the status quo is quite bad, you can ask all sorts of interesting questions. Are judges and juries rational deliberating bodies? Are they weighing things in a careful, nuanced way? Or are they behaving irrationally, issuing judgments and verdicts that are more or less random? Are judges supernaturally smart people? The voice of the people? The voice of God? Exemplars of perfect justice? Or is the legal system really just a set of crazy processes?
Looking forward, we can speculate about how things will turn out. The trend is toward automization, and things will probably look very different 20, 50, and 1000 years from now. We could end up with a much better or much worse system. But realizing that our baseline may not be as good as we tend to assume it is opens up new avenues for progress.
On the surface, there’s much to like here. It’s difficult to argue that the legal system is not, at least in part, a crazed psychosocial phenomenon, inconsistent and frequently irrational in its operation. There is no shortage of error and bias in the law: Scott Greenfield might point to prosecutorial malfeasance and systemic discrimination, whereas I might point to the rampant inefficiency of law practice, the turf-guarding monopolism of lawyer market regulation, and the fundamental conflicts between the traditional law firm business model and the best interests of clients. Why not introduce into this highly imperfect system the discipline, objectivity and predictability of the algorithm?
And yet … something about Thiel’s narrative bothered me. Just the fact that the word “totalitarianism” came up in this discussion is enough to raise red flags about the possible risks we run here. Humans have a long-held apprehension about developing technologies that will eventually destroy them: I wrote about this in Blawg Review #252 back in 2010, when I tracked science-fiction tropes about technophobia from Frankenstein to The Matrix. Literature abounds with nightmarish future states in which our machines, given the power to execute the law, eventually become the law unto themselves. If we have a generalized dislike of bureaucracy, it’s because we fear the spectre of a faceless, mindless, autonomous system that knows who, what, where, when, and how, without ever knowing or caring why. And history supplies us with good reason to feel that way.
But I was also disturbed by what I felt was a deeper problem: that while this approach was clearly intended as a moral good that would improve fairness and correct injustices, there was nonetheless something vaguely wrong about the whole thing. So I did what anyone would do in these circumstances: I consulted a moral philosopher, in my case Dr. Richard Matthews of King’s University College at the University of Western Ontario, who also happens to be an old and great high school friend. With his permission, here are excerpts from his illuminating response:
The article is deeply uneasy with human subjectivity. … The discussion of AI and improvements in legal computation suggests the possibility of improving on this, of making the legal system more rational. To be fair, he acknowledges that things could get better or worse with the introduction of AI. But what he does not notice is that the drive is to eliminate human fallibility as such from the process of legal reasoning — to render human judgment irrelevant.
Suppose that the trend towards legal computation is “successful,” whatever that would mean…. The consequence will be reduced human involvement in the most important aspects of the legal system, and thus increasing irrelevance of human beings as subjects in the process. This is, no matter what the ultimate results of the process are, the further objectification of human beings. Humans become the objects of judgments, not subjects.
What are some of the practical implications of this? Well, you have been mapping many of them in your blog already — the elimination of highly skilled and highly trained lawyers and judges from participation in a meaningful human activity; the organization and maintenance of law through mechanization of the kind that this article identifies; and by taking the labour that you cannot be bothered to mechanize and finding the least-well paid and most desperate people to do it. Obviously there are many others, but I find none of them attractive.
This is a mapping and reshaping of human life and its possibilities which has, at its root, the controlling and reshaping of human populations. The controlling will not produce better human beings or increased obedience to law. Instead, it always generates resistance. …
Such technologies also concentrate power in the hands of an increasingly small group of people, since they own and thus control access to the AIs. The issue of transparency is dodgy, in any event. We have to ask: To whom are computers transparent, since 99.9% of the world doesn’t have a clue what a computer is, even as we use them. Also, the computer does not function in a politically neutral environment. I would be highly surprised to find transparency applied to powerful individuals in the same way that it will be applied to the vulnerable.
I think Richard has struck several nails on the head here, which is why I’ve gone to such lengths to address this subject: although the likelihood of the risk that an increasingly automated justice system presents is small, its potential impact is not. I’m fond of saying that lawyers were invented to serve the law, not the other way around. Well, the law was developed to serve people, not the other way around, and one of the services it’s meant to deliver is to support and extend the realm of human dignity. Humans aren’t always great at sustaining our own and others’ dignity; but we do try, here in the law, to accomplish that, and sometimes we succeed. Machines aren’t good at it at all.
Rest assured, I remain a strong proponent of improving and expanding the role of systems, processes and technology in the business of law and, to a more limited degree, in the justice system itself. The problem arises when we give in to the temptation to let these systems run loosely supervised, or not supervised at all — and that temptation is real, because every mechanized process tells us, “Go on, take a break, leave it to me, I’ve got this handled” — and, hard-pressed for time or money, we often acquiesce. Not everything requires watchful human guidance, but some things do, and the law is one of them.
The word “autonomy” comes from the Greek autonomos, which means “independent, living by one’s own laws.” (Emphasis added.) The implications of that definition for this discussion are too strong for me to pass up: these are our laws, meant for our good, and Peter Thiel notwithstanding, I recommend that we remain highly vigilant about and directly involved with their application.