Warning: Lengthy Moral Philosophy Discussion Ahead. Worse luck for you, it’s from an English major who took exactly two Philosophy courses in undergrad and was entirely unsuccessful in trying to penetrate Kant’s Critique of Pure Reason, so govern yourselves accordingly. But it’ll take us a few paragraphs before we get there. First, we talk technology.
In Silicon Valley earlier this month, there transpired a conference that crystallized many of the current trends and topics regarding the rapid re-engineering of the legal marketplace. Reinvent Law is a laboratory based at the Michigan State University College of Law and sponsored by the Kauffman Foundation that seeks to combine innovations in law, technology, design and delivery to create a new and better legal system. The primary Reinventors are MSU law professors Daniel Martin Katz and Renee Knake, and if you’re not following them on Twitter, you should be.
ReInvent conferences had already been held in Dubai and London (the latter under the Law Tech Camp banner), but the Silicon Valley meeting was a breakout event that deeply connected with many people in the legal market and is still generating conversations. Here’s a roundup of commentary on the event: I especially recommend Ron Friedmann’s live-blog posts for your review, while the report by The American Lawyer’s Aric Press demonstrates that the issues #reinventlaw is exploring are of interest to some of the largest legal enterprises in the world.
I was seriously sorry to miss ReInvent Silicon Valley, and I hope to make it to a future iteration of the event closer to home. I’m a fan of what ReInvent Law is aiming to do and the methods by which it’s doing it (Dan Katz’s work with data and the law is particularly noteworthy). Technology offers us tremendous potential to improve the quality, delivery and accessibility of legal services, partly because technological disruptions are being applied from the bottom up and from the outside in (rather than top-down from within the legal profession, as previous reform efforts have been), and because the application of internet-based technology can provide benefits well beyond its costs.
This is not a unanimous view, of course, and Reinvent Silicon Valley had its share of critics, including Scott Greenfield of Simple Justice. Scott’s post on the subject expresses a deep skepticism about the conference’s focus on technology, especially as it relates to the criminal justice system. Scott’s take on these issues will be familiar to readers of his blog, but I’d like to single out one part of his post for further consideration:
The fear is that much of what is being promoted as the future of law will actually come to pass. We will have those paperless offices where we sell virtual legal services unbundled like the widgets they can be. And the prisons will still be filled with people whose computer programs told them they should be free.
It’s not that the people involved in all of this aren’t smart. Indeed, these are some very smart, very dedicated people, but they don’t see the law. Dreams of technological change may be very exciting, but to what end?
That last question is an interesting one, and it will do all of us in the legal marketplace reform movement some good to think it over for a while. What are we aiming to achieve with the growing integration of technology into the legal system? I think Scott may underestimate both the purpose and the impact of these new legal technologies: to reduce costly inefficiencies and improve effectiveness throughout the legal service process; to provide more avenues for people to access legal services; to break the monopolistic tendencies of the legal profession that have served the market so poorly.
But when Scott talks about “making the law actually work better for the sake of human beings, rather than make it point and click,” he reminds us that the end, rather than the means, is what we need to focus on here. And although I don’t think this is a mistake that the ReInvent people are making, nonetheless we are vulnerable to the risk that our newest tools — and some of them promise to be very powerful indeed — may cause us to value the tool more than the task. Automation is meant to serve a purpose, not to be a purpose in and of itself.
This brings me to the central issue I want to examine, and to the philosophical part of our program. Peter Thiel recently delivered a guest lecture at Stanford Law School’s Legal Technology course. You might know Thiel as the co-founder of PayPal, the first outside investor in Facebook, and a generally brilliant fellow worth roughly $1.5 billion. Blake Masters took notes on Thiel’s lecture and the Q-and-A that followed, resulting in an extremely thought-provoking and (for me) unsettling read, because Thiel essentially advocates a greater role for automation and technology in the justice system.
You should read the whole article, but it’s quite long, so here are some key excerpts for present purposes.
Computerizing the legal system could make it much less arbitrary while still avoiding totalitarianism. There is no reason to think that automization is inherently draconian.
Of course, automating systems has consequences. Perhaps the biggest impact that computer tech and the information revolution have had over last few decades has been increased transparency. More things today are brought to the surface than ever before in history. A fully transparent world is one where everyone gets arrested for the same crimes. As a purely descriptive matter, our trajectory certainly points in that direction. Normatively, there’s always the question of whether this trajectory is good or bad. …
In some sense, computers are inherently transparent. Almost invariably, codifying and automating things makes them more transparent. … Things become more transparent in a deeper, structural sense if and when code determines how they must happen. One considerable benefit of this kind of transparency is that it can bring to light the injustices of existing legal or quasi-legal systems. … If you’re skeptical, ask yourself which is safer: being a prisoner at Guantanamo or being a suspected cop killer in New York City. Authorities in the latter case are pretty careful not to formalize rules of procedure. …
The overarching, more philosophical question is how well a more transparent legal system would work. Transparency makes some systems work better, but it can also make some systems worse. So which kind of system is the legal system? … [Is it] pretty just already, and perfectible like a market? Or is it more arbitrary and unjust, like a psychosocial phenomenon that breaks down when illuminated?
The standard view is the former, but the better view is the latter. Our legal system is probably more parts crazed psychosocial phenomenon. The naïve rationalistic view of transparency is the market view; small changes move things toward perfectibility. But transparency can be stronger and more destructive than that. … Truly understanding our legal system probably has this same effect; once you throw more light on it, you’re able to fully appreciate just how bad things are underneath the surface.
Once you start to suspect that the status quo is quite bad, you can ask all sorts of interesting questions. Are judges and juries rational deliberating bodies? Are they weighing things in a careful, nuanced way? Or are they behaving irrationally, issuing judgments and verdicts that are more or less random? Are judges supernaturally smart people? The voice of the people? The voice of God? Exemplars of perfect justice? Or is the legal system really just a set of crazy processes?
Looking forward, we can speculate about how things will turn out. The trend is toward automization, and things will probably look very different 20, 50, and 1000 years from now. We could end up with a much better or much worse system. But realizing that our baseline may not be as good as we tend to assume it is opens up new avenues for progress.
On the surface, there’s much to like here. It’s difficult to argue that the legal system is not, at least in part, a crazed psychosocial phenomenon, inconsistent and frequently irrational in its operation. There is no shortage of error and bias in the law: Scott Greenfield might point to prosecutorial malfeasance and systemic discrimination, whereas I might point to the rampant inefficiency of law practice, the turf-guarding monopolism of lawyer market regulation, and the fundamental conflicts between the traditional law firm business model and the best interests of clients. Why not introduce into this highly imperfect system the discipline, objectivity and predictability of the algorithm?
And yet … something about Thiel’s narrative bothered me. Just the fact that the word “totalitarianism” came up in this discussion is enough to raise red flags about the possible risks we run here. Humans have a long-held apprehension about developing technologies that will eventually destroy them: I wrote about this in Blawg Review #252 back in 2010, when I tracked science-fiction tropes about technophobia from Frankenstein to The Matrix. Literature abounds with nightmarish future states in which our machines, given the power to execute the law, eventually become the law unto themselves. If we have a generalized dislike of bureaucracy, it’s because we fear the spectre of a faceless, mindless, autonomous system that knows who, what, where, when, and how, without ever knowing or caring why. And history supplies us with good reason to feel that way.
But I was also disturbed by what I felt was a deeper problem: while this approach was clearly intended as a moral good that would improve fairness and correct injustices, something about the whole enterprise nonetheless felt vaguely wrong. So I did what anyone would do in these circumstances: I consulted a moral philosopher, in my case Dr. Richard Matthews of King’s University College at the University of Western Ontario (who also happens to be an old and great high school friend). With his permission, here are excerpts from his illuminating response:
The article is deeply uneasy with human subjectivity. … The discussion of AI and improvements in legal computation suggests the possibility of improving on this, of making the legal system more rational. To be fair, he acknowledges that things could get better or worse with the introduction of AI. But what he does not notice is that the drive is to eliminate human fallibility as such from the process of legal reasoning — to render human judgment irrelevant.
Suppose that the trend towards legal computation is “successful,” whatever that would mean…. The consequence will be reduced human involvement in the most important aspects of the legal system, and thus increasing irrelevance of human beings as subjects in the process. This is, no matter what the ultimate results of the process are, the further objectification of human beings. Humans become the objects of judgments, not subjects.
What are some of the practical implications of this? Well, you have been mapping many of them in your blog already — the elimination of highly skilled and highly trained lawyers and judges from participation in a meaningful human activity; the organization and maintenance of law through mechanization of the kind that this article identifies; and by taking the labour that you cannot be bothered to mechanize and finding the least-well paid and most desperate people to do it. Obviously there are many others, but I find none of them attractive.
This is a mapping and reshaping of human life and its possibilities which has, at its root, the controlling and reshaping of human populations. The controlling will not produce better human beings or increased obedience to law. Instead, it always generates resistance. …
Such technologies also concentrate power in the hands of an increasingly small group of people, since they own and thus control access to the AIs. The issue of transparency is dodgy, in any event. We have to ask: To whom are computers transparent, since 99.9% of the world doesn’t have a clue what a computer is, even as we use them. Also, the computer does not function in a politically neutral environment. I would be highly surprised to find transparency applied to powerful individuals in the same way that it will be applied to the vulnerable.
I think Richard has struck several nails on the head here, which is why I’ve gone to such lengths to address this subject: although the probability of the risk that an increasingly automated justice system presents is very small, the potential impact of that risk is not. I’m fond of saying that lawyers were invented to serve the law, not the other way around. Well, the law was developed to serve people, not the other way around, and one of the services it’s meant to deliver is to support and extend the realm of human dignity. Humans aren’t always great at sustaining our own and others’ dignity; but we do try, here in the law, to accomplish that, and sometimes we succeed. Machines aren’t good at it at all.
Rest assured, I remain a strong proponent of improving and expanding the role of systems, processes and technology in the business of law and, to a more limited degree, in the justice system itself. The problem arises when we give in to the temptation to let these systems run loosely supervised, or not supervised at all — and that temptation is real, because every mechanized process is always telling us, “Go on, take a break, leave it to me, I’ve got this handled” — and, hard-pressed for time or money, we often acquiesce. Not everything requires watchful human guidance, but some things do, and the law is one.
The word “autonomy” comes from the Greek autonomos, which means “independent, living by one’s own laws.” (Emphasis added.) The implications of that definition for this discussion are too strong for me to pass up: these are our laws, meant for our good, and Peter Thiel notwithstanding, I recommend that we remain highly vigilant about and directly involved with their application.
shg
An interesting and thoughtful post, Jordan. One of the Reinvent supporters left me a comment suggesting that we likely have more in common than I realize. I don’t quite doubt that, but wonder why skeptics are kept at arm’s length rather than invited to participate. I note, with a wry smile, that Dan Katz didn’t include my post in his compendium of commentary. How does that song go, “where never is heard a discouraging word…”
Preaching to the choir is not only an ineffective means of vetting new concepts, but perhaps the worst way to achieve the very goals of the movement, provided that there is ultimately a concern that the legal system will serve people better in the future rather than just have more buttons and flashing lights.
I await an invitation to engage in lively discussion, but I won’t hold my breath.
Sam
Scott, the Reinvent Law Silicon Valley conference offered free tickets to everyone. The next event, ReInvent Law London, takes it one step further — it is currently accepting talk proposals from everyone, and the public will vote on who gets to talk. This is not, and never has been, an invitation-only thing.
shg
Well, sure, I could enter a beauty pageant in the hope of spending more time on my own dime to attend whatever is the latest flavor of Reinvent the Future of the New Normal of Law conference to see if the cheerleaders want to invite a skunk to the garden party, just like those who are trying to sell their latest, greatest concept, gadget, idea.
Or, I could question why those who want to sell their latest, greatest concept, gadget, idea fear subjecting it to scrutiny rather than just gathering the choir and preaching away. If the concept, gadget, idea had merit, one would think it could withstand the rigors of real-world skepticism. But hey, why wouldn’t a skeptic want to enter the beauty pageant? After all, aren’t 6-minute segments of the deepest futurist thought all that’s needed to Reinvent Law?
Jon Busby
Jordan
The bit I don’t get is when this all became all or nothing, tech or no tech, automate or manual, off on, up down etc.
I encounter this attitude a lot and I am quite anti…and I am supposed to be the bloody technologist.
Technology is all about appropriateness…what is appropriate for the client, the lawyer and the situation. It is mapped onto a problem if it fits that problem, not because it can be.
Technology may be used a lot, it may not be used at all…the point being it can be if it needs to be.
Technology and content matter far less than relationships and trust. Suddenly technology seems to have taken control of this sector, mesmerising it…that needs to be corrected.
On a separate personal note…man do I hate the expression “choir.” It all starts to sound a bit too evangelical, a bit too zealot-like, a bit too…sorry to be blunt…superior. It sounds like an exclusive club with all the hierarchy and ego that comes with the club culture.
Reinvent Law are doing interesting things and they should be applauded for kicking the door in but language at this stage is actually quite important and (as I said at their event last year) whilst innovation is important…so is communication. And communication involves a lot of listening. As a sector you have a lot, heck of a lot to learn about communication before you will get the real benefits of innovation.
Time will tell on Reinvent and not just them but a whole list of ‘pseudo-influencers’ because if they are not moving the debate on they will be judged accordingly and overtaken by others. Making noise is not the measure here. Creating relevance is.
I hope they do because this sector desperately needs people who are pushing on the edges rather than sitting in the centre.
I hope that helps.
Jon
Richard Granat
Jordan:
I agree with Jon Busby that the use of technology in legal service delivery is all about appropriateness. For a criminal defense practice like Scott Greenfield’s, face-to-face interaction with clients is obviously crucial. But there are all kinds of law practices and all kinds of legal processes. Some kinds of law practice have a high information content that can be digitized and automated; I am thinking of document-intensive law practices. There are technology processes that can make these kinds of practices more efficient, increasing lawyer productivity and perhaps reducing legal pricing, which in turn could expand the legal market. It’s a bit of a straw-man argument to assert that because legal automation may not be relevant to a criminal law practice, it is not relevant to other kinds of law practice — particularly to those legal services which lend themselves to using a software application to assist the work of a lawyer.
I actually attended the Peter Thiel lecture at Stanford Law School. The discussion was at a very high theoretical level and dealt with scenarios that are not likely to happen in the near or intermediate future. The lecture made for a good classroom discussion, but it was not relevant to the current problems that the broad middle class has in accessing legal services at an affordable price.
The fact is that in the domain of the delivery of personal legal services to the average American family, the US legal profession has priced itself beyond the reach of 90% of the US population. The moderate- and middle-income market for legal services is not “clearing” as to price, because the providers have a monopoly on supply and alternative solutions cannot be tested or tried easily.
The problem of access to legal services for the broad middle class at prices they can afford is the problem that requires a solution of some sort. Solutions can be lower-cost substitutes for a lawyer’s services, when appropriate, such as trained lower-cost non-lawyer providers, legal digital applications distributed over the Internet and through mobile platforms, and online dispute settlement. Lawyers can also adopt these approaches and technologies to offer software-assisted “unbundled legal services,” streamlining their service offerings and reducing prices without affecting their profit margins.
Jon Busby
Thanks Rich.
Lawyers seem too keen to blanket one solution onto lots of very different problems.
Where I see the future with technology is in the shift of routine process down the firm food chain and eventually out for the client to do. The intellectual bits (advice, recommendation, risk analysis, etc.) will increasingly move towards the lawyer. Technology will drive this break-up.
Clients will be increasingly comfortable with this, partly because they will educate themselves on the benefits of doing it this way and also because we are all becoming data inputters. Lawyers will accept it because by moving cost out you can start to fix your pricing more easily and reduce risk exposure.
This is what I call inputters versus interpreters. Smart lawyers are positioning themselves more as interpreters and less as inputters.
In many (but not all) current legal processes it is the inefficiency of process that is drowning the value of intellect.
The difference between then, now and in the future will be not just technology but technology which involves the client.
I appreciate that it may be a bit of a head melt for many lawyers and goes completely against everything they know and how they have been trained…but it is also inevitable.
Jon
James F. Ring
Great article, and a great series of comments. I agree with Richard Granat’s remarks and note that his work is currently being used to help people in the real world. I also agree with Scott H. Greenfield (SHG)’s observations to the effect that – whether it’s due to apathy, self-interest, or censorship – much of what gets published and said about the potential of technology to reinvent the law appears to be premised upon a very narrow and idealistic view of what the legal system is and how and why it works in actual practice. Thanks for the thoughtful discussion.
Dominique Martin
“Reinventing” is the buzz word and a serious overstatement; there’s nothing being reinvented. Rather, it is a continuous evolution of technology seeping into all layers of the legal industry. If the industry is a car with a manual transmission, it is now exploring the efficiencies of an automatic. No biggie.
In addition, the industry is attempting to expand its menu board with approaches such as unbundling, DIY sites, etc., etc. Again, to place all this under “reinventing” is giving the process way too much credit. To attach crisis-solving qualities to this evolution is delusional and, in some cases, a clever sleight of hand.
Its impact on the “underserved” (another buzz word) is and will be negligible.
Richard Granat has it right when he opines that price is a core factor; access to legal services is simply beyond a majority’s budget. Legal Aid offices in the country have more requests for help than they can handle and 50% to 60% of these requests are being denied mostly because the consumer isn’t poor enough (125% rule). Across the board legal services are simply overpriced and that is not a technology issue but a product of a self-regulated industry that believes it is worth charging an amount most people cannot afford.
Sure, the right technology can induce savings but those savings must be passed on to the consumer — if those savings are used to protect profit margins or in the case of Big(ger) Law, partner income then the underserved group will only grow.
Ann Lee Gibson
Jordan, what an excellent post (thank you) and what an excellent discussion by commenters. I appreciate especially that values (not the same as “value”) and technology are being addressed in the same conversation. For me, so much of what is off-putting about the (rapidly aging) new normal conversation is that it is managed by people who do not practice law and, frankly, don’t seem to have much respect for those who do or for their clients’ actual needs. It seems that, with this blog post, the conversation about law’s future is digging a bit deeper than merely demonizing BigLaw and yearning for the Legal Technological Singularity. Again, thanks to all for your comments.