AI Esq. Issue Two

GenAI in the Legal Industry

Well, I’ve crossed the most important threshold of any newsletter—sending out a second issue! There are so many interesting developments in genAI and such a variety of insightful commentary that it’s difficult to choose what to highlight here. So for this month at least, I’ve laser-focused on the legal industry. I hope you enjoy.

Notable Finds

There have been a whole slew of research papers evaluating the abilities of LLMs like GPT to do what lawyers or law students do. But this one really jumped out at me (hat tip to the Brainyacts AI newsletter). Obviously, strong performance in various studies notwithstanding, there are significant barriers to automating legal work. At the same time, this paper finds strikingly impressive performance by these models on a core, and expensive, legal task.

In case perusing academic studies isn’t your thing, here are a few juicy quotes:

In evaluating the efficiency of contract review, our analysis . . . indicates a substantial difference in the speed of contract review between LLMs and human reviewers. Specifically, the fastest LLM, Palm2 text-bison, completed the contract review tasks in an average of 0.728 minutes. This contrasts sharply with the average time of 56.17 minutes for a Junior Lawyer and 201 minutes for an LPO. Such a disparity not only highlights the superior processing capabilities of LLMs but also suggests a paradigm shift in how contracts can be reviewed.

Martin et al.

The data reveal that the cost per contract review for LLMs is significantly lower than that for human reviewers. For instance, while a Junior Lawyer incurs an average cost of 74 dollars per contract review, the fastest LLM performed the same task for approximately 2 cents.

Martin et al.

The results show LLMs demonstrate comparable, if not superior, performance in determining legal issues when compared to Junior Lawyers and LPOs. However, an LLM’s ability to locate issues within contracts, particularly where a standard is not present, is model-dependent and may not consistently outperform human practitioners, highlighting the importance of selecting the right model for the legal task.

Martin et al.

Reading through this recent report from Lexis on genAI use in the UK legal community, I was struck by this graph showing the rise in usage over the past few months:

I had a two-step reaction to this data. My initial reaction was a common one when I’m reading about AI: wow, this technology is catching on quickly! The fact that a quarter of the attorneys surveyed have tried some kind of genAI tool in their work is wild, given that this technology has only been widely available for around a year. But as I looked a little closer at the graph, I also noticed that the daily-user numbers are more in line with what I would expect from a new technology still finding its footing in a professional sphere, growing from 1% to 3% over the last six months.

Questions I’m Mulling

How can we identify the most promising use cases for genAI in legal services, especially litigation?

Are the reasons that other potentially helpful technologies have failed to achieve widespread adoption in legal practice equally applicable to adoption of genAI?

I’ve been thinking a lot lately about which of my lawyerly tasks & workflows are well suited for generative AI tools, and which may not be. The good news is that for those that aren’t a great fit for these shiny new tools, there are often preexisting processes or technologies that might be helpful. On the other hand, the fact that these other solutions have been available but are not widely adopted suggests significant implementation barriers.

I really like this mind map offering a high-level summary of the sorts of data tasks that might be productively addressed by genAI vs. more established data science methods:

More specific to legal, these thoughts from a legal tech specialist (and former biglaw bankruptcy attorney) really resonated with me:

I wonder where generative AI leaves some of the workflows that probably can't be (best) solved with generative AI. For example, bundling of documents (e.g. court binders), sharing large files externally or project management tools.

In my view, the majority of time saved in legal could come from improving these kinds of menial processes. Solutions have been in place for years, but are not anywhere near as well adopted as they should be.

For example, I would say that at least 50% of the lawyers I speak to are still preparing PDF court binders/bundles using Adobe Acrobat, manually, rather than dedicated tools that would reduce the exercise to minutes.

I really hope we still try and fix these problems alongside looking at AI-driven ways of solving previously unsolvable problems.

Jack Shepard

I also wonder if some of the reasons why potentially helpful technologies are not adopted by law firms may be less applicable to some generative AI solutions.

For example, I recall some of the challenges in implementing a more advanced document management system at my firm. One persistent challenge was trying to get a slew of attorneys with their own habits, practices, and teams to all agree to a particular solution. Real buy-in was needed, because the new technology required everyone to learn and follow some new processes. It is difficult to get lawyers to spend significant amounts of time training on new technology, and lawyers are very sensitive to anything that disrupts their established workflows. But the new document management system would only work well if everyone understood how to use it and adopted it. Technology changes like this are possible, but they tend to be a bit rocky, and they require a clear need, sustained support, and thoughtful implementation.

Also, as many others have observed, the specific economic structure of law firm partnerships may lead to less empowered IT departments, more decentralized management structures, and less capital for extensive technology projects, compared to other types of businesses.

No wonder there are many technological solutions available that don’t get widely adopted!

It seems to me, though, that genAI technologies may not face these familiar barriers in the same way.

For example, at least some of the enterprise genAI solutions available allow firms to purchase as few or as many licenses as they want. And the nature of the technology means that it does not need to be comprehensively adopted across an organization to be used effectively. So I don’t have to convince all of the lawyers in my litigation section to adopt a generative AI tool before putting it into practice in my own work. I just need approval to purchase one low-cost license for the tool I am interested in and start finding ways it can be useful to me, consistent with my organization’s acceptable use policies and security standards. The cost to my firm is tiny, and I don’t need buy-in from all the partners with their various practices, workflows, and interest levels. My firm doesn’t need to roll out any formal training program; the best way for me to learn how to use the tool is to fiddle with it, with the support of whatever vendor resources are available, and find creative ways to apply it. And if my experiment is successful, usage can grow organically at the pace of interest and need within the organization.

I do think there is a need for folks who are interested in this technology to be thoughtful and creative in searching out ways to apply it to their work. It is clear that the technology is powerful and quickly developing; it is equally clear that its particular quirks and shortcomings create difficulties in using it to support legal work.

To wit, I love this pithy statement from one of my very favorite legal technology commentators:

On the topic of quality control, recent soundbites from three separate podcasts strike a chord when combined:

—AI is like having infinite interns.

—You don’t actually ship what the intern does. At least not without checking.

—Directionally accurate in legal will get you fired.

GenAI is amazing, but we do operate in an industry where the tolerance for error is extremely low, and the potential impacts of error are extremely high.

Peter Duffy

Really cuts to the heart of the issue!

Well, that’s what I’ve got for this month, folks. If you have reactions, questions, or curiosities about any of these issues, please reach out. I’d love to hear from you.