
Is all this AI stuff just hype?

A couple critical perspectives on our current moment in genAI

Welcome to the April edition of AI, Esq. This month, I am most curious about various critical perspectives on genAI, including its use in the legal sector. I also have a couple practical resources I’d like to share.

And, right down at the bottom of this newsletter, I’ve got some information on an upcoming workshop where I’ll be speaking on the legal dimensions of genAI. I’d love to see you there!


Notable Finds

This article has been making the rounds, and I think it's an important read for anyone who is interested in using genAI in legal work. A sizable law firm’s GC lays out his firm’s case for banning the use of genAI for core legal tasks.

I should say right off the bat that I disagree with almost everything expressed in this article. I think it shows a remarkable lack of curiosity about what genAI could do to support legal professionals. In my opinion, the writer attacks a straw man: "should generative AI be used like a button that you press to generate a brief with all of the supporting research?" Of the folks I know who are interested in these tools, I'm not aware of any using or planning to use genAI in that fashion.

Also, the author essentially suggests a policy of banning the use of these tools to help produce written work product and research. I would consider such a restrictive policy a big mistake for practically any firm. I hope it comes across in this newsletter (or if you've seen me speak) that there are myriad ways for these tools to support attorney workflows and processes, including research and writing. I would also suggest that, given the large number of genAI product offerings and the many access points (web apps, smartphone apps, and so on), it would be difficult for any law firm IT department to actually enforce such a ban. Driving professional AI use underground strikes me as both a likely and an unfortunate result of any policy that relies on a comprehensive ban.

However, I think it's very important to read this article. I find the author’s perspective quite understandable, and I think it is a perspective that we can all plan to encounter at least to some extent from our colleagues and our clients. Also, for me it has been important to expose myself to as many critical perspectives on genAI as possible, to leaven my natural enthusiasm for the area and to balance out all the positive coverage (up to and including hype) that I consume.

Over the past couple of months I have run across another line of critical analysis that I find interesting. In short, many have observed that since the release of ChatGPT, investment in genAI projects and related infrastructure has exploded. At the moment, at least, the revenues being brought in by the various players in genAI, from frontier model builders like OpenAI, Google, and Anthropic to the legion of startups that have arisen in the space, are paltry in comparison to this massive investment. The question has repeatedly been asked whether this investment, and the accompanying interest in genAI, will look in hindsight like mostly hype, with most of these investments proving unsuccessful.

My favorite treatment of this issue so far is a blog post by Tabrez Syed of Boxcars AI. [Full disclosure: I recently met Tabrez, and we are collaborating on some upcoming workshops; more on that below.]

Syed draws parallels between the 1990s telecom boom and the current genAI hype cycle to talk about the lifecycle of new technologies and associated speculative bubbles. He observes that in the speculative frenzy surrounding new internet technology and telecom deregulation in the mid-90s, a massive amount of infrastructure overbuild occurred (which famously led to rough outcomes for firms and investors in the dot-com crash).

He likens the internet frenzy to the recent massive infrastructure investment associated with genAI. However, Syed is not especially critical of this investment; he views it as a necessary phase in the technology lifecycle. He points out, for example, that even though the speculative bubble around internet infrastructure burst in the early 2000s, that over-investment yielded broad benefits over the long term, enabling innovative businesses that would not have been possible otherwise (e.g., YouTube). Although other commentators have certainly advanced similar perspectives, I think Mr. Syed offers a particularly thoughtful and engaging look at the situation: urging caution for individual market actors during this period, while recognizing the exciting possibilities for us collectively over a longer timeframe. The technology will continue to develop, and new applications will be discovered and refined. In Syed's view, this process is likely to be supported by any infrastructure over-investment that occurs now, even if that investment does not turn out to be fruitful for the individual investors.

This article offers a comprehensive overview of the current state of genAI, and another perspective on where the field might be going.

There is a lot more to this piece, but I really liked the doom & gloom roundup that it opens with:

Growth, revenue, and margins are underwhelming. Visits to AI sites have stalled. Sequoia, a VC firm, estimates that in 2023 companies spent $50B on Nvidia hardware but only brought in $3B in revenue. AI startup valuations have been much higher than they should be. The low gross margins raise questions about profits and cloud providers are tamping down expectations.

High-profile startups are starting to fail. InflectionAI, a well-funded private model-building company, is being dismantled; Microsoft is picking up the pieces, including ex-CEO Mustafa Suleyman. StabilityAI’s future is unstable, to put it lightly, after founder Emad Mostaque’s downfall.

Enterprises have security doubts and deployment doubts. OpenAI’s GPT store is an utter failure. After an initial spark of interest, people are getting bored. NYT journalist Ezra Klein says he can’t figure out how to use the tech in his day-to-day job. Economist Tyler Cowen says the use of the tools among his fellow academics “has plateaued.” Experts consider AI tools a “fun distraction” but not useful or productivity-enhancing (and when they are, it doesn’t always go well).

Finally, most people don’t really care. The general public knows about ChatGPT but nothing else. Among those who do, the vast majority use the obsolete GPT-3.5, unwilling to swap to better successors, like GPT-4, Gemini, and Claude 3 for a few bucks. Among those who do, it’s the unscrupulous who are sadly getting the most value—to the detriment of people, the internet, and our culture.

Alberto Romero

If you are interested in digging into this topic, I highly recommend clicking through the links above. There is so much there!

Overall, the author’s opinion is a common one—yeah, there has been some overhype of this technology. But this concluding comment he makes really resonated with me:

The optimistic conclusion we can take away is that a calmer, non-hype stage will allow the much-needed adjacent work (technical, social, ethical work) to be done—just like it happened with the printing press, electricity, and the internet.

Alberto Romero

It’s easy to get excited about new technological developments, but I think this quieter implementation work that Romero is referring to is what’s really interesting. Regardless of how fast genAI frontier models develop over the next couple years, I’m excited to see how the legal sphere and our society at large digests this technology and weaves it into all our existing structures.

Tips and Tutorials

Enrico Schaefer of Traverse Legal is a technology attorney who got into genAI quite early and has been offering insightful commentary over the past year-plus. I’ve been enjoying his perspective, especially since he focuses a lot on practical use cases. He recently created an introductory presentation on genAI for lawyers that includes a lot of good information. I think it would be especially helpful for a lawyer who has a little bit of experience with ChatGPT or similar tools and is looking to deepen their use and find ways to incorporate genAI into their practice. In this video and generally, I find Enrico to be a little bit uncritically bullish on the prospects of genAI in the workplace, so I advise taking it all with a grain of salt. But the presentation also includes a lot of specific suggestions for how to incorporate genAI into your practice.

Finally, as the father of a three-year-old, I was happy to run across this use for ChatGPT on reddit the other day:

I've been using it for other daily life things, such as to produce painting books (sheets) for my 3 yr old. She tells me what she wants (lions in a pool, elephants flying, turtle with shoes, etc.) and I type it into GPT to produce a black and white drawing suitable for a 3-4 yr old to paint.

Probably not a use case for your legal practice, but if you are a parent (or just an adult who likes coloring books), this could be a lot of fun!

Upcoming Speaking Gigs

I’d like to close with a shameless plug for an upcoming event. It won’t be the last time, as I have some other fun speaking engagements in the works!

Eminence M&A Strategies, boxcars.ai, and my firm McGinnis Lochridge are collaborating to offer a Generative AI Workshop, hosted at the McGinnis Lochridge offices in Austin, TX. We’ll be offering the workshop on two dates, May 14 and 21. I’ll be speaking on copyright, data security, and enterprise policy considerations associated with genAI. Tabrez Syed, cofounder of boxcars.ai, will give a talk going into more details on the overall genAI landscape and what companies can do to make effective use of these technologies. There will also be lunch as well as an interactive exercise. You should join us! For more details and to register, visit the event page here: Generative AI Workshop.