In this item, we look at two new AI / legal services considerations:

(i) Is an AI algorithm practicing law when it provides a self-represented party with legal advice and arguments?

Nippon Life Insurance Company of America thinks so, as it explains in a lawsuit recently filed in the US District Court for the Northern District of Illinois against OpenAI Foundation and OpenAI Group PBC (see court file 1:26-cv-0448).

Nippon Life alleges that the information provided by OpenAI’s artificial intelligence platform – ChatGPT – led a claimant, whose disability benefit claim was settled on the advice of counsel, to subsequently dismiss her counsel, breach the settlement agreement and file motions that serve no legitimate legal or procedural purpose. The action states that the claimant filed 21 motions with the goal of reopening her case, constituting an abuse of process that was aided and abetted by ChatGPT.

The underlying allegation is that ChatGPT, by drafting arguments and court documents (and one has to ask whether any of those included hallucinated decisions), engaged in the unauthorized practice of law and interfered with a settled long-term disability claim. The claims include tortious interference with contractual relations, abuse of process and violations of Illinois’ unauthorized practice of law statute.

(ii) If a litigant communicates about their legal matter with a public / open, generative AI model, are those communications subject to privilege?

In United States v. Heppner, Case No. 1:25-cr-00503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), the Court held that Mr. Heppner’s AI queries and the responses he received were not protected by either solicitor / client or work product privilege. Apparently, Mr. Heppner did what we suspect many clients do these days – he inputted case strategy material he received from his lawyer into an AI platform. The material was found during the execution of a search warrant, and though his lawyers argued that it should be protected, the Court concluded otherwise.

Of course, many questions arise from this. For example, the system Heppner used – Claude – has a privacy policy stating that the parent company has the right to disclose a user’s data to third parties. So one question is whether the result would have been the same had the person used a private or closed AI system that maintains confidentiality. But the fact is, most users – including lawyers – are not using closed systems. They use the free stuff.

In considering Heppner and the rapidly evolving world of AI and its use, it is fair to say that we, as lawyers, have to think about these issues. Given the propensity of clients to do their own research, is it time to consider updating retainer agreements to address the potential risks of using open systems in this manner, and to review the privacy policies of any systems used? And, as one commentator noted, should we advise clients – individual and corporate – to disable AI notetakers during confidential meetings?

Clearly, AI is changing the practice of law. The challenge is that these changes are happening fast, while law and lawyers evolve slowly.