In this and prior editions of LIANSwers, and in our more recent annual reports, we have discussed the increasing use of AI to prepare court submissions and the associated risks, in particular the resulting rise in citations to AI-generated cases that do not exist and in non-existent quotations attributed to actual decisions.
An article in the March 19, 2026 edition of the Federation of Law Societies of Canada Media Monitor Newsletter takes the conversation further by asking how often AI-generated cases that do not exist are showing up in Canadian court filings. See OPINION: Fictitious case law a systemic problem in Canadian courts: 111 and counting.
For the period from January 1, 2024 through March 10, 2026, the authors found 211 instances of fake cases in 111 different decisions spread among 42 different Canadian courts. This compares to only eight such instances the authors found before 2024.
Of the 111 decisions, the Court concluded in 82 that AI was the cause; in the other 29, the source of the fake case was not conclusively determined. That uncertainty leads to this ominous line in the article: "The data, in other words, reflects the floor. The ceiling is unknown."
The article ends with this question: "how do you build verification into a system that never anticipated fake cases being submitted as real law?"
Though lawyers and Courts should always be able to rely on counsel properly citing the decisions being relied on, there has always been a risk that a case has been misquoted, misstated or misapplied. The use of AI simply adds another layer. So, what is the response? To us it is what it always has been: check citations, read quoted passages, and read enough of the decision to ensure it is correct and is being used in proper context. Do this for both your filings and your opponents'.