One of the leading Canadian decisions on the use of generative AI for legal research is Zhang v. Chen, 2024 BCSC 285, which has become a common citation in matters involving AI-generated hallucinated decisions. The Court stated:

[25]      Ms. Zhang seeks an order of special costs personally against Ms. Ke.  This is premised on the submission of “the extraordinary situation for the costs incurred by MacLean Law to try to find and then expose the AI ‘hallucination’ and Ms. Ke’s delay in addressing the true nature of the cases”…

[29]      Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court.  Unchecked, it can lead to a miscarriage of justice.

[32]      In my view, the circumstances do not justify the imposition of a special costs award against Ms. Ke which include the significant negative publicity to which she has been subjected.  I accept her evidence that she was naïve about the risks of using ChatGPT and that she took steps to have the error corrected.  Though her legal education is extensive, there is a significant difference between academics and lawyering.  I do not find that she had the intention to deceive or misdirect.  I accept the sincerity of Ms. Ke’s apology to counsel and the court.  Her regret was clearly evident during her appearance and oral submissions in court.

[33]      It is unfortunate that Ms. Ke was not aware of the various notices from the Law Society regarding the risks of generative AI.

[35]      The Law Society issued further guidance to the profession in November 2023 on the use of generative AI tools, which affirmed that lawyers are responsible for work products generated using “technology-based solutions” and urged lawyers to “review the content carefully and ensure its accuracy.”

[37]      It is also unfortunate the fact the cases were fake was not conveyed to opposing counsel when Ms. Ke first discovered the true nature of the cases.

[39]      While I have dismissed the request for special costs, I recognize that as a result of Ms. Ke’s insertion of the fake cases and the delay in remedying the confusion they created, opposing counsel has had to take various steps they would not otherwise have had to take.

[43]      Additional effort and expense were incurred because of Ms. Ke’s insertion of the fake cases.  This additional effort and expense is to be borne personally by Ms. Ke.  As she herself notes, this was a “serious mistake” attributable solely to her own conduct…

[46]      As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.  Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less. 

Breaking this down, the key points are these:

  • Citing fake cases in court filings, regardless of their source, is an abuse of process and is tantamount to making a false statement to the Court;
  • Lawyers may be subject to cost awards against them personally if they use hallucinated materials;
  • A lawyer arguing (1) that they were naïve about the risks of using AI, or (2) that they did not take note of warnings from law societies or the Courts on the use of AI, is unlikely to find those arguments persuasive any longer, as we and others have warned the profession of these risks for some time;
  • If you discover a hallucinated case in your materials you should disclose it to the opposite party and the Court forthwith; and
  • As Justice Masuhara says in Zhang, and as we have said before, generative AI is still no substitute for the professional expertise of lawyers.

But all of this is directed to lawyers. What about self-represented litigants who use AI and file hallucinated materials? Any person who files fake materials in a proceeding is committing an abuse of process. But a lay person is not subject to the same rules as a lawyer. A lay person cannot be disciplined by a law society. And a cost award against that person will likely present collection challenges.

This means that if you are dealing with a self-represented litigant, you should take the time needed to ensure that the materials they file are real. You can start by asking whether they used AI in their research, but you should check the materials regardless of the answer; at least you will then be starting from their representation.

As for lawyers, always read what you file. Before you file it.