In the July 2025 edition of LIANSwers, we referenced a decision from Alabama that sanctioned several lawyers for filing a factum created by generative AI that contained hallucinated decisions and incorrect citations. Readers will recall that the Court was not kind to counsel.
We are now aware of two decisions closer to home in which courts have sanctioned lawyers for similar reasons.
In the first, Reddy v Saroya, 2025 ABCA 322, the Alberta Court of Appeal was confronted with a factum, prepared by appellant's counsel, that contained references to cases that did not exist. In his defence, counsel explained that he was ill, very busy, and that the factum was due over the holiday season, all of which contributed to his failure to properly review the factum and recognize the problems with several cited cases. He also advised that he had retained a contractor to draft the factum and that the contractor had assured him a large language model was not used. Only when the issues came to light did counsel realize that those assurances might not have been true.
In discussing this issue, the Court stated:
[80] Rule 3.1-2 of the Law Society of Alberta’s Code of Conduct requires lawyers to perform all legal services to the standard of a competent lawyer, and the relevant commentary explains that lawyers should “develop an understanding of, and ability to use, technology relevant to the nature and area of a lawyer’s practice and responsibilities”. The Law Society of Alberta has also published “The Generative AI Playbook”,[1] a resource described as a “starting point for Alberta lawyers seeking to harness the benefits of disruptive technologies like [Generative AI] while safeguarding their clients’ interests and maintaining their professional competence”. The Law Society has further emphasized that lawyers who use large language models or Generative AI must understand their potential benefits and risks. When used without safeguards, large language models frequently introduce confusion and delay into proceedings, and worse, constitute an abuse of process that may potentially bring the administration of justice into disrepute.
[81] The Alberta Courts issued a Notice to the Public and Legal Profession dated October 6, 2023, titled Ensuring the Integrity of Court Submissions When Using Large Language Models,[2] to reinforce the integrity and credibility of legal proceedings. Importantly, the October 2023 Notice applies to lawyers and self-represented litigants alike.
[82] The October 2023 Notice urges those using large language models to exercise caution. Parties are advised to rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII when referring to cases, statutes or commentary in representations to the courts. In addition, the October 2023 Notice clearly requires the involvement of a “human in the loop”, stating “[i]n the interest of maintaining the highest standards of accuracy and authenticity, any AI-generated submissions must be verified with meaningful human control”. Parties are asked to cross-reference work prepared by a large language model with reliable legal databases, ensuring that citations and content hold up to scrutiny.
[83] The time needed to verify and cross-reference cited case authorities generated by a large language model must be planned for as part of a lawyer’s practice management responsibilities, especially during busy times and recognizing that exigencies may arise. Further, if a lawyer engages another individual to write and prepare material to be filed with the court, the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents, as well as ensuring compliance with the October 2023 Notice.
[84] The consequence of failing to adhere to the October 2023 Notice is within the discretion of the panel or the individual judge involved with the matter. However, counsel and self-represented litigants should not expect leniency where they have failed to adhere to clear and unambiguous requirements. In most situations, courts will likely consider remedies available under the Rules, including striking submissions or imposing some form of cost award against the party or counsel who failed to follow the requirements of the October 2023 Notice. A court may also determine that a penalty should be imposed, contempt proceedings should be initiated, or that a referral to the Law Society of Alberta is warranted. Maintaining the integrity and credibility of court processes justifies the imposition of proportionate and meaningful sanctions.
The Alberta rule and commentary cited in paragraph 80 are also found in the Nova Scotia Barristers' Society Code of Professional Conduct. In addition, the Nova Scotia Supreme Court has issued its own notice on the use of AI.
The second decision is Hussein v. Canada (Immigration, Refugees and Citizenship), 2025 FC 1060, in which the Court stated:
[38] Applicants’ counsel provided further correspondence advising, for the first time, of his reliance on Visto.ai described as a professional legal research platform designed specifically for Canadian immigration and refugee law practitioners. He also indicated that he did not independently verify the citations as they were understood to reflect well established and widely accepted principles of law. In other words, the undeclared and unverified artificial intelligence had no impact, and the substantive legal argument was unaffected and supported by other cases.
[39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.
[40] In fact, the two case hallucinations were not the full extent of the failure of the artificial intelligence product used. It also hallucinated the proper test for the admission on judicial review of evidence not before the decision-maker and cited, as authority, a case which had no bearing on the issue at all. To be clear, this was not a situation of a stray case with a variation of the established test but, rather, an approach similar to the test for new evidence on appeal. As noted above, the case relied upon in support of the wrong test (Cepeda-Gutierrez) has nothing to do with the issue. I note in passing that the case comprises 29 paragraphs and would take only a few minutes to review.
[41] In addition, counsel’s reliance on artificial intelligence was not revealed until after the issuance of four Directions. I find that this amounts to an attempt to mislead the Court and to conceal the reliance by describing the hallucinated authorities as “mis-cited”. Had the initial request for a Book of Authorities resulted in the explanation in the last letter, I may have been more sympathetic. As matters stand, I am concerned that counsel does not recognize the seriousness of the issue.
As we have said in previous writings and comments, feel free to use these models for research and drafting purposes. But if you do, check every citation and every cited source and, of course, exercise caution. If problems arise, responsibility will rest with the person who submitted the material, or with counsel personally, which is where some Courts are headed.