
AI use in Canadian courtrooms carries risk of errors, penalties: lawyers
Global News
In one notable case, a Toronto lawyer is facing a criminal contempt of court proceeding after including cases invented by ChatGPT in her submissions earlier this year.
In the past, if a client who usually preferred to communicate via short emails suddenly sent a lengthy message akin to a legal memo, Ron Shulman would suspect they’d received help from a family member or partner. Now, the Toronto family lawyer asks clients if they’ve used artificial intelligence.
And most of the time, he says, the answer is yes.
Almost every week, his firm receives messages written by or with the help of AI, a shift Shulman says he has noticed over the past several months.
While AI can effectively summarize information or organize notes, some clients seem to be relying on it “as some sort of a super intelligence,” using it to decide how to proceed in their case, he said.
“That forms a significant problem,” since AI isn’t always accurate and often agrees with whoever is using it, Shulman said in a recent interview.
Some people are now also using AI to represent themselves in court without a lawyer, which can delay proceedings and escalate legal costs for others as parties wade through reams of AI-generated materials, he said.
As AI infiltrates more and more aspects of daily life, it is increasingly making its way into the courts and legal system.
Materials created with platforms such as ChatGPT have been submitted in courts, tribunals and boards across Canada and the United States in the last few years, at times landing lawyers or those navigating the justice system on their own in hot water over so-called “hallucinations” – references that are incorrect or simply made up.