B.C. law report explores adapting civil liability for AI-driven harms

by HR Law Canada

A recent report by the British Columbia Law Institute (BCLI) recommends adapting tort law to address the legal complexities created by artificial intelligence (AI) systems that operate autonomously and may cause harm to persons and property.

The report, published in April 2024 and made available recently on CanLII, outlines the challenges of applying existing legal frameworks to AI-related incidents and proposes measures to ensure justice in such cases.

The Report on Artificial Intelligence and Civil Liability, prepared by an interdisciplinary committee, delves into the potential risks associated with AI systems, particularly those with machine learning capabilities, which operate with limited human oversight. The report highlights that when AI outputs result in harm, current tort principles, primarily developed for human actions, may prove insufficient. “The involvement of artificial intelligence complicates the application of tort principles to determine who is legally responsible, when, and for what,” the report notes.

One of the central questions addressed in the report is: Who is liable when harm results from decisions made by intelligent machines? The BCLI committee, which includes experts in law, computer science, and engineering, suggests that while AI technology brings significant benefits, it also introduces risks, including biases and unpredictable behaviours.

The report draws attention to the emerging issue of “algorithmic discrimination,” where AI systems may replicate and amplify biases hidden in the data they process, potentially leading to discriminatory outcomes. In such cases, the BCLI recommends adapting legal principles to provide remedies, either through legislative reform or judicial decisions. The committee advocates for a civil remedy to address algorithmic discrimination, a growing concern as AI continues to be integrated into sectors such as hiring, healthcare, and law enforcement.
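To make the mechanism concrete, the short sketch below (not from the BCLI report) shows how a toy model trained on historically skewed hiring records simply replays the skew hidden in its data. The dataset, the group labels, and the "four-fifths" screening threshold are illustrative assumptions, not anything the report prescribes.

```python
# Illustrative only: a toy "model" trained on historically biased hiring
# records reproduces that bias. The data, names, and the 4/5ths threshold
# are assumptions for this sketch, not drawn from the BCLI report.

from collections import defaultdict

# Synthetic historical records: (group, qualified, hired).
# Group "B" applicants were hired less often at equal qualification.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 50 + [("B", True, False)] * 50
)

# "Training": memorize the historical hire rate per (group, qualified).
counts = defaultdict(lambda: [0, 0])  # key -> [hires, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def predicted_hire_rate(group: str, qualified: bool) -> float:
    hires, total = counts[(group, qualified)]
    return hires / total if total else 0.0

# The model replicates the bias hidden in its training data.
rate_a = predicted_hire_rate("A", True)  # 0.80
rate_b = predicted_hire_rate("B", True)  # 0.50

ratio = rate_b / rate_a
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths" screening heuristic
    print("Disparate impact flagged: the model replicates historical bias.")
```

Under the fault-based approach the report favours, the legal question in a case like this would be whether a reasonable developer or deployer should have tested for and mitigated such a disparity before relying on the system.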

The committee further recommends maintaining a fault-based regime, where civil liability arises from a failure to meet a reasonable standard of care. It rejects the notion of strict liability, which would hold AI developers and operators accountable for harm irrespective of fault. According to the report, strict liability would reduce the incentive for continual improvements in AI system design and oversight.

The report also discusses the difficulty plaintiffs may face in proving fault and causation in AI-related cases. Given the opacity and complexity of many AI systems, the report suggests courts may need to adjust evidentiary standards. It recommends allowing courts to infer a causal link between harm and a failure to exercise reasonable care in developing or using AI systems when plaintiffs cannot provide specific evidence due to the inherent complexity of the technology.

The BCLI’s recommendations are intended to guide both legislators and courts as they encounter AI-related cases, helping to bridge the gap between traditional legal frameworks and the evolving technological landscape. The report acknowledges that Canada’s regulatory environment for AI is still in its infancy, and argues that these changes are crucial as AI becomes increasingly prevalent in everyday life.

The Report on Artificial Intelligence and Civil Liability was prepared with financial support from the British Columbia Ministry of Attorney General and reflects more than two years of deliberations by the committee. It is framed as a forward-looking effort to integrate AI technology responsibly into society, balancing innovation with accountability.

See the full report on CanLII at https://www.canlii.org/en/commentary/doc/2024CanLIIDocs2510
