By Joanna Redden, Western University and Fenwick McKelvey, Concordia University
There is global consensus among civil society, academia and industry that artificial intelligence adoption comes with risks and harms. Yet addressing these concerns has been marginal in Canada’s national AI strategy. The federal government’s major response — the Artificial Intelligence and Data Act (AIDA) — is flawed and does not address AI’s current and tangible impacts on our society.
Our research demonstrates key gaps in Canada’s approaches to AI governance. The first is that AIDA as presently drafted does not address government use of AI, despite its widespread deployment across the public sector.
The Canadian Tracking Automated Governance (TAG) register lists 303 applications of AI within government agencies in Canada. Because AIDA as presently drafted will not apply to government use, the legislation is out of step with AI governance in other leading AI nations and with the expressed interests of government employees.
That we know so little about how the Canadian government uses AI is just one shortcoming identified in a second report being released today. Our team has also identified key gaps spanning the last decade of AI governance in Canada. The Shaping AI Project comprises research teams from Germany, the United Kingdom, Canada and France. Our report on AI in Canada documents a lack of critical discussion of AI and its risks by all levels of government, alongside a failure to conduct public consultations.
Need for transparency
AIDA is Canada’s first focused attempt at regulating AI. The act has been tacked onto the end of Bill C-27, and is currently being reviewed by the Standing Committee on Industry and Technology. It has been widely criticized for not providing the protections Canadians need.
Even as Parliament debates AIDA, the government is accelerating AI adoption.
On April 7, the prime minister announced plans to spend $2.4 billion to increase AI adoption and use in Canada. Surprisingly, only four per cent of the announced funding is devoted to AI’s social impacts. These include vague existential risks, helping workers who might lose their jobs and a paltry amount for a forthcoming AI and data commissioner.
Making government and business uses of AI more transparent, and engaging in meaningful consultation to strengthen oversight and accountability, would demonstrate genuine interest on the part of government in taking public concerns seriously.
Our research shows a gap between the hopes and realities of AI that AIDA must address.
AI registries
We developed the Canadian TAG Register in collaboration with the U.K.-based Public Law Project.
Numerous organizations and governmental review bodies, including Canada’s Chief Information Officer Strategy Council, have been calling for public registries of AI and automated decision-making systems.
AI registries are already produced by a number of cities including Amsterdam, Helsinki, New York and Nantes, France.
Our Canadian TAG register is a start, but it is limited by the lack of publicly available information about where and how AI and automated systems are being used.
Documenting impacts
The argument for registries is based on the idea that in order to develop effective oversight, policymakers and the public need to be able to see how government agencies and businesses are already making use of AI.
Maintaining this registry — or a similar one — should be delegated to an independent and resourced public authority. This would make it easier for there to be more widespread and meaningful debate about if, where and how AI should be used and the kind of oversight we need.
There is an extensive body of research documenting the ways government and corporate uses of AI and automated systems have already led to harm. Previous research has also documented the strain placed on individuals, communities and review bodies to stop the use of harmful AI practices once in place.
The aims of the Canadian TAG register are to:
- advance discussion about the need for resourced, maintained and public registries of government and business uses of AI and automated decision systems (ADS);
- enable more widespread discussion about if, where and how AI and ADS should be used;
- stimulate more research and debate about the kinds of systems in use and their impacts;
- demonstrate the very limited information presently available about systems piloted or in use.
Maintaining and archiving these registries would require that government agencies dedicate sufficient resources to record and communicate clearly about the systems. Government agencies would also need to make procurement details and company processes more transparent, explain intentions and uses of AI and automated decision systems, and respond to citizens’ requests for information.
Advocates propose registers should include results of audits, details of datasets and variables being used, and how the system is intended to be used.
AI governance in Canada
The federal government introduced its Directive on Automated Decision-Making in 2019. This was supposed to make government uses of AI and algorithmic systems more transparent through mandated impact assessments. At the time of writing, only 18 of these have been published.
The need for a registry is just one finding from our research. Our report documents notable silences that AIDA has not addressed surrounding Indigenous rights and data sovereignty, as well as an absence of input from the creative and cultural sectors and a lack of attention to environmental impacts.
Government policies, instead, have narrowly focused on AI as economic and industrial policy. Consultations have been largely theatrical, letting AI adoption continue despite deep concerns from the public, especially over facial recognition technologies.
Canadians’ trust suffers as a result. Canadians have one of the lowest levels of trust in AI, even though Canada had one of the first national AI strategies.
Even the government’s own procurement policies have largely sidelined effective consideration of AI’s social impacts. Instead, AI is seen as a remedy within the service-oriented reform — or deliverology — agenda of Canada’s public sector, yet these changes have been made with little public consultation.
AI has profound social implications, despite being largely presented as an economic opportunity.
Withdrawal of AIDA
Our research reinforces critiques of Canada’s latest effort to regulate AI, pointing to two significant problems:
1) AIDA will not apply to public sector uses of AI, despite the widespread use of AI and automated systems. This runs counter to the expressed concerns of public sector workers. The Canadian Union of Public Employees, the Professional Institute of the Public Service of Canada and the Canadian Labour Congress have called for AIDA to apply to government departments, agencies and Crown corporations.
2) AIDA was rushed and there has been no meaningful consultation with the public.
Given these limitations, AIDA is already out of step with the needs of Canadians. Canadian legislation also falls short of the regulatory approaches taken by other nations.
Examples include the European Union’s recent AI Act and the White House Executive Order and Guidance, which apply to AI uses by government institutions.
Canada remains behind the curve. The prime minister’s recent spending announcements will not address the problems and challenges of regulating AI. AIDA should be split from the rest of Bill C-27, and sent back for the public consultations and redrafting it so clearly requires.
Joanna Redden, Associate Professor, Faculty of Information and Media Studies, Western University and Fenwick McKelvey, Associate Professor in Information and Communication Technology Policy, Concordia University
This article is republished from The Conversation under a Creative Commons license. Read the original article.