Canada faces an artificial intelligence governance challenge. AI – the ability of machines to perform intelligent tasks such as sorting, analyzing, predicting and learning – promises substantial benefits for businesses and other organizations. These include improving operations, enhancing productivity, and generating health, social and economic benefits for all.

But some AI applications pose risks. AI-enabled automation threatens to disrupt employment; predictive analytics in finance, education, policing and other sectors can reinforce racial, gender and class biases; and data used in AI can be collected in ways that violate privacy.

At the heart of our AI governance challenge is the need to strike a balance between supporting the development and diffusion of AI technologies that promise social, economic and other benefits, and ensuring that the risks to the rights and well-being of Canadians are minimized.

Some favour a laissez-faire approach to governance, which would place few limits on AI research and applications so that discovery and the associated benefits are accelerated. Others favour a precautionary approach, which would limit AI development until more is known about the risks and how they can be managed. Both approaches entail costs.

Adding to the challenge is that AI is an emerging technology with a wide range of possible applications. The nature and extent of its benefits and risks are highly uncertain. This means decisions about how to balance innovation against risk must be made with imperfect and incomplete information. How should AI governance proceed?

The federal government’s Pan-Canadian Artificial Intelligence Strategy has little to say about AI ethics and governance, and there isn’t much evidence that federal and provincial agencies are developing comprehensive approaches to identifying and managing the ethical, social and political risks of AI. The pan-Canadian strategy does call for “thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence” and supports academic researchers to explore these issues. But the “expected results” of the strategy largely ignore ethical, social and political implications.

When asked how AI will be regulated and governed, Innovation, Science and Economic Development Canada (ISED) says only that AI development and use must be consistent with the existing “marketplace framework,” the Canadian Charter of Rights and Freedoms and the Personal Information Protection and Electronic Documents Act. The Treasury Board Secretariat is leading consultations on responsible use of AI within the public sector, Global Affairs Canada coordinated a multi-university student symposium on AI and human rights issues, and some analysts within the federal government are working on approaches to dealing with algorithmic bias and impact assessment. Additionally, ISED has launched National Digital and Data Consultations, which should address some data collection and use issues.

Still, Canada is falling behind countries such as France, Sweden and the UK, which are further along in thinking about and addressing the innovation and ethical dimensions of AI governance. What can Canada do to ensure that our approach to AI governance supports innovation and addresses the full range of risks that Canadians might face?

Some indication that Canada will pay more attention to AI ethics and governance emerged during a December 2018 meeting of G7 nations to discuss the impacts of artificial intelligence. Canada and France announced that they are seeking to create an International Panel on Artificial Intelligence, whose mission is to “support and guide the responsible adoption of AI that is human-centric and grounded in human rights, inclusion, diversity, innovation and economic growth.” The panel aims to engage stakeholders in science, industry, civil society, governments and international organizations on issues such as data collection and privacy; trust in AI; the future of work; responsible AI and human rights; and equity, responsibility and public good. While this is an important sign that AI ethics and governance are on the Canadian agenda, it is not clear what tangible effect the panel’s work will have on AI governance in Canada.

As with other new and emerging technologies, a risk management approach is preferable to laissez-faire and precautionary approaches. Applied to AI governance, a risk management approach would involve a set of key principles, policies and institutional arrangements.

To manage the tension between supporting innovation and addressing risks, Canada’s approach to AI governance should focus on specific AI applications, not AI research in general. AI risks manifest in concrete uses in particular sectors and activities, such as health diagnosis, loan assessment, predictive policing or benefits eligibility assessment. Risk assessment and management should focus on what is appropriate in those contexts, while leaving the theoretical development of AI largely unhindered.

As guidance for a risk management approach, Canada’s federal and provincial governments should develop a declaration on the responsible development and use of AI. Such a declaration should signal to private and public sector actors the importance of prioritizing fairness, safety, security, health and other values. Canada’s governments should also provide explicit guidance and funding to researchers in academia, non-profit organizations, industry and relevant government departments to explore and manage the ethical, economic, legal and social dimensions of AI that are discounted in the current innovation-focused pan-Canadian strategy. This would bring Canada more in line with other countries’ practices.

Algorithm impact assessments should be required before AI technologies are used in sensitive areas such as health care, education, public safety and government benefits delivery. These would be similar to health technology assessments and environmental impact assessments, but would focus on AI’s risks and benefits for individuals and communities, as well as on how these risks and benefits are distributed by age, gender, race, class and other demographic characteristics. Canada’s governments should together establish an algorithm impact assessment agency to examine sensitive AI applications that could affect the rights, interests and well-being of Canadians, as well as a dedicated artificial intelligence risk governance council to advise government and industry on AI innovation and risk management. Both the agency and the council should be composed of technical, legal and ethical experts.

Given the potential for AI-based systems to negatively affect individuals’ health, financial, legal and other important rights and interests, Canada’s governments should consider establishing a “right to an explanation” in key areas, guided by the European Union’s General Data Protection Regulation. Those who use AI-based systems to make decisions should be ready to explain how and why their AI systems produced the decisions they did. At a minimum, organizations in the private and public sectors using AI-based systems for decision making should be alerted that they will be held accountable for outcomes.

Canada has an opportunity to be a global leader in AI research and innovation, and in effective AI governance. But while generating health, economic and social benefits from AI is already a priority among Canada’s governments, managing the potential health, legal, economic and ethical risks of AI applications has taken a back seat. Experience with other emerging technologies should have taught us that prudent risk management is a precondition for identifying and minimizing harms, as well as for generating sufficient public confidence to allow innovation to proceed. Time will tell whether those lessons will be applied to AI governance, or whether we face a future of unregulated AI risk and stalled AI innovation.

This article is adapted from Canada Next: 12 Ways to Get Ahead of Disruption, a Public Policy Forum series of 12 reports on disruptive challenges and opportunities facing Canada.

This article is part of the Nimble Policy-Making for a Canada in Flux special feature.

Photo: Shutterstock, by MNBB Studio



Daniel Munro
Daniel Munro is senior fellow in the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and co-director of Shift Insights. Twitter @dk_munro, @munkschool and @InnovationPoli1

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.