A recent study by the University of Toronto’s Citizen Lab paints a rather dismal picture of a government where artificial intelligence (AI) runs amok. It specifically accuses Immigration, Refugees and Citizenship Canada (IRCC) of opening a Pandora’s box by using AI well before the technology has reached any sort of maturity. As the narrative goes, careless use of AI in the selection of new immigrants could well violate human rights. This is because, in some cases, AI-backed software has used data to reach conclusions that are flawed, if not outright discriminatory, as perhaps best highlighted in a 2016 ProPublica investigation into discriminatory AI-assisted sentencing in the United States.
The Citizen Lab report raises some important issues, but it also needlessly contributes to alarmism about AI, emphasizing the potential for risk in circumstances that do not exist, though they could in the future. Certainly, there are risks that come with the use of AI, but this is not new information. And despite the insinuation that AI can be discriminatory, AI itself lacks any capacity for bigotry; the cases where this critique is levied are marked by flawed input data that channels human discrimination. Regardless, this problem has sat at the top of the discourse surrounding AI for several years now (the authors themselves note that IRCC has worked on AI since as early as 2013), and tackling “discriminatory AI” has in fact been a loud and proud priority of the Government of Canada for some time.
Beyond these oversights, one could be excused for not noticing a few key details that are buried in the study, glossed over, or not mentioned at all. For one, AI is not actually used to select immigrants to Canada, nor has it been rolled out across IRCC’s operations. There is just one small pilot program, an experiment if you will, being tested at IRCC, and it involves only a small sliver of the entry visas issued by Canada each year. The pilot has been subject to years of intense oversight, with every AI decision compared against those made by humans, and it has found that the AI operates with much better efficiency and (most importantly) better accuracy than its human counterparts. Today, the program continues to operate on a very small pool of entry visa applications. Every AI-backed visa rejection is submitted for secondary review by a human, and a randomly selected proportion of all affirmative decisions is also reviewed by humans.
This would strike some as a reasonable precaution, while others may seek more. Indeed, the federal government has been working feverishly to prepare for the onslaught of challenges that will stem from the inevitable adoption of AI. Not least among these efforts are the Treasury Board Secretariat’s publications on the Responsible Use of Artificial Intelligence in Government and its Directive on the Use of Automated Decision-making in Government (currently in draft, to be formally released in 2018). There are also teams of dedicated experts who work exclusively on issues of AI policy, and it is within the Treasury Board’s mandate to oversee and govern departmental activities, including the use of AI and other disruptive technologies. This is clearly top of mind for the Trudeau government as well, which recently made Digital Government a cabinet portfolio held by the President of the Treasury Board, Scott Brison. Canada’s civil servants are recognized for global leadership in this space: international rankings have placed Canada as having the third most prepared government in the world to deal with the use of artificial intelligence.
The Citizen Lab argues for more oversight mechanisms and policies, and we agree that getting the governance right will be crucial to continued progress. However, it is always possible to suggest longer consultation periods, more cooks in the kitchen, and less action from government. That attitude runs against the values of the times. The government and the public at large are demanding that the civil service experiment, take measured risks, and, yes, occasionally fail. This is what IRCC has done with its pilot project for artificial intelligence in entry visa processing, and frankly, this is the kind of proactive action that is desperately needed throughout the Canadian public service. It is hard to imagine citizens clamoring for a public service that is more risk averse, less creative, and confined to the technologies of decades past. Yet by stifling all attempts at ingenuity and innovation in public administration, the result could be just that.