InnovAItion in the Canadian Federal Government

Opinion Series

The Government of Canada finds itself at a crossroads in the digital revolution. While steps are being taken to develop the policies and guidelines that support the ongoing transition to #GCDigital, the federal government is rightly working deliberately to ensure that these policies reflect Canadian values and are able to withstand scrutiny on ethical grounds.

Underpinning this transformation is the widespread acknowledgment that the status quo no longer effectively serves the public interest and that the public service is struggling to fulfill its mandate. There is also recognition that public servants cannot respond to the challenges of the day without modern tools. At the same time, we are beginning to grapple with the consequences of the “move fast and break things” ethos as each of the major tech firms deals with ethical scandals in parallel.

The story of 2018 is the Cambridge Analytica scandal and the protracted revelations that followed. But we also learned that Google stored location data even when users told it not to, and that Apple throttled the performance of older iPhones. This was also the year of tech employees speaking out against the entities with which their companies do business. Google employees protested plans to build a secret censored search engine for China and petitioned the CEO to drop out of the Pentagon AI project and make it a policy never to build “warfare technology.” At Microsoft, employees denounced contracts with US Immigration and Customs Enforcement (ICE), and Amazon employees demanded that contracts with law enforcement for facial recognition be cancelled. These stories, and many others, have led to debates over the governance of digital tech, including how and where AI should be deployed within a government context.

In the broadest sense, AI is the ability of a computer to perform tasks commonly thought of as requiring human intelligence. Under this umbrella are a number of techniques that appear to mimic the outputs of human intelligence, ranging from machine learning to deep learning. What these techniques have in common is that they leverage advances in computing power, as well as vast troves of data, to identify complex patterns or relationships that humans would not otherwise recognize.
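
To make that concrete, here is a minimal, illustrative sketch in Python using scikit-learn. The synthetic dataset, model choice, and parameters are all assumptions made for this example; nothing here reflects a system actually used in government.

```python
# A minimal sketch of the pattern-finding described above: a model learns a
# relationship spread across many features that a human reviewer would
# struggle to spot by eye. The data and model choice are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data: 20 features, only 4 of which actually drive the label.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```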

Around the time that Deep Blue secured its symbolic victory against Garry Kasparov,[1] early AI applications were beginning to come to market. Now, just over twenty years later, anyone with a smartphone in their pocket interacts with AI on a daily basis. From search engines to chatbots to virtual assistants, much of the technology that we deploy uses algorithms that are considered “intelligent”.

While the private sector continues to make huge investments in the development of AI tools and techniques, the pervasive nature of technology means that it’s not just being deployed by FAMGA,[2] Silicon Valley, or even the tech industry writ large, but across organizations large and small from nearly every sector. IBM Watson is helping radiologists diagnose cancer while Microsoft Azure is supporting farmers in developing countries by telling them the most effective time to sow their crops.

Private sector adoption of technology – the foundation of 21st-century life – has nearly always exceeded that of government in both pace and scope. Innovative private sector products that meet a market need have shaped the public’s perception of the government’s ability both to deliver on its mandate and to improve the efficiency of its operations. The benefits of AI to government are clear. With demand for public services continually growing and budgets always stretched, there will be a wide array of AI opportunities to help government transform outdated organizational models, rethink core operations, and reshape citizens’ customer experience.

For service delivery, AI can help move away from a transactional approach to one that is more personalized, user-centred, and holistic, while also improving service standards. In the back office, AI can help support, or even assume, mundane decision-making tasks, freeing up humans for more nuanced or strategic work. This will help government enhance productivity, drive innovation in policy and service delivery, and, ultimately, promote better outcomes for Canadians.

While it could be argued that the benefits of AI remain aspirational, AI has demonstrated its worth as a tool to drive efficiency and deliver higher quality services, making it unsurprising that many are calling for the government to “move at a brisk pace.” But this enthusiasm must be tempered by examples of where AI has shone a light on biased data or where the technology has been deployed seemingly without considering the human element or social impact. A brisk pace does not mean moving fast, nor does it suggest breaking anything; when implementing AI, we need to “move slow and mend things.” In this case, the mending could be considered the refinement of data, the new “digital oil,” by learning how to create, curate, and confirm that balanced data is used to train the system that we will use to support evidence-based decision-making.
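
One small piece of that “mending” work can be automated. The Python sketch below, using pandas, flags categories that are badly over- or under-represented in a training set before the data is used; the column name, tolerance, and toy data are all hypothetical.

```python
# A minimal sketch of one "mending" step: flagging badly over- or
# under-represented categories in a training set before it is used.
# The column name, tolerance, and toy data are hypothetical.
import pandas as pd

def check_balance(df: pd.DataFrame, column: str, tolerance: float = 0.2) -> bool:
    """Return True if no category in `column` deviates from an even split
    by more than `tolerance` (as a share of the dataset)."""
    shares = df[column].value_counts(normalize=True)
    expected = 1.0 / len(shares)
    imbalanced = shares[(shares - expected).abs() > tolerance]
    if not imbalanced.empty:
        print(f"Warning: imbalanced categories in '{column}':\n{imbalanced}")
        return False
    return True

# Hypothetical training set for a service-triage model, skewed toward one region.
training_data = pd.DataFrame({"region": ["ON", "ON", "ON", "ON", "QC", "BC"]})
check_balance(training_data, "region")  # prints a warning and returns False
```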

While recent reporting shows that AI has been piloted within the federal public service at least as early as 2014, the Canadian government has wisely opted to develop robust mechanisms of oversight before it goes mainstream across the system. In 2017, the Treasury Board of Canada Secretariat (TBS) set about charting a course for how AI could be responsibly deployed within government. A wide-ranging consultation with experts culminated in a white paper that explores the titular theme of responsible artificial intelligence in the Government of Canada and proposes seven principles that will underpin TBS policy on the use of AI systems in government. A recent post by the Government of Canada’s CIO, Alex Benay, outlines other recent developments, such as partnering with Public Services and Procurement Canada on a flexible procurement tool for AI products. It also previews upcoming policy pieces, such as the Directive on Automated Decision-Making and an Algorithmic Impact Assessment tool, designed to help those looking to deploy AI within the government better understand the solution and to ensure it is implemented ethically, responsibly, and in an open and transparent manner. These principles are also echoed at the global political level through the Charlevoix Common Vision for the Future of Artificial Intelligence, to which Canada is a signatory along with its fellow G7 nations.

So, the long-term vision for AI in the Canadian federal government is clear and aligns with principles that aim to protect the common good. This is, without reservation, a very good thing. However, this deliberate approach has put federal public servants in the position of waiting for policy before experimenting with AI. This risks seeing Canada fall behind in deploying AI within government, an area where, given our in-house talent as well as sizeable investments in recent federal budgets, Canada should be leading. In AI, as in so many other domains, preparation breeds the conditions for success.

But how do you practice AI? One option is a sandbox: an isolated computing environment in which a program or file can be executed without affecting the application in which it runs. A practical sandbox will allow public servants who may, or will, need to work with this technology to get their hands dirty, while significantly reducing the risks of operationalizing bias, disrupting operations, or causing system failures.
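
In code, the core of the idea is simply that experiments only ever touch a disposable copy of the data. The Python sketch below illustrates this under assumed names; the file paths, the stand-in “production” file, and the experiment function are all hypothetical.

```python
# A minimal sketch of the sandbox idea: experiments run against a disposable
# copy of the data in a temporary scratch directory, so production systems
# are never touched. All paths and names here are hypothetical.
import shutil
import tempfile
from pathlib import Path

# Stand-in "production" data so this sketch is self-contained.
PRODUCTION_DATA = Path(tempfile.gettempdir()) / "service_records.csv"
PRODUCTION_DATA.write_text("id,outcome\n1,approved\n2,denied\n")

def run_in_sandbox(experiment) -> None:
    """Copy the data into an isolated scratch space, run the experiment
    there, and discard everything afterwards."""
    with tempfile.TemporaryDirectory(prefix="ai-sandbox-") as scratch:
        working_copy = Path(scratch) / PRODUCTION_DATA.name
        shutil.copy(PRODUCTION_DATA, working_copy)
        experiment(working_copy)  # the experiment only ever sees the copy
    # The scratch directory, and anything written to it, is deleted here.

def prototype_experiment(data_path: Path) -> None:
    print(f"Training a prototype model on {data_path} ...")

run_in_sandbox(prototype_experiment)
```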

Such an environment should not only serve the computer scientists, economists, and statisticians within the public sector, but also the managers who need to think about the socio-technical components, such as privacy, data governance, and how introducing automation will change how and where people are deployed. With the right people at the table, a practical sandbox can help prototype not only the technology, but the governance of that technology within government. This technique also supports the seventh principle in the aforementioned federal AI white paper which states: “AI systems should be deployed in a manner that minimizes negative impact to employees where possible, and should, where feasible, be created alongside the employees that will work with them.”

A practical sandbox would provide government with a controlled environment to test an AI system and ensure that it operates as expected before introducing it into the workflow. This addresses a common pitfall of government adoption of AI outlined by the AI Now Institute in their third annual report:

“…because the underlying models are often proprietary and the systems frequently untested before deployment, many community advocates have raised significant concerns about lack of due process, accountability, community engagement, and auditing.”
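
A sandbox makes exactly that kind of pre-deployment testing and auditing routine. The Python sketch below shows one possible promotion gate: a model advances only if it clears overall-accuracy and group-parity checks on held-out data. The thresholds, group labels, and toy predictions are all hypothetical.

```python
# A minimal sketch of a pre-deployment gate: the model is promoted only if it
# clears overall-accuracy and group-parity checks on held-out data.
# Thresholds, group labels, and the toy predictions are hypothetical.
from collections import defaultdict

def passes_audit(y_true, y_pred, groups,
                 min_accuracy=0.90, max_group_gap=0.05) -> bool:
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # Accuracy computed separately for each group slice.
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    per_group = {g: hits[g] / totals[g] for g in totals}
    gap = max(per_group.values()) - min(per_group.values())
    print(f"accuracy={overall:.2f}, per-group={per_group}, gap={gap:.2f}")
    return overall >= min_accuracy and gap <= max_group_gap

# Toy predictions: reasonable overall, but noticeably worse for group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
assert not passes_audit(y_true, y_pred, groups)  # this model is held back
```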

Finally, a sandbox will also help align skillsets within the federal government with the needs of those implementing and managing an AI-enabled system, and could even help retain employees with those skillsets who may be tempted to leave for jobs where they can work on AI during what may amount to the Cambrian explosion of this technology. It also presents the opportunity to learn from industry as well as post-secondary researchers and students. This would not only help ensure that R&D within the federal government keeps pace, but also train the next generation of talent.

And so, as the federal government moves with appropriate caution as AI is deployed across the system, a practical sandbox can serve as a tool to promote its effective and ethical development, while ensuring that Canadian public servants have access to and familiarity with the modern tools required to deliver on their mandate in the digital age.

[1] 1997

[2] Facebook, Apple, Microsoft, Google, and Amazon

About the author

Matt Jackson

Director

Matt supports the Institute's work on Digital and Public Governance, with an interest in the implications of new technology, such as blockchain and artificial intelligence. He has a strong track record of managing research projects from planning to completion, making evidence-based decisions, and providing actionable policy recommendations. Mr. Jackson is fluent in mixed-methods research methodologies as well as statistics.

Prior to joining the Institute, Mr. Jackson was a Senior Research Analyst with R.A. Malatest & Associates, Ltd., an independent Canadian Program Evaluation firm, where he managed several large-scale projects for federal, provincial, and municipal government clients, as well as private industry.

Mr. Jackson has completed an M.Sc. in Ecology and Evolutionary Biology at the University of Toronto, and also holds a Bachelor of Science degree from Carleton University.

LinkedIn | 613-562-0090 ext. 287