January 16, 2020
On December 2nd, Toby Fyfe travelled to Quebec City to deliver the keynote address to the SAS conference Bonne gouvernance, donnée et intelligence artificielle. This is the second part of his speech, adapted from the French.
When we think about the challenges technological change poses to governance, nothing may be emerging as quickly or as significantly as artificial intelligence (AI), and for good reason. The potential for AI in government is vast, and its impact will be equally so. It is already used in a variety of contexts, and that use will only grow. And while some of the discussion around AI is more hype than substance, there are real issues that public service policymakers need to think about today as they prepare for the future.
The fact is, this is not something that can be allowed to be self-regulated by the private sector; the potential impact on citizens is too great. We made this mistake in the past by allowing the private sector to define how it would use information and data, and we have felt the impacts of that decision on issues such as privacy.
So the question is not if AI is to be governed, but how. We need to strike the right balance between the public interest and innovation. This is not a theoretical exercise. Artificial intelligence will become an increasingly important part of the work of public servants, so now is the time to think through these complex issues.
In recent years, governments and regulators have begun to take this issue seriously and to establish governance and standards for the use of AI in the public sector. Canada has generally been considered a leader in the development of systems governing the ethical use of artificial intelligence and there are some examples worth citing:
● A new directive on automated decision-making within the Federal Government will come into effect on April 1, 2020. It will be accompanied by a tool for evaluating the impact of algorithmic decisions. This directive will help federal departments better understand the potential dangers of using AI when providing services to citizens;
● The CIO Strategy Council, which brings together technology leaders from the private and public sectors, recently released a National Standard for Automated Decision Systems;
● Last year, a coalition of researchers and academics published the Montreal Declaration for the Responsible Development of Artificial intelligence; and
● Immigration, Refugees and Citizenship Canada, to support the initiatives it has put in place in recent years, has created a “Policy Guide” on automated decision-making. The guide sets out guiding principles and advice on responsible design, data management and governance, privacy, procedural fairness, transparency and accountability, helping people ask the right questions at the right time.
These are good first steps. But more work needs to be done to refine our approach to artificial intelligence governance. The evolution of our governance approach will require us to answer deceptively simple questions regarding the use of AI in government, questions such as:
● Who ultimately decides?
● Who will be consulted?
● What is the process for reaching a decision?
● Who will remain responsible?
These questions are difficult enough in any discussion of governance among humans. They become even more complex in cases involving artificial intelligence. Why? Because for the first time in the history of humanity, we will have to adopt laws for robots, not just for humans.
It is not far-fetched to think that the next generation of civil servants will work, to varying degrees, with systems that are largely autonomous. This means we will need laws and policies to govern the actions of these systems, just as we do for the humans of today’s public service. Of course, it is not the robots themselves that we will govern, but rather those who conceive and create them: we will have to govern what these systems do rather than how they work.
The complexity of this task could lead us to focus on the risks and lose sight of the real benefits. If we did that, we would repeat the same mistakes we made with previous technologies, which would cause even more cynicism on the part of citizens.
As an example of a challenge that can also be an opportunity, consider the “black box” at the heart of how decisions are made. Indeed, one of the major concerns about the use of AI is this “black box” that we do not understand. We fear that we will not be able to reverse the decisions being made, and that bias could be introduced into the system without being detected. An artificial intelligence system that uses biased data can produce biased decisions. When you consider that we will use these systems for decisions in hiring, immigration, health care or criminal justice, this impact is not insignificant.
We will have to anchor ourselves in the reality of today as we plan for tomorrow. The fact is that every official has their own intelligence system, their own “black box,” between their ears. Yes, they can tell you why they made a decision, but we understand enough about human behavior to know that everyone has unconscious biases that may surface in their work. That is why we have set up systems of review and group decision-making that appeal to good judgment. In the same way, we must treat a decision made by artificial intelligence as one element of a decision, not as a final decision without appeal. Let’s admit, however, that it is easier and more convenient to hide behind the “system’s” decision than to own and explain our decision to an angry citizen. The problem of the “black box” is therefore essentially a human problem, not a technological one.
On the other hand, we can recognize that well-made artificial intelligence can also help eliminate some of the very real prejudices that exist in our current system because of human weaknesses. A robot does not get tired, is not sick, is not hungry. It doesn’t matter if it’s Monday or Friday. It does not have good days or bad days. And its greatest advantage, which is, at the same time, its greatest risk, is that everything it does is large-scale, done in the blink of an eye.
If the rules for AI are well-defined, the benefits are considerable. These technologies have the potential to provide citizens with timely services and reliable, unbiased decision-making. If we get it wrong, the damage to our public institutions, already facing a crisis of confidence, could be enormous.
Immigration, Refugees and Citizenship Canada, which already uses AI in concrete ways, offers guiding principles that are worth considering.
There is no simple answer that solves all of our AI issues or questions. We are in new territory and we will all need to explore the possibilities and limits. We must create opportunities for conversation and to explore the use of these powerful new tools. And we must do so responsibly, while protecting public trust.
Dr. George Land invented the first computer-interactive approaches to group innovation, decision-making, and strategic thinking, and formulated Transformation Theory, a theory of natural processes that integrates principles of creativity, growth, and change. He believed that, when faced with a paradigm shift, there are three possible reactions:
● Those who do not respond to the paradigm shift disappear.
● Those who adapt survive but eventually decline.
● The real winners are those who succeed in reinventing themselves and capitalize on the new paradigm.
Which strategy will you choose for your organization?