Part one of a two-part series.
by Toby Fyfe, IOG President
On December 2nd, Toby Fyfe travelled to Quebec City to deliver the keynote address to the SAS conference Bonne gouvernance, donnée et intelligence artificielle. This is the first part of his speech, adapted into English. Part two will appear in January.
At a time when smartphones, smart homes, smart cities and smart wearables are becoming ever more ubiquitous, what will the impact of artificial intelligence (AI) be on government, and how will we ensure its application is well governed?
If the answers are not yet clear, one thing is certain: the field of artificial intelligence, and more generally the development and use of cognitive machines, is progressing at an unprecedented rate. Governments already apply it in a wide range of fields, from health care to transportation, and even to the justice system.
The technological transformation brought about by artificial cognitive machines is already beginning to have considerable, and sometimes unintended, consequences on a scale and at a pace that exceed previous industrial revolutions. These machines offer advice, or make decisions, in complex ways that sometimes escape human understanding.
The work of the Institute on Governance is increasingly focused on what I like to call “the challenges of public sector governance in the 21st century.” And it’s clear that AI is one of those challenges.
Harper Reed, American entrepreneur, engineer, futurist and self-described “hacker,” has said that we need to think of artificial intelligence as an extension of humans rather than their replacement. We could also consider it an extension of government in the policy and operational spheres.
Polls currently show that citizens’ confidence in all institutions, including government, is declining. This decline is attributed to the acceleration of the pace of change and the slowness of institutions to respond to it. Since many citizens simply believe that their institutions are no longer able to solve their problems, this decline in confidence leads to a rise in populism. In this “post-truth” era, debates focus more on emotions than on objectively verifiable facts. We are witnessing the collapse of fact-based problem solving and an increase in identity-based politics.
Technology is the main driver of the speed of change that governments now face, whereas public sector governance is in many ways rooted in the past as it searches for social consensus. But reaching such consensus takes time. When we try to push through reforms too hastily and without consensus, public unease quickly follows.
Meanwhile, emerging digital technologies are evolving rapidly. Citizens therefore expect, through these technologies, greater inclusion in decision-making and better consideration of their needs. Klaus Schwab, from the World Economic Forum, says that society is going through the “fourth industrial revolution.” He notes that this emerging era could lead to increased economic disparity, a feeling of injustice, and even social unrest.
From the point of view of government services, rapid advances in the technology sector have significantly altered the expectations of citizens, who are influenced by their experiences with Google, Amazon and Facebook. Their interactions with government are increasingly out of sync with the quality, speed and user experience they have grown accustomed to in other aspects of their lives.
In parallel, think of governments’ growing difficulties in procuring technology and the failures of digital projects—the Phoenix pay system comes immediately to mind. Public sector organizational cultures and processes take time to adapt to the more agile approaches that characterize the digital age.
That’s in part why the use of artificial intelligence in public administration poses many challenges. But it also offers unique opportunities.
We all know that discretion and decision-making in the public sector are strongly influenced by the information that is available. And the capabilities of cognitive machines exceed those of humans in many areas and for many tasks. AI therefore has the potential to improve the quality, cost and speed of administration and of services to citizens.
This technology is also significantly changing the nature of risks to good governance: it is broadening existing threats to governance actors, introducing new ones, and modifying the characteristics of the threats themselves. For example, how does this new technology affect the fields of public law, administrative law, human rights and the right to privacy?
The rule of law, in principle, must ensure that the law is administered in a transparent and predictable manner. This provides a form of guarantee to those who are affected by these laws. But what are the legal implications when we use AI for administrative decisions? How can the obligations of procedural fairness be ensured when decisions are automated? How can we defend this kind of decision-making?
The ambivalence of this technology carries a series of risks that should not be underestimated. Asking the right questions is not enough; we must also reflect on the frameworks we need in order to use AI wisely.
From a policy point of view, technological change is a source of disruption and significant pressure: consider the regulation of autonomous vehicles, or the impact of “fake news” on the electoral process, all amplified by social networks.
AI is one of these disruptive technologies. Its proponents see it as the way of the future, a solution for improving services to citizens and for developing evidence-based policies. Others see the potential to reinforce or aggravate existing inequalities in our society if it is not used ethically and responsibly.
I remember the discussions we had 30 years ago about the impact that the Internet would have on government and governance. And 20 years ago, the discussions on the impact of social media. In both cases, some were convinced that these new technologies would transform government and save the world. There were also those who thought it was the beginning of the end.
The reality, as always, is more complex. Over the last two decades, these tools and “online platforms” have given us the opportunity to better engage with citizens and make our democratic governance systems more open. We have also seen the disadvantages of misusing them for disinformation and division. The way a new tool is used determines its impact on society, and artificial intelligence is no exception. This raises the fundamental question of governance.
Let’s be honest: in the last two revolutions, the Internet and social media, governments were asleep at the switch. Rather than shaping the future, governments mostly just reacted. And even when they did react, they did so without the awareness and foresight to understand the impact these technologies would have on society and its institutions. In many cases, it was too little, too late to introduce new regulations or put new governance systems in place.
We cannot afford to adopt the same approach to artificial intelligence. Our public institutions must be ready now for any governance challenges to come.