AI presents potential, and aspects that should give us pause
By Rhonda Moore, Executive Director, Science & Innovation
Published in the Hill Times on October 20, 2025
Complex technologies, whether AI or the next frontier in defence research, climate change, or cancer, require support from the social sciences and humanities to situate each new technology within the social and ethical norms by which we live.

It is the absence of social and ethical considerations, explored in tandem with the development of a technology, that prevents us from understanding its full impact, writes Rhonda Moore. Photograph courtesy of Tung Lam, Pixabay.com
On Oct. 8, The Weekly Show podcast dropped a 100-minute episode with guest Dr. Geoff Hinton, University of Toronto Professor Emeritus and the “godfather” of AI. In the episode, Hinton uses plain language to explain what AI is, how it works (with helpful analogies), the potential it presents, and the aspects that should give us pause.
Indeed, if you aren’t sure what AI is or what all the fuss is about, the episode is worth your time. If that is you, don’t worry, you’re not alone. An August 2025 poll by Leger finds that Canadians are divided on this topic: 34 per cent of Canadians say AI is good for society, 36 per cent believe it is harmful, and 31 per cent are unsure. Yet 85 per cent want AI to be regulated by government to ensure safe and ethical use.
The ethical concerns and considerations of AI are something Hinton and host Jon Stewart discuss at length in the podcast. In the interview, Hinton admits that he didn’t think about the ethical considerations until “it was far too late.” The fault is not Hinton’s, but a byproduct of how we fund research in Canada.
In Canada, much academic research is funded through one of three federal research-granting agencies. The Canadian Institutes of Health Research funds research in medicine and health. The Natural Sciences and Engineering Research Council (NSERC) funds research that advances our understanding of the natural world (e.g., biology, chemistry, physics), engineering, and mathematics. The Social Sciences and Humanities Research Council funds work that helps us better understand how people think and act individually (e.g., psychology, sociology) and in groups (e.g., anthropology, political science), how we motivate each other (e.g., behavioural psychology, economics), and our expressions of culture, also known as the humanities (e.g., literature, philosophy, and religion).
Researchers may separately seek funds from different granting agencies in order to collaborate, but no mechanism requires an NSERC-funded researcher (like Hinton) to partner with someone who can help him think through the social and ethical dimensions of his work. Nor would the funds he receives from NSERC allow him to hire someone for that purpose.
The real world is not organized by narrow disciplines. It’s time we stopped funding research that way. Complex technologies, whether AI or the next frontier in defence research, climate change, or cancer, require support from the social sciences and humanities to situate each new technology within the social and ethical norms by which we live. Our failure to do so forces us to confront Collingridge’s dilemma each time we adopt a new technology: when a technology is in its early stages, it is easy to control because its full impact is not yet understood; by the time that impact becomes clear, the technology is too widely adopted to be easily altered or regulated. Indeed, it is the absence of social and ethical considerations, explored in tandem with the development of a technology, that prevents us from understanding its full impact, and so we struggle, or fail, to effectively regulate its use.
Experimental governance offers a solution. Through early, continuous collaboration among stakeholders, the full range of societal risks can be explored. Such an approach fosters transparency and trust, and allows stakeholders to co-develop accountability structures. A proactive approach also presents an opportunity to start with what is possible and work towards what is desirable: the AI tools and resources Canadians deserve.
The Science Writers and Communicators of Canada (SWCC), like many professional associations, has concerns about the social and ethical implications of AI, and is ready to be part of the solution. In March 2025, the SWCC polled its members (science writers, science communicators, and science journalists) about the use of generative AI. The survey findings demonstrate a nuanced understanding of the potential, and the threats, that generative AI presents for these professions. In response, the SWCC has developed a series of guidelines for the ethical use of generative AI, freely available on its website. As to what Canadians deserve? To start, accurate information about evolving AI capabilities, risks, and best practices.