Collaborative AI Governance
A Policy Roadmap for Canada and the UK
Ann M. Fitz-Gerald, et al. | 2024.11.13
The UK and Canada, leveraging their strengths as trusted middle powers, are well-positioned to lead in setting global AI standards, fostering ethical, responsible and innovative AI governance.
AI has moved out of the academic sphere and is rapidly becoming a “general-purpose” technology with a significant impact on industry, government and society. As a result, there is an urgent need for countries to develop a cross-sectoral environment for AI in order to attract foreign direct investment, participate actively in global trade, and meet both economic and national security goals.
However, in parallel with the technological investment in and development of this capability, robust governance systems must be in place to ensure that the development and application of AI are trustworthy.
Such trust, predictability and protection are particularly important at a time when collaboration between global powers and smaller economies is vital, but when different forms of regulation may be better suited to those smaller economies and political structures. Moreover, the so-called “rules-based order” that once governed global interactions is now inconsistently applied. This presents a unique opportunity for like-minded middle economies to unite and collectively set the highest standards for AI’s ethical and responsible use in ways that uphold their strong commitment to democratic principles, human rights and freedoms.
In times of global uncertainty, the UK and Canada have consistently been viewed as trusted partners and “safe pairs of hands”. Both countries have respected political systems, strong higher education institutions, a shared monarchy, similar legal and regulatory regimes, and common military alliances (NATO and the “Five Eyes”). While many middle economies wait for powers like the US and the EU to set the standards and rules for data and AI in the digital world, progress has been slow. It is therefore crucial to take stock of the milestones achieved in countries such as Canada and the UK, and to consider forming a coalition of middle economies to set and influence global standards in this area.
Both countries are navigating their roles as middle powers in the global AI race, which is dominated by the US, the EU and China. The UK and Canada ranked third and fifth respectively among the 2024 top 10 countries leading in AI research and technology, with the UK’s AI market valued at $21 billion and Canada home to 144 generative AI-related startups; in 2023, they scored 38.1 (UK) and 34 (Canada) among the leading 20 AI countries by research capacity. The two countries therefore have significant potential to collaborate on AI governance and on establishing trustworthy and responsible AI cooperation. Demis Hassabis and Geoffrey Hinton, two thought leaders in AI research and practice, are both British, with Hinton having spent a significant part of his career in Canada – which is also home to other leading global AI scholars like Yoshua Bengio.
Canada and the UK’s Progress and Existing Challenges
Progress in data privacy and protection in the UK is an excellent example of progressive national efforts in the data-driven and digitalised world. These efforts have already resulted in comprehensive legislative frameworks like the Data Protection Act 2018 and UK-GDPR, both of which align with the EU’s GDPR to safeguard personal data across various sectors. The UK has also made progress in sector-specific privacy advancements, particularly in health informatics, with initiatives such as NHS Digital implementing stricter controls on patient data access through the Data Security and Protection Toolkit. Innovations like the UK National Data Strategy and the UK National AI Strategy highlight the government’s commitment to ensuring the responsible use of data while fostering innovation in digital services and AI applications. The Alan Turing Institute, the UK’s national institute for data science and AI, focuses on advancing world-class research and applying it to national and global challenges, developing skills and training for the future, and driving informed public conversation. The Ada Lovelace Institute develops research, policy and practice to ensure that data and AI are used in ways that maximise social wellbeing and put technology at the service of humanity.
Canadian progress in AI ethics and regulation is also worthy of note. Canada has been at the forefront of promoting ethical AI through the Pan-Canadian Artificial Intelligence Strategy and Algorithmic Impact Assessment tools. As Ontario’s Information and Privacy Commissioner, Dr Ann Cavoukian introduced the seven foundational principles of “Privacy by Design”, which directly informed the development of the EU’s GDPR. Currently, the Canadian government is actively working on regulations – including the proposed AI and Data Act (AIDA) – to ensure transparency, fairness and accountability in developing and using AI systems. Canada’s strength in AI research, notably through the Canadian Institute for Advanced Research and the Centre for International Governance Innovation, has positioned the country as a global leader in AI ethics and governance, encouraging responsible innovation. Institutes such as the Vector Institute (Ontario) and Mila (Quebec AI Institute) contribute significantly to research on ethical AI and to developing tools for assessing algorithmic fairness.
The key challenge lies in integrating these ethical standards into broader legal frameworks and ensuring uniform implementation across provinces, particularly given Canada’s decentralised governance. There are also significant concerns regarding the ethical use of AI in public services, such as law enforcement, where algorithmic bias and surveillance risks must be carefully managed. Balancing the rapid development of AI technologies with ethical considerations and privacy preservation remains a complex and ongoing issue, particularly as AI systems become more pervasive in sectors like healthcare, education and finance. Despite progress on ethical frameworks, the lack of comprehensive national AI legislation presents hurdles in ensuring consistent oversight across industries.
This progress notwithstanding, challenges remain in the UK, particularly in balancing innovation with compliance, as industries such as health informatics seek more flexible use of personal data while maintaining privacy. The issue of cross-border data flows post-Brexit presents regulatory complexities, especially with potential data adequacy rulings from the EU. The rapid advancement of AI and big data analytics in sectors like finance and healthcare tests the limits of existing frameworks, prompting ongoing discussions about updating privacy laws to cover emerging technologies like AI-driven decision-making systems and biometric data usage.
Current Collaborations and Joint Efforts
Canada and the UK have been actively fostering collaboration on AI-related initiatives. For example, in 2019, the UK-Canada AI Initiative was launched to fund collaborative research projects between the two countries. This initiative focuses on AI’s application in areas like health and environmental sustainability and on the ethical challenges posed by AI technologies. The goal is to leverage the AI expertise of both countries while promoting research that aligns with their shared values of transparency, fairness and privacy protection. The 2024 UK-Canada science of AI safety partnership serves as another collaborative initiative.
The UK and Canada are both active internationally in advocating for the responsible use of AI. They are co-signatories of the 2019 OECD AI Principles, the 2023 G7 Ministers’ Statement, the 2023 Bletchley Declaration and the 2024 Seoul Ministerial Declaration, and both took part in the 2022–2024 drafting group for the Council of Europe AI Convention. They have also both participated in the Global Partnership on Artificial Intelligence. This multilateral engagement runs in parallel with national frameworks such as the Pan-Canadian Artificial Intelligence Strategy and the UK National AI Strategy, both of which are oriented towards building AI ecosystems that prioritise ethical standards, transparency and fairness.
How Can Canada and the UK Make a Difference?
Building on the long history of UK–Canada collaboration in emerging technologies, the two countries’ shared values, similar political cultures and diverse populations, and the novel approaches and practices already established and implemented in both countries, joint efforts could be enhanced in the following areas:
Data Governance and Data Protection
Both countries could build a bilateral data governance framework that enhances data protection while facilitating secure data sharing for research, public health, and economic growth. This would ensure that data governance aligns with ethical standards and that personal data is protected across both countries while enabling the flow of information critical to innovation and commerce.
For example, the US–EU Privacy Shield Framework offers a reference point for cross-border data governance. Although invalidated in 2020, it was initially designed to facilitate data transfers while ensuring compliance with stringent privacy laws on both sides of the Atlantic. A UK–Canada data governance initiative could draw lessons from this experience to build a framework that maintains high data privacy standards without impeding cross-border innovation and AI research.
Development of Cybersecurity and Defence Against Hybrid Informational Threats
Joint investment in cybersecurity could include cooperation on AI-driven threat detection, defensive measures against cyberattacks, and countermeasures to combat disinformation and hybrid attacks. This could involve research collaborations between top universities in both countries, pooling expertise to create cutting-edge tools that protect national digital infrastructure.
Australia and Singapore’s cybersecurity cooperation is a strong example of two countries collaborating to enhance cyber defences. Under their Comprehensive Strategic Partnership, Australia and Singapore signed a Memorandum of Understanding in 2017 to cooperate on cybersecurity initiatives. This partnership includes joint cyber threat exercises, collaboration on cyber capacity building, sharing good practice on cybersecurity governance, and developing common frameworks for cyber risk management. The UK and Canada could adopt a similar model by establishing a formal cybersecurity cooperation framework focusing on joint threat intelligence sharing, collaborative defence against cyberattacks, and coordinated responses to disinformation and hybrid informational threats.
Investment in AI Startups and Innovation
The AI ecosystem in the UK and Canada is thriving, with solid support for startups and innovation hubs. The UK’s Alan Turing Institute and Canada’s Vector Institute are global leaders in AI research, and both countries have numerous AI startups working on cutting-edge innovations. Earlier this year, Innovate UK invited UK companies developing AI for improving healthcare to apply to join its Global Incubator Programme in collaboration with MaRS Discovery District in Toronto, Canada. However, more can be done to foster cross-border investment and collaboration in this space. A bilateral initiative to increase investment in AI startups could include creating joint AI venture capital funds, protection for creator rights and intellectual property, support for incubators, and cross-border mentorship programmes.
Franco-German AI collaboration is a prime example of bilateral cooperation aimed at fostering innovation. By signing the Aachen Treaty in 2019, the two countries solidified their collaboration in research and digital transformation.
Fostering Responsible AI Talent and Establishing Educational Exchange Projects
As AI continues to grow, the need for a skilled AI workforce has become critical. The UK and Canada are home to world-class universities and AI research institutions that have already made strides in educating the next generation of AI talent. However, the demand for responsible AI practitioners is rising, and there is room for deeper cooperation in building the skills pipeline.
The MIT-IBM Watson AI Lab is an example of collaboration between academia and industry on AI: through this partnership, researchers from both institutions work together on AI research, with a focus on responsible AI development.
Enhancing Healthcare and Public Health Systems with Trustworthy AI
Canada and the UK have advanced healthcare systems and have increasingly adopted AI to improve healthcare delivery, particularly during the Covid-19 pandemic. AI technologies in healthcare can enhance diagnostics, predict patient outcomes, and streamline public health responses. However, AI’s use in healthcare must be carefully regulated to ensure it is trustworthy, ethical and aligned with public trust.
Canadian–French health cooperation, including the Future of Healthcare using Artificial Intelligence mission, focuses on establishing industrial R&D collaboration opportunities in the digital health sector (specifically, technology co-development, validation and adaptation), and is designed to promote discussion of commercialisation-oriented R&D collaboration opportunities.
Towards a Deeper Future UK–Canada Cooperative Initiative on AI
A proposed cooperative initiative must grow to encompass more countries than just the UK and Canada. It must include other leading middle economies that are deeply committed to these issues and working to develop national capacities for AI- and data-driven systems. Such a cooperation framework could serve as a model for establishing bilateral agreements and enhancing collaboration among other middle economies within the G20 and beyond. It could also encourage countries like the Netherlands, Australia, South Korea, New Zealand and Japan to strengthen their efforts in developing ethical frameworks for responsible and efficient AI governance.
Ann Fitz-Gerald is a Professor of International Security and the Director of the Balsillie School of International Affairs in Waterloo, Canada.
Carsten Maple is a Professor of Cyber Systems Engineering at the University of Warwick’s Cyber Security Centre.
Halyna Padalko is a PhD student in Computer Science at the National Aerospace University (KhAI) and a Fellow at the Balsillie School of International Affairs.