Canada artificial intelligence strategy. Credit: SpaceQ/Adobe.

The Canadian government has announced a new “Sovereign AI Compute Strategy,” fulfilling a promise from the 2024 Budget, while questions swirl over how to approach international governance and regulation of artificial intelligence. 

Canadian Sovereign AI Strategy

Last week, the federal government’s Department of Innovation, Science and Economic Development (ISED) announced that it would focus on developing a “Sovereign AI Compute Strategy.” This elaborates on the previous announcement that Budget 2024 would feature a $2 billion investment in artificial intelligence in Canada over five years, starting in 2024-2025. ISED said in its announcement that this strategy follows input from “more than 1,000 stakeholders from research, industry and civil society.”

According to ISED, up to $1 billion will go towards “public supercomputing infrastructure,” as part of a “transformative investment…that will meet the needs of researchers, government and industry.” ISED said that this would include several different investment targets over the long term: a “large sovereign supercomputing facility” that “supports researchers and a cross-section of industry,” with a call for proposals starting in spring 2025; and a “smaller secure computing facility to be led by Shared Services Canada and the National Research Council of Canada,” which will be used by government and industry to perform research and development, notably “including for national security purposes.”

In the short term, this public infrastructure will include up to $200 million for “existing public compute infrastructure to address immediate needs.” This expansion of capacity “could be delivered by expanding existing public infrastructure”, such as that provided by the National Research Council or the Digital Research Alliance of Canada. This decision, ISED said, was in response to stakeholders emphasizing the importance of “acting now to address the compute capacity issue.” 

There will be two other major programs as part of the AI Strategy: an AI Compute Access Fund, and investment into mobilizing the private sector through an “AI Compute Challenge.” Up to $700 million will be invested “to support the Canadian AI ecosystem” through the AI Compute Challenge, which will “focus on projects from companies, consortiums and academic-industry partnerships.” ISED elaborated that “preference will be given to proposals leading to fully integrated AI data-centre solutions ready for commercial deployment,” and that proposals should “maximize private and public funding” as well as “have meaningful participation by Canadian industry.” 

ISED added that applications are being accepted on an ongoing basis, and that interested parties should contact the Strategic Innovation Fund for a consultation.

The Compute Access Fund will be an investment of up to $300 million that will, according to ISED, “support the purchase of AI compute resources by Canadian innovators and businesses.” The aim is to “address barriers to AI development in sectors that require high-performance computing capacity and have high potential for AI adoption.” More details will be shared when the program is launched in spring 2025.

François-Philippe Champagne, Minister of Innovation, Science and Industry, said that this will be “a major step toward securing Canada’s place as a global AI leader”, and “will help businesses, innovators and researchers boost the Canadian economy and stand out on the world stage.”

UN calling for increased AI governance

On that world stage, however, questions are being raised about implementing international governance over national-level AI development. 

New UN report. Credit: United Nations.

In September, the OECD and the UN announced “a new enhanced collaboration” on the question of global AI governance. The two organizations will “focus on regular science and evidence-based AI risk and opportunity assessments,” according to the announcement, as well as “leverage their respective networks” to help “foster a globally inclusive approach.”

UN Under-Secretary General Amandeep Singh Gill said that “the speed of AI technology development and the breadth of its impact requires diverse policy ecosystems to work more cohesively and in real time,” and that they would be willing to “work with all stakeholders” to address this governance challenge. OECD Deputy Secretary General Ulrik Vestergaard Knudsen, who was also at the announcement, said that the joint effort would “help countries to seize all the opportunities of AI while mitigating and better managing the associated risks and disruptions to foster human-centred, safe, secure and trustworthy AI.”

This exhortation to foster “human-centric” and “trustworthy” AI closely followed the release of the United Nations’ Advisory Body on Artificial Intelligence Final Report, “Governing AI for Humanity.” The report emphasizes that AI “offers tremendous potential for good,” but sounds a cautionary tone, saying that the benefits “may not manifest or be distributed equally,” which could lead to the only real beneficiaries being “a handful of states, companies and individuals.”

The ongoing issues of biased AI models and so-called “hallucinations” (the confident assertion of verifiable falsehoods by large language models) were directly called out in the Report as potentially serious issues, as was the well-known problem of the “energy consumption of AI systems at a time of climate crisis.” The Report emphasized that the need for “global governance” is critical given these challenges, especially since “AI’s raw materials, from critical minerals to training data, are globally sourced,” and since “no one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution.”

The development of AI “cannot be left to the whims of markets alone,” according to the UN Advisory Body, but since there are no regulations or agreements that are “global in reach and comprehensive in coverage,” gaps are quickly appearing between various national and supra-national approaches to the AI issue. This will need to be addressed, they said, through a “holistic vision for a globally networked, agile and flexible approach to governing AI for humanity.”

The Advisory Body made a number of recommendations: 

— An “international scientific panel” of “diverse multidisciplinary experts” that can provide globally-available reports on the international state of AI development and where new research is needed; 

— A “policy dialogue on AI governance” that features “a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations;” 

— An “AI standards exchange” that would help develop “a register of definitions and applicable standards;”

— A “capacity development network” that would “make available trainers, compute and AI training data across multiple centres to researchers and social entrepreneurs seeking to apply AI to local public interest use cases;”

— A “global fund for AI” to “put a floor under the AI divide” between countries and regions, including shared computing resources and datasets; 

— A “global AI data framework” that would help to establish common standards around AI training data provenance and “rights-based accountability across jurisdictions,” as well as “model agreements for facilitating international data access and global interoperability;” and

— An AI office within the Secretariat that would report to the Secretary General, one that is “light and agile in organization” and acts as “the glue that supports and catalyzes the proposals in this report.”

These kinds of issues were also highlighted recently at the “DigitAI for ALL” panel organised by the UN Development Program (UNDP) in Serbia, at the Global Partnership on Artificial Intelligence summit. Representatives from Canada, France, the United Kingdom, Serbia, and the Centre for the Fourth Industrial Revolution (C4IR) were at the panel, and discussed the need for governance of AI, especially where it risks “deepening existing inequalities and reinforcing stereotypes.” 

Heriberto Tapia, Head of Research at UNDP’s Human Development Report Office, said in his keynote that “artificial intelligence and digital technologies have a significant social and cultural impact on all stages of life,” including “the physical development of children and early brain development to the skills and mental health of students and even adults,” and emphasized issues like “increased stress levels and faster spread of misinformation” and “the displacement of jobs due to automation” as factors to consider.

In turn, H.E. Michelle Cameron, Ambassador of Canada to Serbia, said that “ensuring AI is used effectively and ethically…is the responsibility of all states and other stakeholders,” and that “only by working together, through forums such as the GPAI, can we achieve this globally.” 

Concern in Europe about AI Act “narrative” leading to AI exodus

Yet a new debate in the EU is illustrating the challenges involved in attempting to govern AI. The former AI minister of Spain and current co-chair of the UN Advisory Body that authored “Governing AI for Humanity,” Carme Artigas, said that EU companies are bristling at the EU’s own restrictions on AI development, and that this could create serious issues going forward.

According to Science|Business, Artigas has “denounced” accusations that the European Union’s AI Act has led to over-regulation of digital technologies in the region and discourages investment in AI in Europe; she was quoted at the Europe Startup Nations Alliance’s (ESNA) 2024 Forum as calling the claim an “absolute lie.” Artigas said that she believes this is a deliberate move to “disincentivize investment in Europe and make our start-ups cheaper to buy,” adding that European AI start-ups are “buying that narrative” and increasingly looking to move to the United States.

Lucilla Sioli, head of the European Commission’s AI Office, was quoted as saying at the ESNA Forum that “you need the regulation to create trust, and that trust will stimulate innovation”. While the EU’s AI Act is the first of its kind, Sioli said that compliance is actually “not too complicated” and “affects only a really limited number of companies” developing especially high-risk applications. Even in those cases, Sioli said, companies “mostly have to document what they are doing, which is what I think any normal, serious data scientist developing an artificial intelligence application in a high-risk space would actually do.”

Sioli said that the Commission needs to “really explain these facts” so as to avoid a potential AI exodus to the lightly-regulated United States AI scene.

Canada may face a similar exodus problem. BetaKit reported that at least one prominent AI startup is quietly moving to the United States, despite institutions like the Vector Institute, and despite Canadian Geoffrey Hinton’s now Nobel-winning role as the “godfather of AI” who developed fundamental technologies underpinning modern AI.

In the case of that company, AI chipmaker Tenstorrent, the move followed a successful Series D funding round that raised more than $693 million USD. BetaKit reported that the round itself may have prompted the relocation: one of the larger investors wanted to increase their stake but was limited in how much they could invest in a non-American company. Given the choice between its Canadian home and this American funding, Tenstorrent appears to have chosen the United States.

This kind of move, and the issues it reveals, may have helped spur the Federal Government’s investment in “sovereign” AI infrastructure. It also illustrates the serious challenges that Canada, the United Nations, the European Union, and the rest of the world face in governing the development of artificial intelligence when the sector, like Tenstorrent, may be choosing to concentrate heavily in America, under American governance and using American investment dollars.

Will this increasingly Americanized industry accept any global role in its governance, considering its effects on both global climate and the global economy? That’s the question of the hour.

Craig started writing for SpaceQ in 2017 as their space culture reporter, shifting to Canadian business and startup reporting in 2019. He is a member of the Canadian Association of Journalists, and has a Master's Degree in International Security from the Norman Paterson School of International Affairs. He lives in Toronto.
