Ethical AI Development

On May 27th, Innovation, Science and Economic Development Canada (ISED) announced that eight more organizations have signed on to an agreement on the ethical management of Artificial Intelligence (AI) systems in Canada. The agreement is called the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

Generative AI

This announcement is part of the tech sector's ever-increasing focus on AI, a focus that extends to the space sector. In particular, both businesses and investors have devoted a great deal of interest and money to building so-called "Generative AI" (GenAI), which uses large language models (LLMs) such as OpenAI's GPT series and image generators like Stable Diffusion to create convincing simulations of human-made text and art. That output can include computer code, such as the HTML, JavaScript, and Cascading Style Sheets (CSS) used to build websites. OpenAI has even made some initial forays into AI-generated video.

In the space sector, machine learning models have been a familiar part of the landscape for years, and serve as a key focus for companies developing satellite-based edge computing and hyperspectral imaging analysis. GenAI has not played as great a role yet, though Metaspectral has said that they have been working on "using GenAI to automate complex tasks to make the data format more accessible," and it's expected that GenAI will appear in space-based scenarios just as it has in terrestrial ones.

With that intense interest has come intense scrutiny over where this is all going. While some have pointed to classic science fiction scenarios of artificial intelligences run amok, more practical questions have been raised about GenAI: what effects it is having on the job market, whether it might reproduce or even reinforce biased organizational decision-making that affects marginalized groups, and whether it violates the intellectual property rights of the artists, writers, and other humans whose work the GenAI models have been "trained" on.

In some cases, these concerns have led to legal action, most notably in the New York Times' suit against OpenAI and Microsoft regarding training data for their ChatGPT GenAI model, and the U.S. Federal Trade Commission's investigations into OpenAI and Nvidia.

Code of Conduct

With all those issues in mind, it's no surprise that ISED has taken steps to develop a framework aimed at resolving these concerns and ensuring that AI is developed ethically. That effort led to the creation of the Code of Conduct that these companies and others have agreed to adhere to.

The code emphasizes six key outcomes that signatories must focus on: accountability for their systems; safety, including risk assessments; potential impacts regarding fairness and equity; enough transparency to allow consumers and experts to make informed decisions about the AI models; human oversight and monitoring of the models; and robustness of the systems "in response to the range of tasks or situations to which they are likely to be exposed," including potential cyber attacks.

ISED said that "the code is based on the input received from a cross-section of stakeholders through these engagements and through the consultation on the development of a Canadian code of practice for generative AI systems."

The eight organizations mentioned in the May announcement are:

  • Kyndryl, an IT infrastructure services provider;
  • Alloprof, a Quebec charity focused on helping students with homework;
  • Lenovo, a computer maker;
  • Levio, a technology consulting firm;
  • Mastercard;
  • Salesforce;
  • MaRS Discovery District; and
  • Organisme d'autoréglementation du courtage immobilier du Québec (OACIQ), a real estate brokerage authority.

Including the eight organizations mentioned in the announcement, 30 have signed on in total. 

In the ISED announcement, François-Philippe Champagne, Minister of Innovation, Science and Industry, said that "leading Canadian organizations continue to adopt responsible measures for advanced generative AI systems that will help build safety and trust well into the future."

Meanwhile, Mastercard Canada president Sasha Krstic said that while AI will "unlock profound opportunities to enable solutions that benefit everyone," "people will need to trust it… [a]nd that starts with foundational practices that respect and protect individual rights and communities." He added that the code of conduct "is an important step forward in creating a unified framework for responsible data-driven innovation."

Alison Nankivell, Chief Executive Officer at MaRS Discovery District, said that MaRS is "dedicated to upholding these standards to cultivate an ecosystem where innovation flourishes ethically and responsibly," while Nadine Lindsay, President and CEO of OACIQ, said that the code "demonstrates the importance of adopting an ethical approach in this field of innovation."

In a separate announcement, Lenovo's Senior Vice President and Chief Security Officer, Doug Fisher, said that the company joined the Code "to support Canada in creating a responsible AI ecosystem," and that "we will continue working to deepen our engagement in Canada and deliver smarter AI for all." Lenovo also pointed to its support of the UNESCO Recommendation on the Ethics of Artificial Intelligence.

Craig started writing for SpaceQ in 2017 as their space culture reporter, shifting to Canadian business and startup reporting in 2019. He is a member of the Canadian Association of Journalists, and has a Master's Degree in International Security from the Norman Paterson School of International Affairs. He lives in Toronto.
