Weeknote 39 2023

§1 Bianca Wylie on Canada’s Failing AI Regulatory Process

Bianca Wylie is a writer and an open government and public technology advocate with a dual background in technology and public engagement. She’s become increasingly uncomfortable with the AI regulatory process in Canada and she joins the Law Bytes podcast to provide her thoughts about AIDA, generative AI regulation, and a process she believes is in dire need of fixing.

Law Bytes, Episode 178: Bianca Wylie on Canada’s Failing AI Regulatory Process
September 26, 2023 • 00:50:22, Michael Geist

§2 An Open Letter in regards to Bill C-27

I learned of this from Open Media:

SEPTEMBER 25, 2023 — Today 45 leading civil society organisations, experts and academics released an open letter to Innovation, Science and Economic Development (ISED) Minister François-Philippe Champagne outlining key concerns with the current draft of the Artificial Intelligence and Data Act (AIDA), currently wrapped into the government’s proposed privacy bill, Bill C-27.

September 25th, 2023, Advocates demand proper consideration for AI regulation

Organizations that signed this letter include:

  1. BC Civil Liberties Association
  2. Canadian Civil Liberties Association
  3. Digital Public
  4. International Civil Liberties Monitoring Group
  5. OpenMedia
  6. Tech Reset Canada
  7. Rideau Institute on International Affairs
  8. Open North
  9. Just Peace Advocates
  10. Digital Justice Lab
  11. Public Interest Advocacy Centre
  12. Centre for Digital Rights
  13. Women’s Legal Education and Action Fund (LEAF)
  14. Ligue des droits et libertés
  15. Freedom of Information and Privacy Association
  16. The Centre for Free Expression
  17. Amnistie internationale Canada francophone
  18. Amnesty International Canadian Section (English speaking)
  19. Inter Pares

You can find briefs from some of these organizations at the first meeting of the Standing Committee on Industry and Technology, held on Thursday, September 28, 2023.

§3 The Government intends to take an agile approach

In the AIDA Companion document, you can find this curious phrase:

Specifically, this paper is intended to address both of these sets of concerns and provide assurance to Canadians that the risks posed by AI systems will not fall through the cracks of consumer protection and human rights legislation, while also making it clear that the Government intends to take an agile approach that will not stifle responsible innovation or needlessly single out AI developers, researchers, investors or entrepreneurs. What follows is a roadmap for the AIDA, explaining its intent and the Government’s key considerations for operationalizing it through future regulations. It is intended to build understanding among stakeholders and Canadians on the proposed legislation, as well to support Parliamentary consideration of the Bill.

The Artificial Intelligence and Data Act (AIDA) – Companion document

I’m going to ignore the phrase “needlessly single out” for the moment, and instead bring your attention to the fact that the government has stated that it is going to take an agile approach to regulation.

Agile is a software development methodology that originated in 2001 with this manifesto:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Manifesto for Agile Software Development

I guess we can always hope that this will mean that the government will strive for transparent and public workflows, that it will consider its work as defined and expressed in user stories and user experiences, and that it will continuously examine its processes, stop what is not working, and make the immediate changes needed to serve and protect all Canadians.

§4 A Guide on the use of Generative AI from the Government of Canada

I found this Guide on the use of Generative AI from the Government of Canada to be a more inspiring document. It includes this text:

To maintain public trust and ensure the responsible use of generative AI tools, federal institutions should align with the “FASTER” principles:

  • Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations
  • Accountable: take responsibility for the content generated by these tools. This includes making sure it is factual, legal, ethical, and compliant with the terms of use
  • Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected
  • Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; document decisions and be able to provide explanations if tools are used to support decision-making
  • Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
  • Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to improved outcomes for Canadians; identify appropriate tools for the task; AI tools aren’t the best choice in every situation

I’m going to spend more time with this document. It addresses some issues related to generative AI tools that I have rarely seen elsewhere. Like this passage:

Public servant autonomy

Issue: overreliance on AI can unduly interfere with judgment, stifle creativity and erode workforce capabilities

Overreliance on generative AI tools can interfere with individual autonomy and judgment. For example, some users may be prone to uncritically accept system recommendations or other outputs, which could be incorrect. Overreliance on the system can be a sign of automation bias, which is a tendency to favour results generated by automated systems, even in the presence of contrary information from non-automated sources. As well, confirmation bias can contribute to overreliance because the outputs of generative AI systems can reinforce users’ preconceptions, especially when prompts are written in a way that reflects the user’s assumptions and beliefs. Overreliance on AI systems can result in a decline in critical thinking and can limit diversity in thought, thereby stifling creativity and innovation and resulting in partial or incomplete analyses. Overreliance on AI can impede employees’ ability to build and maintain the skills they need to complete tasks that are assigned to generative AI systems. This could reinforce the government’s reliance on AI and potentially erode workforce capabilities.

Guide on the use of Generative AI

§5 What Should Students Pay for University Course Readings?

“What Should Students Pay for University Course Readings? An Empirical, Economic, and Legal Analysis” is a 2022 article by John Willinsky and Catherine Baron that I only became aware of thanks to a short Slaw column by John called “Joining the Call for Canadian Copyright Reform Now”:

Catherine Baron and I explored the issue in a 2022 study that examined over 3,000 English- and French-language course syllabi from 34 Canadian universities.

As I shared earlier in this column, we examined the type of readings students were being assigned, finding that 90 percent, by page count, were from academic publications, whose authors are on university payrolls with the publications often subscribed to by the university library…

… In our study, at least, we found this work amounted to ten percent of the assigned university readings or roughly 63 pages a student annually, which at the time amounted to, at Copyright Board rates ($0.021/page), an annual fee of $1.33 per student.
