The official write-up of the December 11th panel organized by Library Futures is now available. It includes a very useful list of links to recommended readings from the panelists and from the very active chat.
For the final session of our series on Machine Learning and Artificial Intelligence for Information Professionals, Library Futures hosted Dr. Andrea Baer, Associate Professor of Practice in the School of Information at the University of Texas-Austin; Dr. Damien Patrick Williams, Assistant Professor of Philosophy and Data Science at the University of North Carolina Charlotte; and Mita Williams, Law Librarian for the Don & Gail Rodzik Library at the University of Windsor for a lively conversation about what happens when generative AI enters the spaces where students research, learn, and work.
My slides and notes from the talk (what I was supposed to have said, not necessarily what came out) are below.

Hello everyone! Before I get started, I first want to thank Library Futures for hosting this series of discussions about AI. It’s an honour and a pleasure to be part of this conversation.
And because I want lots of time for conversation, I’m going to do my best to keep my presentation as close to five minutes as I can. So with that: here’s where we are going to start and where we are going to end up in this short talk.
I’m going to tell you two very short stories about learning.
I’m going to tell you two very short stories about teaching.
And I’m only going to present a single slide that places GenAI in a library context, but please know that there is more library material to be found in the bibliography.
(This is going to be like a subway ride – there’s not going to be a lot of transitions between stops!)

Let’s start. Here’s short story number one.
Can you remember being a child and trying to learn how to tie your shoelaces? You probably don’t even remember the steps involved in tying your laces, because it’s now second nature. As a child, it took all of your concentration; now it takes none. You don’t even have to think about it. That’s what it means to learn something.

Here’s short story two. In 2019, a psychology lab at the University of Richmond found that rats could learn how to drive little rodent cars after as few as 24 learning sessions.

I am telling you these stories to remind us all that GenAI systems do not have general intelligence. They cannot innovate, and they cannot learn.
Alison Gopnik – an eminent researcher in the field of developmental psychology – suggests that, with their ability to access, share, and summarize knowledge, LLMs are better understood as cultural and social technologies, like Wikipedia and like libraries.

Now, if this is the case, how should GenAI be situated within educational institutions? What does teaching look like when these tools are readily available? I think it’s too soon to know the answer to this question, but I have come across two approaches I’d like to share.
The first approach is to set up an environment in which students are required to think on their feet. History professor Steven Mintz has his students lead discussions in class where LLMs are not allowed. Another example is law students who compete in mooting competitions. Private study, practice, peer review, and coaching are all necessary for exceptional public performance and for being ready to handle a curveball question from a judge.
Generative AI systems have been designed to give us answers. We want to prepare our students so they have the capacity to answer questions for others and for themselves.

But the approach that I’d like us to consider is one that does not require us to maintain a hermetic seal separating GenAI from our students. It is the approach of art schools. Visual arts students are responsible for their own problem formulation, for realizing their work through that problem, and for communicating their process. Their work is also subject to formal critique from both their instructors and their peers.
We don’t go to school to get answers. We go to school to better understand problems.
And at this moment, we have a very real problem on our hands, with GenAI systems threatening to alienate us from our labour and our educational practice. Perhaps the problem betrays the shape of the solution: we need more convivial and social learning in response.

Which brings me, at the very last moment, to the context of librarianship and information literacy.
This is not the first time that people have tried to extract facts from texts and to create a World Brain for the betterment of humanity. Situated after the development of the foundational works of modern Anglo-American library practice, but before the age of information science, was the time of the European Documentalists. If you want to learn more, you can read Alex Wright’s book Cataloguing the World, about Paul Otlet.
This slide is from a presentation I made at a Canadian copyright conference earlier this year. In that presentation, I made the argument that an essential component of the work of librarians is to make and re-make claims of authority and authorship of the materials we hold, preserve, and share. This is work that is now more essential than ever, as GenAi systems threaten to dissolve the connections that we choose to make to each other in text.

I’m looking forward to our conversation about what we will do next. Thank you.

Bibliography
GenAI and Learning
Farrell, Henry, Alison Gopnik, Cosma Shalizi, and James Evans. “Large AI Models Are Cultural and Social Technologies.” Science 387, no. 6739 (2025): 1153–56. https://doi.org/10.1126/science.adt9819.
Alderman, Naomi. “Very very good letter in the FT magazine this morning pinpointing what humans definitely do when we ‘learn’ and why machine LLM ‘learning….’” Post. Bluesky, August 30, 2025. https://bsky.app/profile/naomialderman.bsky.social/post/3lxmccs2xks2o.
Chiang, Ted. “Why A.I. Isn’t Going to Make Art.” The Weekend Essay. The New Yorker, August 31, 2024. https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art.
Crawford, L. E., L. E. Knouse, M. Kent, et al. “Enriched Environment Exposure Accelerates Rodent Driving Skills.” Behavioural Brain Research 378 (January 2020): 112309. https://doi.org/10.1016/j.bbr.2019.112309.
GenAI and Teaching
Williams, Mita. “We Go to School to Better Understand Problems.” Librarian of Things, September 1, 2025. https://librarian.aedileworks.com/2025/09/01/we-go-to-school-to-better-understand-problems.
Mintz, Steven. “AI Killed the Take-Home Essay. COVID Killed Attendance. Now What?” Substack newsletter. Steven Mintz, November 20, 2025. https://stevenmintz.substack.com/p/ai-killed-the-take-home-essay-covid.
White, David. “How ‘Art School’ Teaching Avoids a Losing Battle with Technology.” David White: Digital and Education, February 9, 2024. https://daveowhite.com/nogoodhorse/.
GenAI and Librarianship
Williams, Mita. “Libraries and Large Language Models as Cultural Technologies and Two Kinds of Power.” Librarian of Things, September 20, 2025. https://librarian.aedileworks.com/2025/09/20/libraries-and-large-language-models-as-cultural-technologies-and-two-kinds-of-power/.
Addendum
This talk was supposed to be 5 minutes, but when I checked my timer, I was over by 2 minutes, largely because I wanted to include the fact that we can teach rats how to drive with Froot Loops.
There are two points that I wished I could have included in my talk.
The first regret is that I didn’t provide the why behind talking about shoelaces and rats at the beginning of my talk. Instead, I could only hope that the audience picked up on the fact that living creatures learn much more efficiently than machines, and that AI systems, as designed at present, never learn from their queries. Meanwhile…
While generative AI can do amazing things, it is also perhaps the most wasteful use of a computer ever devised. If you do 1+1 on a calculator, that’s one calculation. If you do 1+1 in generative AI, that is potentially a trillion calculations to get an answer. That consumes a huge amount of chip capacity and electricity.
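The scale of that difference can be sanity-checked with a common rule of thumb: a dense transformer model performs roughly 2 × (parameter count) floating-point operations for every token it generates. A minimal back-of-envelope sketch (the 500-billion-parameter figure is hypothetical, chosen only to illustrate the order of magnitude):

```python
# Back-of-envelope estimate, not a measurement: a dense transformer's
# forward pass costs roughly 2 * n_parameters floating-point operations
# per generated token.

def flops_per_token(n_parameters: float) -> float:
    """Rough forward-pass cost of generating a single token."""
    return 2.0 * n_parameters

# A hypothetical 500-billion-parameter model:
per_token = flops_per_token(500e9)  # 1e12, i.e. about a trillion operations
calculator_ops = 1                  # "1 + 1" on a calculator is one operation
print(per_token / calculator_ops)   # a roughly trillion-fold difference, per token
```

Under that assumption, a single generated token already costs about a trillion operations, which is where figures like the one above come from.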
The second point is one that I didn’t think I could properly do justice to in such a short time, and so I opted to leave it out. And yet, this particular connection was the reason behind how I framed my talk.
In the work above, I wanted to convey that our education exists to help us better understand problems.
And one of the features of Large Language Models — a feature that arguably may not be redeemable — is that they see people as a problem.
I made this connection after watching Dr. Tanksley discuss the harms that algorithmic racism in school-based technologies causes for Black students.
