I read quite a few technology-themed newsletters. Many of their authors self-disclose that they generated an idea, an accompanying illustration, or summary text using one of the many generative AI tools currently out there. As a reader, I appreciate this form of self-disclosure. Unlike scholarship, newsletters and blog posts don’t generally come with a methods section.
Helen Beetham, in her newsletter, recommends:
Finally, I often go back to the guidelines for writers produced by the Nature group of academic journals back in January and explored in my very first Substack post on language, language models and writing. They follow two basic principles:
1. Generative AI tools are not authors and should not be cited or credited as such
2. Use of such tools as part of a research (or writing) method should be reported in that context (i.e. methodologically)
I have to agree. Most citation style guides recommend against giving authorship to generative AI tools (otherwise you end up with a mess like this).
What is curious to me is the number of style guides that recommend we treat generative AI like ‘personal communication’. Evidently, this is because “content from AI tools like ChatGPT is usually nonrecoverable, so it cannot be retrieved or linked in your citation.” According to this LibGuide from Dalhousie University, the APA, The Chicago Manual of Style, the Vancouver Style Guide, and the CSE all recommend using personal communication as the template for citing the use of this particular type of software.
This isn’t a post about citation styles. What I want to better understand is how generative AI has come to be understood as something that one chats with. You can give these systems tasks as you would an intern. You can ask them for help as if you were asking a tutor. It is suggested that one day, you could ask questions of these systems as if you were consulting a trusted adviser, like a lawyer or doctor.
This suggests that the alternative to generative AI is talking to people that you trust or have reason to trust.
While a library’s reference desk provides a private place where a person can ask questions without fear that their ignorance will be on display or disclosed to their instructor, the number of interactions at library reference desks has plummeted over the last twenty years. I believe this model needs repair.
I’ve been collecting other examples of designs that attempt to serve as social structures that make asking for help easier for an individual. These include:
- Superbetter: this game encourages players to recruit allies to help them in the challenges and in doing so, helps strengthen social connectedness
- The Hologram: “A viral four-person health monitoring and diagnostic system practiced from couches all over the world. Three non-expert participants create a three-dimensional “hologram” of a fourth participant’s physical, psychological and social health, and each becomes the focus of three other people’s care in an expanding network.”
- Unconferences: Remember these? These events were meet-ups designed so that people could discuss and answer thorny questions together, like “How would you digitize a steam engine?“
And let’s not forget that once upon a time, people would write blog posts and tweets with requests for help or information and in response, more often than not, there would be someone on the other end with either words of assistance or assurance. Which is why I wrote this post.