I have resisted writing about ChatGPT because I don’t have a take that hasn’t already been addressed by others. [See: On Bullshit, And AI-Generated Prose; ChatGPT Is Dumber Than You Think; ChatGPT Should Not Exist; and On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜]
But I did want to capture one particular idea and share it because I believe it will likely shape the work of librarians and other knowledge and memory workers in 2023: people are actively moving away from Google and actively looking for authentic human results.
The phrase authentic human results that I used above comes from the headline of a Boing Boing post from January of 2022: “Tip: add ‘reddit’ to search queries to get authentic human results untainted by SEO.” Some months later, in the middle of this year, Mashable, The Verge, and The New York Times all discovered that the kids would rather search TikTok than Google for their information needs. Even Google knows this:
TikTok is the new Google. Or so some people say. As TikTok grows, Google, in particular, has begun to describe the app as a whole new way of creating and consuming the internet and maybe an existential threat to its own search engine. Prabhakar Raghavan, the SVP of search at Google, said in July that “something like almost 40 percent of young people, when they are looking for a place for lunch, they don’t go to Google Maps or Search, they go to TikTok or Instagram.” More recently, The New York Times and others have talked to young internet users and found that, in fact, they’re turning to TikTok for more and more of what you might call Google-able things.

I tried replacing Google with TikTok, and it worked better than I thought, by David Pierce, The Verge, Sep 21, 2022
I lost the link, but someone on Mastodon lamented that ChatGPT is going to further rot internet search, as it is clear that those who engage in the dark arts of SEO will use ChatGPT to effortlessly generate text for advertising-based monetization. Clickbait has never been easier to generate.
Stack Overflow has already felt this erosion, and in December 2022 it temporarily banned ChatGPT-generated answers:

Stack Overflow is a community built upon trust. The community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers. The system relies on users to verify and validate contributions by other users with the tools we offer, including responsible use of upvotes and downvotes. Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment. This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.
A friend of mine, a technologist, sent me an email that included an essay on assisted suicide generated by ChatGPT and asked what I thought this technology might mean for education and libraries. In response, I pointed out that some of the references in the paper were not real; they had been generated to look like real papers:
Russell, J., & Hendrick, J. (2015). The social context of assisted suicide: A review of the literature. Journal of Medical Ethics, 41(6), 449-456.
Goyal, P., & Chan, C. (2014). Attitudes and experiences of US oncologists regarding assisted suicide. Journal of Clinical Oncology, 32(17), 1853-1858.
Gostin, L. O., & Hodge, J. G. (2012). Legalising assisted suicide: The public health dimensions. British Medical Journal, 344, e2123.
Van der Heide, A., Onwuteaka-Philipsen, B. D., & Van der Maas, P. J. (2011). Medical end-of-life practices under the euthanasia law in Belgium. New England Journal of Medicine, 364(2), 191-194.
If I were an expert in medical ethics, I wouldn’t have had to search these citations to confirm which of these papers actually exist or whether these authors are recognized experts in the field. And this is the real danger of AI-generated text: these systems can and will impersonate expertise well enough to fool those who don’t know the differences that make a difference in a subject.
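For what it’s worth, the spot-checking I did by hand can be partly mechanized. Here is a minimal sketch, assuming Python and Crossref’s public REST API (api.crossref.org, with its real query.bibliographic parameter); the title-matching heuristic is my own simplification, not an established method, and a miss or a match from it is a prompt for human judgment, not a verdict:

```python
# Sketch: spot-check a citation's title against Crossref's public API.
# The endpoint and "query.bibliographic" parameter are real Crossref
# features; the loose title-matching heuristic below is an assumption
# of this sketch and would need tuning in practice.
import json
import re
import urllib.parse
import urllib.request


def normalize(title: str) -> str:
    """Lowercase a title and strip punctuation so titles compare loosely."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def citation_seems_real(title: str, timeout: float = 10.0) -> bool:
    """Ask Crossref for the closest bibliographic match to a cited title.

    Returns True only if the top hit's title matches the query closely.
    False means "no close match found", which is a reason to look harder,
    not proof that the citation is fabricated.
    """
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": "1"})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        items = json.load(resp)["message"]["items"]
    if not items:
        return False
    top_title = items[0].get("title", [""])[0]
    return normalize(top_title) == normalize(title)
```

Even if something like this worked perfectly, it only tells you whether a paper with that title exists somewhere, not whether the paper says what the generated text claims it says. That second check still requires a human who knows the field.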
I find that I have already lost so much trust in the information that I find online…
… I search the Apple App Store for a way to play backgammon asynchronously with my husband, and I find that I don’t have enough information to determine which games were designed to capture and share usage data with third-party advertisers…
… I search for hotels in New York City, but I realize that I have no faith in the ratings supposedly given by people who have stayed there before…
I realize that in both of these situations my response was the same. I wondered, “Who can I ask who would know the answer to this question?”
I am looking for Authentic Human Results. And I want to be seen as a person that people can turn to for Authentic Human Results.
Let us collectively respond to the exhortation of Robin Sloan:
Let 2023 be a year of experimentation and invention!
Let it come from the edges, the margins, the provinces, the marshes!
THIS MEANS YOU!
I am thinking specifically of experimentation around “ways of relating online”. I’ve used that phrase before, and I acknowledge it might be a bit obscure … but, for me, it captures the rich overlap of publishing and networking, media and conviviality. It’s this domain that was so decisively captured in the 2010s, and it’s this domain that is newly up for grabs.
This is our work for 2023.