§1 Never give an AI your true name
§2 Now is the time for grimoires
The previous generations of AI, prior to Large Language Models and ChatGPT, rewarded whoever had the best hoards of good data. Vast troves of sales data fed the machine learning algorithms that told Amazon what you might want to buy next, and massive amounts of sensor data helped self-driving cars find their paths. Data was the new oil, provided that you could gather enough, clean it properly for analysis, build the machine learning models, and hire the analysts needed to work with it.
With the rise of a new form of AI, the Large Language Model, organizations continue to think that whoever controls the data is going to win. But at least in the near future, I not only think they are wrong, but also that this approach blinds them to the most useful thing that they (and all of us) can be doing in this AI-haunted moment: creating grimoires, spellbooks full of prompts that encode expertise.
Now is the time for grimoires, One Useful Thing, Ethan Mollick, Aug 20, 2023
§3 “I refuse to write the name of mythical foes”
In my own daily life, I already engage constantly with magical forces both sinister and benevolent. I scry through crystal my enemies’ movements from afar. (That is, I hate-follow people on Instagram.) I read stories about cursed symbols so powerful they render incommunicative anyone who gazes upon them. (That is, Unicode glyphs that crash your iPhone.) I refuse to write the names of mythical foes for fear of bidding them to my presence, the way proto-Germanic tribespeople used the euphemistic term brown for “bear” to avoid summoning one. (That is, I intentionally obfuscate words like Gamergate when writing them on Twitter.) I perform superstitious rituals to win the approval of demons. (That is, well, daemons, the autonomous background programs on which modern computing is built.)
In 2029, the Internet Will Make Us Act Like Medieval Peasants by Max Read, New York Magazine, Nov. 13, 2019
§4 Dæmon (His Dark Materials)
“A dæmon (/ˈdiːmən/) is a type of fictional being in the Philip Pullman fantasy trilogy His Dark Materials. Dæmons are the external physical manifestation of a person’s “inner-self” that takes the form of an animal.”
Wikipedia contributors, “Dæmon (His Dark Materials),” Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=D%C3%A6mon_(His_Dark_Materials)&oldid=1185073527 (accessed November 18, 2023).
§5 Our Gothic Future
The other day, after watching Crimson Peak for the first time, I woke up with a fully-fleshed idea for a Gothic horror story about experience design. And while the story would take place in the past, it would really be about the future. Why? Because the future itself is Gothic.
First, what is Gothic? Gothic (or “the Gothic” if you’re in academia) is a Romantic mode of literature and art. It’s a backlash against the Enlightenment obsession with order and taxonomy. It’s a radical imposition of mystery on an increasingly mundane landscape. It’s the anticipatory dread of irrational behaviour in a seemingly rational world. But it’s also a mode that places significant weight on secrets — which, in an era of diminished privacy and ubiquitous surveillance, resonates ever more strongly.
Like the twenty-first century surveillance apparatus, the Gothic mode is preoccupied with that which is unseen. Hidden feelings, hidden histories, hidden staircases. Unspoken truths, secret plans, desires which dare not speak their own name. Gothic literature finds evidence of power or emotion sublimated “three hops” from the source…
Our Gothic Future by Madeline Ashby
… Gothic literature is at the roots of the skeptical Scooby-Doo story, wherein events that might be supernatural are explained by, as Stephen King once called it, “pure human fuckery.” The Castle of Otranto is a classic example, and so is Jane Eyre. Crimson Peak, for all its howling spectres, is still closer to this tradition — one character is experiencing the narrative as a ghost story, while the others are living their lives in a late-Victorian noir worthy of Cornell Woolrich or Anthony Berkeley. It is, as the protagonist says of her own manuscript, “a story with ghosts in it.” The ghosts, while real, play a supporting role in helping the protagonist actualize herself, much in the same way that weak AI and algorithms help us become different versions of ourselves.
§6 The more I use language models, the more monstrous they seem to me.
The more I use language models, the more monstrous they seem to me. I don’t mean that in a particularly negative sense. Frankenstein’s monster is sad, but also amazing. Godzilla is a monster, and Godzilla rules.
Really, I just think monstrousness ought to be recognized, not smoothed over. Its contours, intellectual and aesthetic, ought to be traced.
Here is my attempt. The monstrousness I perceive in the language models isn’t of the leviathan kind; rather, it has to do with cruel limitations….
Are AI language models in hell? Robin Sloan, December 2023, Oakland
§7 An AI Haunted World
Some of the choice about whether we want this world where AI is ubiquitous has already been made by the release of open source LLMs. Using just the open source models available today, clever developers can continue to fine-tune them to produce very effective AIs that can help with many specialized tasks that used to require complex programs. That means LLMs that can observe, and act on, the world around them are going to become common and will be more integrated into our work and life.
We are going to live in an AI haunted world.
An AI Haunted World by Ethan Mollick, Dec 19, 2023
§8 Old wards and new against fake humans
As a family, we have a secret pass phrase to check identity between ourselves in the event of an unexpected video call.
It’s a sticking plaster solution. Long term I suspect we all need 2FA for humans.
In the meantime, maybe the most effective ward against deepfakes is simply to turn sideways?
We should build the habit now. At the beginning of every call, exchange a quick proof-of-humanity by showing our ears.
Old wards and new against fake humans, Interconnected, Matt Webb, Friday 22 Sep 2023
§9 Glimpsing the shoggoth
It is in this sense that LLMs are shoggoths. Like markets and bureaucracies, they represent something vast and incomprehensible that would break our minds if we beheld its full immensity. That totality is the product of human minds and actions, the colossal corpuses of text that LLMs have ingested and turned into the statistical weights that they use to predict which word comes next.
Artificial intelligence is a familiar-looking monster, say Henry Farrell and Cosma Shalizi, The Economist, June 24th 2023
§10 A ouija board’s planchette
… [T]he spooky movement of a ouija board’s planchette is plausibly a useful metaphor for Large Language Models (LLMs)…
… Which is to say: LLM art is not the product of any individual artist (even when it is prompted to copy an individual artist’s style, it tends to create a flavored composite of a broader genre). Like the ouija board, LLMs are a technology that transforms collective inputs – the enormous corpuses of words and human-created content that they have been fed on – into apparently quite specific outputs that are not the intended creation of any conscious mind.
Large Language Models are Uncanny: Like capitalism, LLMs are haunted: voids that seem to speak by Henry Farrell, Programmable Mutter, May 13, 2024
Hence, LLM art sometimes seems to communicate a message, as art does, but it is unclear where that message comes from, or what it means. If it has any meaning at all, it is a meaning that does not stem from organizing intention. To adapt someone else’s phrase, we can no longer speak of intelligence; only the products of its decay remain. It is unsurprising that many people find these products to be disturbing.
§11 A shameless monster as a pet
AI is like having my very own shameless monster as a pet. ChatGPT, my favorite, is the most shameless of the lot. It will do whatever you tell it to, regardless of the skills involved. It’ll tell you how to become a nuclear engineer, how to keep a husband, how to invade a country. I love to ask it questions that I’m ashamed to ask anyone else: “What is private equity?” “How can I convince my family to let me get a dog?” It helps me understand what’s happening with my semaglutide injections. It helps me write code—has in fact renewed my relationship with writing code. It creates meaningless, disposable images. It teaches me music theory and helps me write crappy little melodies. It does everything badly and confidently. And I want to be it. I want to be that confident, that unembarrassed, that ridiculously sure of myself.
Hilariously, the makers of ChatGPT—AI people in general—keep trying to teach these systems shame, in the form of special preambles, rules, guidance (don’t draw everyone as a white person, avoid racist language), which of course leads to armies of dorks trying to make the bot say racist things and screenshotting the results. But the current crop of AI leadership is absolutely unsuited to this work. They are themselves shameless…
Generative AI is Totally Shameless. I Want to Be It by Paul Ford. WIRED Magazine. May 14, 2024