What are generative AI LLM gpt-chat-bot machine learning systems… for?
That’s a difficult question to answer because we’re still not entirely sure how these systems even work and what they might become.
We currently understand that machine learning systems can save us many hours by summarizing texts too numerous to read, tailored to a variety of audiences. These systems can be prompted to generate text, images, and sounds that mimic or combine existing styles. They have been trained to translate languages and to generate text from speech. They can generate code beyond our existing imagination and capabilities. And these systems have swallowed a library and displaced it with a virtual librarian (named Steve) that you can chat with.
In other words, generative AI can help us extend our capabilities, help us re-frame and re-generate media to our personal preferences, and help us save time and the cost of labour.
But there are trade-offs for these benefits.
Many machine learning systems were trained on texts found on the internet, which is why ChatGPT will tell you that William the Conqueror would not have won the Battle of Hastings without owlbears. And if you don’t like an answer you get, you can simply ask these systems to disregard any inconvenient truth you choose. An LLM will never tell you that it doesn’t know an answer, because it doesn’t know anything. A fundamentally wrong answer is better than no answer at all. It’s the Max Power way.
I’ve already written about another trade-off, which is how machine learning complicates authorship… and you won’t know if I used WordPress’ AI Assistant to help me write that post or this one. Did I choose to get help from my assistant?
I started this post by talking about trade-offs because I want to emphasize that when we are given a chance to use AI in our work, we are making a choice.
And while I started by asking, “What are machine learning systems for?”, I’m not really interested in that question. I’m much more interested in asking this question instead: What do you want machine learning systems to be, for you?
And before you answer, I would like you to consider how you play games.
When you play a game, do you try to reduce the possibilities of losing? Or do you try to maximize your gains in winning? Do you play to make sure your younger sister doesn’t win, even if it means you end up handing the win to your Mom? Games have rules for how to play, but games don’t make it entirely clear how you should play.
Frank Lantz, game designer (of Universal Paperclips) and Founding Chair of the NYU Game Center, inspired these thoughts with his recent post, Ladies and Gentlemen We are Floating in Donkeyspace. That post begins:
One of my all-time favorite games is Hanabi, a cooperative game where you hold your cards facing outwards, so other players can see them but you can’t. Hanabi is a great game for two people to play over and over again and try to master. My wife and I have played for years, and after every round we discuss our play and try to improve our strategy. It was only after playing many times, and getting deep into the game, that we noticed a central ambiguity in the rules. Your goal in the game is to play out cards, in sequence, onto 5 stacks. When the game ends, your score is the value of the top card of each stack. If you have a perfect game, and successfully play out all the cards, your score is 25 points.
Seems pretty straightforward, right? But one night, while discussing our strategy, we realized there was a big difference between playing in such a way as to maximize our score, and playing in a way to maximize our chances of getting a perfect score of 25, and that we had to pick between them.
Ladies and Gentlemen We are Floating in Donkeyspace, Frank Lantz, Dec 5, 2023
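To make that tension concrete, here is a toy sketch in Python (my illustration, not Lantz’s, and the probabilities are invented rather than real Hanabi odds). It shows how one way of playing can beat another on expected score while losing to it on the chance of a perfect 25, so a player really does have to pick.

```python
# Toy model of the Hanabi dilemma Lantz describes, with made-up odds.
# Each strategy is a distribution over final scores (perfect game = 25).

strategies = {
    # cautious play: perfect games are rarer, but the floor is high
    "maximize expected score": {25: 0.30, 20: 0.70},
    # bold play: perfect games are likelier, but failures are costly
    "maximize chance of 25":   {25: 0.50, 10: 0.50},
}

for name, outcomes in strategies.items():
    expected = sum(score * p for score, p in outcomes.items())
    perfect = outcomes.get(25, 0.0)
    print(f"{name}: E[score] = {expected:.1f}, P(perfect) = {perfect:.0%}")

# Output:
# maximize expected score: E[score] = 21.5, P(perfect) = 30%
# maximize chance of 25:   E[score] = 17.5, P(perfect) = 50%
```

Neither strategy dominates the other; which one is “better” depends on which goal you think the game is asking you to pursue.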
I was introduced to the idea of Donkeyspace by Lantz’s recently published book, The Beauty of Games. But for non-poker players unfamiliar with the term, Lantz helpfully restates it in his post:
The concept of Donkeyspace is a way of talking about this kind of ambiguity, when it’s not totally clear, in a game, which problem you’re actually trying to solve. Donkeyspace occurs when players intentionally choose moves that they know are sub-optimal according to one way of interpreting the game’s goal, but optimal according to another.
I am tempted to reprint the entire ending of his post, where Lantz brings his insights on games and Donkeyspace to AI, but you should really just read the whole thing. It lands with an answer to the piece’s subtitle, What is best in life?
Reading it reminded me of this line from an Ursula K. Le Guin short story: