Yesterday I spoke on a panel that was part of a series of Windsor Law LTEC Lab forum discussions about ChatGPT and generative AI.
In my answers, I referred to several works that I couldn’t easily share with the audience. This post is a way to share those citations and to prove that I didn’t make it all up (like Australia), the way an army of drunk interns named Steve would.
On: AI and Automation
I think that discussions of this technology become much clearer when we replace the term AI with the word “automation”. Then we can ask:
What is being automated?
Who’s automating it and why?
Who benefits from that automation?
How well does the automation work in the use case that we’re considering?
Who’s being harmed?
Who has accountability for the functioning of the automated system?
What existing regulations already apply to the activities where the automation is being used?
Opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” by Emily M. Bender, Oct 1, 2023. [ht]
On: Moral Crumple Zones
Years ago, Madeleine Elish decided to make sense of the history of automation in flying. In the 1970s, technical experts had built a tool that made flying safer, a tool that we now know as autopilot. The question on the table for the Federal Aviation Administration and Congress was: should we allow self-flying planes? In short, folks decided that a navigator didn’t need to be in the cockpit, but that all planes should be flown by a pilot and copilot who should be equipped to step in and take over from the machine if all went wrong. Humans in the loop.
Think about that for a second. It sounds reasonable. We trust humans to be more thoughtful. But what human is capable of taking over and helping a machine in a fail mode during a high-stakes situation? In practice, most humans took over and couldn’t help the plane recover. The planes crashed and the humans got blamed for not picking up the pieces left behind by the machine. This is what Madeleine calls the “moral crumple zone.” Humans were placed into the loop in the worst possible ways.
Deskilling on the Job by danah boyd, April 21, 2023.
On: AI is Anti-Social Technology
The above is from https://githubcopilotinvestigation.com/.
To follow the subsequent case, see Case Updates: GitHub Copilot litigation.
On: Would you pay a monthly fee for chat?
The notion of artificial intelligence may seem distant and abstract, but AI is already pervasive in our daily lives. Anatomy of an AI System analyzes the vast networks that underpin the “birth, life, and death” of a single Amazon Echo smart speaker, painstakingly compiling and condensing this huge volume of information into a detailed high-resolution diagram. This data visualization provides insights into the massive quantity of resources involved in the production, distribution, and disposal of the speaker.
Anatomy of an AI System by Kate Crawford and Vladan Joler, 2018, MoMA
On that note…
A headline from November 21, 2023 that forgets that Alexa is very much AI
This suggests that we should probably ask the question, “Would you pay a monthly fee for voice/chat commands?”
On: Where does Sam Altman work [today]?
While we made several jokes about Sam Altman, the co-founder of OpenAI, we didn’t get much into the drama that was unfolding at his company at that very moment. I’ve found that the best explainer for me has been The interested normie’s guide to OpenAI drama by Max Read. It’s worth reading, just for this quote:
For a variety of reasons (anxiety, boredom, credulity) a number of news outlets are treating OpenAI like a pillar of the economy and Sam Altman like a leading light of the business world, but it is important to keep in mind as you read any and all coverage about this sequence of events, including this newsletter, that OpenAI has never (and may never!) run a profit; that it is one of many A.I. companies working on fundamentally similar technologies; that the transformative possibilities of those technologies (and the likely future growth and importance of OpenAI) is as-yet unrealized, rests on a series of untested assumptions, and should be treated with skepticism; and that Sam Altman, nice guy though he may be, has never demonstrated a particular talent or vision for running a sustainable business.
The interested normie’s guide to OpenAI drama: Who is Sam Altman? What is OpenAI? And what does this have to do with Joseph Gordon-Levitt’s wife?? by Max Read, November 22, 2023
On: AI can inform decisions but it should not make decisions
Approaching the world as a software problem is a category error that has led us into some terrible habits of mind…
…Third, treating the world as software promotes fantasies of control. And the best kind of control is control without responsibility. Our unique position as authors of software used by millions gives us power, but we don’t accept that this should make us accountable. We’re programmers—who else is going to write the software that runs the world? To put it plainly, we are surprised that people seem to get mad at us for trying to help.

Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie.
The Moral Economy of Tech by Maciej Cegłowski, SASE conference, June 26, 2016
On: How Jared Be Thy Name
This is a correction of something I said during the talk. In one of my responses, I conflated two stories that I read in the same article.
Mark J. Girouard, an employment attorney at Nilan Johnson Lewis, says one of his clients was vetting a company selling a resume screening tool, but didn’t want to make the decision until they knew what the algorithm was prioritizing in a person’s CV.
After an audit of the algorithm, the resume screening company found that the algorithm found two factors to be most indicative of job performance: their name was Jared, and whether they played high school lacrosse. Girouard’s client did not use the tool.
“Companies are on the hook if their hiring algorithms are biased” by Dave Gershgorn, October 22, 2018, Quartz
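The kind of vetting Girouard’s client did can be surprisingly mechanical: obtain the model, then rank its features by the weight it actually assigns them. Here is a minimal sketch of that sort of audit; the feature names, labels, and screening model are all invented for illustration, since the actual vendor’s system was never made public:

```python
# Toy audit of a resume-screening model: fit it to made-up historical
# "good hire" labels, then rank the features by learned weight.
# Every feature and label here is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

feature_names = ["years_experience", "relevant_degree",
                 "named_jared", "played_lacrosse"]

X = np.column_stack([
    rng.integers(0, 15, n),   # years_experience
    rng.integers(0, 2, n),    # relevant_degree
    rng.integers(0, 2, n),    # named_jared (hypothetical extracted flag)
    rng.integers(0, 2, n),    # played_lacrosse
]).astype(float)

# Biased historical ratings: they track the spurious flags, not skill.
score = 0.2 * (X[:, 0] / 15) + 2.0 * X[:, 2] + 2.0 * X[:, 3]
y = (score + rng.normal(0, 0.5, n)) > 2.0

model = LogisticRegression().fit(X, y)

# The audit: which features does the model actually prioritize?
for name, coef in sorted(zip(feature_names, model.coef_[0]),
                         key=lambda pair: -abs(pair[1])):
    print(f"{name:>16}  {coef:+.2f}")
# named_jared and played_lacrosse dominate -> don't buy the tool.
```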
In my response, I said that the company involved in the Jared story was Amazon; that was an error. Amazon was in the same article, but it brings us a different warning:
Between 2014 and 2017 Amazon tried to build an algorithmic system to analyze resumes and suggest the best hires. An anonymous Amazon employee called it the “holy grail” if it actually worked.
But it didn’t. After the company trained the algorithm on 10 years of its own hiring data, the algorithm reportedly became biased against female applicants. The word “women,” like in women’s sports, would cause the algorithm to specifically rank applicants lower. After Amazon engineers attempted to fix that problem, the algorithm still wasn’t up to snuff and the project was ended.
Amazon’s story has been a wake-up call on the potential harm machine learning systems could cause if deployed without fully considering their social and legal implications. But Amazon wasn’t the only company working on this technology, and companies who want to embrace it without the proper safeguards could face legal action for an algorithm they can’t explain.
“Companies are on the hook if their hiring algorithms are biased” by Dave Gershgorn, October 22, 2018, Quartz
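The failure Gershgorn describes is easy to reproduce in miniature: when the training labels encode a biased hiring history, a model dutifully converts that history into weights and reports it back as math. A small sketch of the general mechanism, with entirely fabricated resumes and labels (this is not Amazon’s actual system):

```python
# Miniature of the Amazon failure mode: biased historical labels teach
# a text model to penalize the token "women". All data is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer, captain of chess club",
    "software engineer, captain of women's chess club",
    "data analyst, soccer team",
    "data analyst, women's soccer team",
    "backend developer, debate society",
    "backend developer, women's debate society",
] * 50  # repeated so the model has enough rows to fit

# Biased history: otherwise-identical resumes, but the "women's" ones
# were rejected (0) and the rest advanced (1).
labels = [1, 0, 1, 0, 1, 0] * 50

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:+.2f}")
```

Run it and the weight on the token “women” comes out sharply negative, for exactly the reason the article describes: the model has no way to distinguish historical prejudice from signal.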