§1 Management Consulting Firms
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
Will A.I. Become the New McKinsey? by Ted Chiang, The New Yorker, May 4, 2023.
§2 The Apparatus
Not all of the reviews take the bait. Michelle Santiago Cortés identifies the precise point where the book goes off the rails: Chayka interviews the anthropologist Nick Seaver, who tells him that ‘the algorithm is metonymic for companies as a whole…The Facebook algorithm doesn’t exist; Facebook exists. The algorithm is a way of talking about Facebook’s decisions.’ …
… The irony of this critique is that in the book itself, Chayka’s thesis is clearly consistent with Seaver’s point that “The Algorithm” is today used to disguise the larger systems in which social media are embedded. It’s as if it has become impossible to actually read the book, to follow the through-line of the argument, to use the tools of linear conceptual reason that this media technology requires. Instead, everyone already knows what it’s about.
This isn’t a book, in the way we were raised to expect. To an ever-intensifying degree, even books are produced by and for The Apparatus. The current case makes this especially clear: Chayka wrote an essay for The Verge in 2016 about “Airspace” and the sterility of modern aesthetics. It went viral — The Apparatus demanded more. And so this book was produced …
“The Algorithm” is the only critique of “The Algorithm” that “The Algorithm” can produce
Kevin Munger, Never Met a Science, Jan 29, 2024.
§3 Automation
What is AI?
In fact, this is a marketing term. It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical, and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies. It’s also the name of a subfield of computer science concerned with making machines that “think like humans,” but even there it began as a marketing term in the 1950s, coined to attract research funding to the field.
I think that discussions of this technology become much clearer when we replace the term AI with the word “automation”. Then we can ask:
What is being automated?
Who’s automating it and why?
Who benefits from that automation?
How well does the automation work in the use case we’re considering?
Who’s being harmed?
Who has accountability for the functioning of the automated system?
What existing regulations already apply to the activities where the automation is being used?
Opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge”, Emily M. Bender, Oct 1, 2023.
§4 Surveillance Capitalism
Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie….
Surveillance capitalism has some of the features of a zero-sum game. The actual value of the data collected is not clear, but it is definitely an advantage to collect more than your rivals do. Because human beings develop an immune response to new forms of tracking and manipulation, the only way to stay successful is to keep finding novel ways to peer into people’s private lives. And because much of the surveillance economy is funded by speculators, there is an incentive to try flashy things that will capture the speculators’ imagination, and attract their money. This creates a ratcheting effect where the behavior of ever more people is tracked ever more closely, and the collected information retained, in the hopes that further dollars can be squeezed out of it.
Just like industrialized manufacturing changed the relationship between labor and capital, surveillance capitalism is changing the relationship between private citizens and the entities doing the tracking. Our old ideas about individual privacy and consent no longer hold in a world where personal data is harvested on an industrial scale.
Those who benefit from the death of privacy attempt to frame our subjugation in terms of freedom, just like early factory owners talked about the sanctity of contract law. They insisted that a worker should have the right to agree to anything, from sixteen-hour days to unsafe working conditions, as if factory owners and workers were on an equal footing.
Companies that perform surveillance are attempting the same mental trick. They assert that we freely share our data in return for valuable services. But opting out of surveillance capitalism is like opting out of electricity, or cooked foods—you are free to do it in theory. In practice, it will upend your life.
Many of you had to obtain a US visa to attend this conference. The customs service announced yesterday it wants to start asking people for their social media profiles. Imagine trying to attend your next conference without a LinkedIn profile, and explaining to the American authorities why you are so suspiciously off the grid.
The Moral Economy of Tech, Maciej Cegłowski, SASE conference panel, June 26, 2016
§5 Thatcherism
Faced with social structures whose foundations have been eaten away by decades of privatisation and austerity, the political response is to pump money into ‘frontier AI’ while hyping it up as the most awe-inspiring technology since the Manhattan Project. The Prime Minister says he will “harness the incredible potential of AI to transform our hospitals and schools” while ignoring leaking roofs in the NHS and the literally collapsing ceilings in local schools. This focus on the immaterial fantasies of AI is a deliberate diversion. When large language models are touted as passing basic medical exams, it’s because they’ve absorbed answers from across the internet. They are incapable of the embodied understanding and common sense that underpin medicine, education or any other form of care.
One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI’s shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn’t to “support” teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.
AI as Algorithmic Thatcherism, Dan McQuillan, 21 December 2023.
§6 Technofeudalism
To understand where Varoufakis is coming from, you need to go beyond the colloquial meanings of “capitalism” and “feudalism.” Capitalism isn’t just “a system where we buy and sell things.” It’s a system where capital rules the roost: the richest, most powerful people are those who coerce workers into using their capital (factories, tools, vehicles, etc) to create income in the form of profits.
By contrast, a feudal society is one organized around people who own things, charging others to use them to produce goods and services. In a feudal society, the most important form of income isn’t profit, it’s rent. To quote Varoufakis: “rent flows from privileged access to things in fixed supply” (land, fossil fuels, etc). Profit comes from “entrepreneurial people who have invested in things that wouldn’t have otherwise existed.” …
This distinction is subtle, but important: “Profit is vulnerable to market competition, rent is not.” If you have a coffee shop, then every other coffee shop that opens on your block is a competitive threat that could erode your margins. But if you own the building the coffee shop owner rents, then every other coffee shop that opens on the block raises the property values and the amount of rent you can charge…
… Think of Amazon: Varoufakis likens shopping on Amazon to visiting a bustling city center filled with shops run by independent capitalists. However, all of those capitalists are subservient to a feudal lord: Jeff Bezos, who takes 51 cents out of every dollar they bring in, and furthermore gets to decide which products they can sell and how those products must be displayed …
… The postcapitalist, technofeudal world isn’t a world without capitalism, then. It’s a world where capitalists are subservient to feudalists (“cloudalists” in Varoufakis’s thesis), as are the rest of us, the cloud peons, from the social media users and performers who fill the technofeudalists’ siloes with “content” to the regular users whose media diet is dictated by the cloudalists’ recommendation systems…
Pluralistic: Yanis Varoufakis’s “Technofeudalism: What Killed Capitalism?” (28 Sep 2023), Cory Doctorow.
§7 Exploiting Labour
Many of these systems are developed by multinational corporations located in Silicon Valley, which have been consolidating power at a scale that, journalist Gideon Lewis-Kraus notes, is likely unprecedented in human history. They are striving to create autonomous systems that can one day perform all of the tasks that people can do and more, without the required salaries, benefits or other costs associated with employing humans. While this corporate-executive utopia is far from reality, the march to attempt its realization has created a global underclass, performing what anthropologist Mary L. Gray and computational social scientist Siddharth Suri call ghost work: the downplayed human labor driving “AI”.
Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, under pressure from venture capitalists to incorporate so-called AI into their products. In fact, London-based venture capital firm MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way.
Far from the sophisticated, sentient machines portrayed in media and pop culture, so-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions. And unlike the “AI researchers” paid six-figure salaries in Silicon Valley corporations, these exploited workers are often recruited out of impoverished populations and paid as little as $1.46/hour after tax. Yet despite this, labor exploitation is not central to the discourse surrounding the ethical development and deployment of AI systems.
The Exploited Labor Behind Artificial Intelligence by Adrienne Williams, Milagros Miceli and Timnit Gebru, NOEMA, October 13, 2022.
§8 Tech Bubbles
Contrast that bubble with, say, cryptocurrency/NFTs, or the complex financial derivatives that led up to the 2008 financial crisis. Those bubbles left behind very little reusable residue. The expensively retrained physicists whom the finance sector taught to generate wildly defective risk-hedging algorithms were not able to apply that knowledge to create successor algorithms that were useful. The fraud of the cryptocurrency bubble was far more pervasive than the fraud in the dotcom bubble, so much so that without the fraud, there’s almost nothing left. A few programmers were trained in Rust, a very secure programming language that is broadly applicable elsewhere. But otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.
AI is a bubble, and it’s full of fraud, but that doesn’t automatically mean there’ll be nothing of value left behind when the bubble bursts. WorldCom was a gigantic fraud and it kicked off a fiber-optic bubble, but when WorldCom cratered, it left behind a lot of fiber that’s either in use today or waiting to be lit up. On balance, the world would have been better off without the WorldCom fraud, but at least something could be salvaged from the wreckage…
Cory Doctorow: What Kind of Bubble is AI?, Locus Magazine, December 18, 2023
§9 Corporations
12 Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018) (holding that only a human can be an author); U.S. COPYRIGHT OFFICE, COMPENDIUM OF U.S. COPYRIGHT OFFICE PRACTICES § 306 (3d ed. 2021) (“The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being.”); Pamela Samuelson, Allocating Ownership in Computer-Generated Works, 47 U. Pitt. L. Rev. 1185, 1199, 1192 (1986) (“it is still fair to say that it was not within Congress’ contemplation to grant intellectual property rights to machines.”). The current law is a bit hard to square with the law’s treatment of corporations, which Charles Stross has called the original AIs, as authors in their own right. Charles Stross, Keynote Address at the 34th Chaos Communication Congress (Dec. 2017), http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html (“I’m talking about the very old, very slow AIs we call corporations, of course.”). See also U.S. COPYRIGHT OFFICE, CIRCULAR 30: WORKS MADE FOR HIRE 4 (explaining that as defined in 17 U.S.C. § 101 “[i]f a work is a work made for hire, the employer or the party that specially ordered or commissioned that work is the author of that work.”). If corporations can be not just owners but authors, it’s not clear why other artificial entities couldn’t be.
Lemley, Mark A., How Generative AI Turns Copyright Upside Down (July 21, 2023). Available at SSRN: https://ssrn.com/abstract=4517702 or http://dx.doi.org/10.2139/ssrn.4517702