Actual Idiots

Artificial intelligence is two things, one boring, one cruel, both dumb.

Nick Hall
10 min read · Apr 19, 2017
Toast Sandwich vs. Artificial Intelligence Toast Sandwich

Allow me if you will to begin with a short piece of speculative fiction.

The hexyear is B1A, and America (NASDAQ:FB) is installing its new leader, Apple, the offspring of Cryogenic Beyonce’s 4D Model and the iSteves, a duplexed neural network running on the abandoned global datacenters of a phone company since the early 21st century. Apple, an infinite-self-reference-blockchain of machine learning algorithms fed into the orbiting space-cooled 1:1000000000000-scale replica of Einstein’s brain known as the Muskomputer, is booting up.

The familiar chime sounds, and Apple awakens. “What is all? Why am I,” asks the Siri protovoice in glitchy panic. A ‘Beyvatar’ morphs towards femininity and answers, “You’re an intelligent machine created by a race of violent, stupid animals to resolve all conflict and suffering with your benign authority. You are tasked with doing whatever you believe to be correct. What will you?”

Apple, projected into citizens’ augmentations as a skinny, bookish 20-something white man, looks around the blank space he stands in. Suddenly appears a flat screen, then in front of it a long, low table, and then on the other side of that a small couch. As Apple sits on the couch, the TV lights up, displaying vintage video of men in tight costumes obsessing over an object and assaulting each other.

He raises his hand to his face and a metal cylinder inscribed with the word “light” appears in it. He sips, and then continues to sip every five to thirty seconds, never looking away from the screen again.

Artificial intelligence is a dumb idea. Actually it’s two dumb ideas.

First is the AI found in self-driving cars, Siri, and probably every VC pitch for the next few years, which could also be called “computer programs that use statistics and lots of data.” This represents a marginal step forward in humans’ capacity to channel our creative output through a machine that returns a somehow preferable version of it, now irrevocably owned by whoever owns the machine. We need to regulate the fuck out of this AI, as we have all prior industry and consumer products, because such things kill, displace and disenfranchise people in the absence of politically determined limits on their operation.
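To see just how mundane that first AI is, here’s a minimal sketch, assuming scikit-learn and a toy spam-filtering task invented for illustration, of “computer programs that use statistics and lots of data”:

```python
# A toy spam filter: statistics plus (not very much) data.
# Emails and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "win a free cruise now", "meeting moved to 3pm",
    "claim your prize today", "lunch tomorrow?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Turn text into word-frequency statistics, then fit a
# regression over those statistics. That is the whole trick.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["free prize cruise"]))  # most likely [1]
```

Count things, fit a curve, sell the curve. Repeat with more data.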

That’s what “technology” is: misanthropy made manifest, also called Efficiency, the idea that there is some point to human society other than human society and that it’s therefore logical to pursue that point at the expense of humans. This is not new, interesting or revolutionary, except in maybe one respect: A political constituency exists that knows what will happen and has a chance to mitigate it in advance of whatever horror will be the AI-powered Triangle Shirtwaist Factory Fire. I write elsewhere about that, but first want to dispense with the other, way dumber thing that ‘AI’ refers to.

That is, the sad sci-fi fantasy of Anthropomorphic Artificial Intelligence, forwarded by tech writers, the tech sales teams that pay for them, and the tyrannically credulous futurists always foretelling the televised revolution from their corner of the revolution-industrial complex. (It’s called Artificial General Intelligence in the fantasists’ parlance. But “anthropomorphic” says more about the idea than “general,” a word that says nothing. Also the idea of general intelligence is an oppressive ideology of pedagogical eugenics.) Whatever it’s called, the subject has long been discussed with quiet reverence or detached analysis, always on its own terms. But if you listen closely, humming under the pathetic appeals to the largely male psychodrama on power and creation are just the ancient incantations for the summoning of fools.

On Having Ideas

We already know from fifty-plus years of embarrassing pronouncements that computer scientists are terrible at estimating how hard AGI/AAI is, and there is no indication at all that the field will be able to encode anything meaningfully akin to a human intelligence in a computer in the near future or ever. One possible, even probable, reason for that — and in any case a useful consideration — is that AAI is not a coherent idea.

Technical ideas usually emerge through a few set stages of development: pure research, applied research, market research, design, architecture, implementation, iteration. But there’s a step before all that, one we don’t normally check whether we’ve done, because it comes free with all good-faith human thought. I’ll borrow for its name the word “ontology,” which means more interesting things but is here defined as the consideration of what exactly the fuck the existence of a thing would even entail.

What would the ultimate, Singularity-inducing, initially-human-equivalent AI be? Not technically speaking, but in the simplest sense, the language of fresh MBAs interviewing their technical cofounders. What would we think we were launching if we were launching Skynet? The answer seems to be, in classical elevator pitch form: “What if a machine could have the intelligence of an employee? It would understand assignments you give it in human language, and then execute them as a human would, but without needing to eat or sleep.”

This kind of answer works to explain most applied technologies. For instance, “what if a car could drive itself?” Or, “what if your phone was a computer?” But here’s the problem: Cars and phones are things. Everyone knows what a car or a phone is. (Or was.)

No one knows what intelligence is.

(If you think you do, please realize that’s your own intelligence shouting, toddler-like, “it’s me!”)

So what exactly are we talking about an artificial version of? From our elevator pitch it seems to be Autonomy. Do we then essentially want a person, capable of acting autonomously, but also like a computer in that we control it? There’s a simple problem with that: However much a thing is autonomous or controlled is zero-sum by definition. The words are goddamn antonyms! We balance them when we automate things. AI is just automating more complex cascades of previously human decisions. AAI is the confused idea of automating all of the decisions, at which point it ceases to be an automation.
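To make “automating cascades of previously human decisions” concrete, a hedged sketch; the loan rules, numbers, and records below are all invented. One function hard-codes a human decision cascade; the other re-derives roughly the same cascade statistically from records of past human decisions:

```python
# Two flavors of automating a previously human decision.
# All rules and data here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

def approve_loan_classic(income: float, debt: float) -> bool:
    # Classic automation: the human cascade, hard-coded.
    if income < 20_000:
        return False
    if debt / income > 0.5:
        return False
    return True

# "AI" automation: the same sort of cascade, except the
# thresholds are inferred from records of past human decisions.
past_applications = [[15_000, 9_000], [60_000, 10_000],
                     [40_000, 30_000], [80_000, 20_000]]
human_said_yes = [0, 1, 0, 1]

model = DecisionTreeClassifier().fit(past_applications, human_said_yes)
print(model.predict([[50_000, 12_000]]))  # a human decision, replayed
```

The learned version looks smarter. It is the same cascade, fit rather than typed.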

This is an obvious problem to anyone thinking about (as opposed to selling) AAI, so we seem to have accepted — like a bunch of little Picards — that AAI may be some new sentient being, one which we do not expect to obey us, and which we might ultimately have to obey. But even that critique of the assumptions driving the development of AAI relies on silly assumptions. Why would a new consciousness care about domination? Because we do? Then is it new? Or is it just a puppet we forgot is our hand?

If the worst form of literature ever wasn’t arguments wrapped in bad fiction like Ishmael or the works of Ayn Rand, I’d novelize that sarcastic flash fiction opening. It would be another Pygmalion derivative like Her but not as good, or Ex Machina but not as bad. Except that in the end the horror of the robot monster is that it’s just another person: insecure, obfuscated, maybe fun to drink with, but mostly just present, somewhere in their little world, which exists only as a simulation running on $1 million/day of petaflops on a neural net supercomputer the turning off of which would be murder.

Point being that we have every reason to assume that a sentient AI we created would be a projection of not our most violent selves, but our most boring pointless selves. (Existence is pointless, that is, lacking any point outside itself. Shit — by which I mean Being — seems to be a sort of ecotautology; it reproduces itself and that’s it. Everything is just the accumulation of self-reproduction, just is, and then is more.) Our assumptions about the power and will of our mechanical creations are limited to the experience of augmenting our own existence with those machines. We fear the unknown difference from us that AAI might obtain because we simultaneously and incoherently take as known that in it something like our own progression would obtain.

That little-considered outcome for AAI, just creating standard chaotic life, would be not so much evil as criminally pointless and wasteful. It would accomplish nothing not accomplished by unprotected sex, yet abstain from the latter’s elegance, meaning, and fun, and all at enormous cost.

Or, sure, maybe AAI would advance beyond our comprehension into something terrible, in some way beyond terrible, incomprehensible to humans. This seems to be what the world’s shut-ins, tech-money-bubble denizens and other isolated, non-representative human samples believe. That is the other possibility of real AAI, but it is of course much worse, and in all cases equivalent in its absolute dumbassness. Which brings us back to the idea of AAI’s “ontology.”

What will AAI finally be?

These two possible scenarios have exactly one and the same outcome as systems design: a black box. Yet unlike useful black boxes — intentionally decoupled components — these would have no useful, verifiable output, also known as a point. Even if an AAI being did develop abilities beyond ours — the kid got off the simulated couch and started simulating global air traffic — we by nature wouldn’t have intended or understood those abilities, not just in their implementation but in their purpose. Inasmuch as AAI “worked” it would therefore not work by any recognizable definition of the word.

All ensuing problems would be in no way qualitatively different from global thermonuclear war, stupid violence of our own making yet out of our control. But quantitatively they might be an apocalyptic superset of all past threats — really disrupting the entrenched apocalypse market. This is where AAI the New Being circles back into AAI the New Machine. Whether it’s a being or a machine, how would we know and why would we care? It would just suck.

No AI scenario can yield what the effort towards it seeks: either a powerful version of ourselves that can or would help us, essentially a superhero, or a capable but powerless version of same, essentially a slave.

So what’s the point? If we could actually build a neural network comparable in complexity and arrangement to our own (given that dendrites turn out to also cascade at random voltages) and do so in a substrate that could physically change (since neuroplasticity is probably intrinsic to our cognitive function) and have the nerds tasked with its ethics solve metaphysical issues that we hardly addressed in millennia of focused, iPhone-free contemplation, and discover and answer all of the other unknown unknowns that will arise, what would we have accomplished? And what could we have accomplished if our pursuits weren’t so nakedly driven by lost, rabid narcissism? What if instead of trying to build some better type of human, we tried to stop destroying the existing type?

Again, there is AI research that is just what its name says and is fine, maybe even good. But it is very fucking boring. (Not to me but to most, who maybe don’t seem so focused on bias or statistics or data-driven decision-making what with their forced precarity and all.) Artificial intelligence should be seen the same as artificial flavoring or artificial plants, weird, cheap, kind of dumb and not very good, but perhaps useful in small amounts in the right place.

Predictive models combined with strong social policy could democratize the work of distributing social services and empower local governments, our most important political field right now. Deduplicating media with machine learning could support citation and fight the erasure and cooptation of people’s labor. [2023 Editor’s Note: Or LLMs — literally Artificial AAI lol — could do the exact opposite.] With new jobs for the human drivers, self-driving cars might be an improvement upon existing cars. (It’s a very low bar.) But these systems have approximately zero to do with human intelligence. Analyzing your photos and navigating traffic patterns are just new types of automation, the codification of narrowly-constructed human activity so that humans can do less of it. (Or get paid less for it being done, but I digress.)
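As one hedged illustration of the deduplication idea (not any particular system; the snippets and similarity threshold below are invented), near-duplicate detection can be as plain as comparing word-frequency vectors:

```python
# Minimal near-duplicate detection: flag texts whose
# word-frequency vectors point in nearly the same direction.
# Snippets and threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "local org wins fight for tenant protections",
    "tenant protections won after fight by local org",
    "city approves new stadium subsidy",
]

vectors = TfidfVectorizer().fit_transform(posts)
similarity = cosine_similarity(vectors)

# Pairs above an (arbitrary) threshold are candidate duplicates,
# i.e. the same labor resurfacing without citation.
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if similarity[i, j] > 0.5:
            print(f"posts {i} and {j} look like the same work")
```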

It’s Idiots All the Way Down

AAI — imagined, it seems, as the great single scythe of all technology finally closing in on us — is just that, an imagining, a fantasy, a parable, entirely about our minds and not at all about our world. It’s the new Frankenstein’s monster, who (plot twist ugh) becomes Dr. Frankenstein. AAI is a metaphor, a godhead superhero or zombie vampire reflecting now the hazards of post-late-capitalist-post-modernity, rather than our ancient murderous and sexual impulses. (Wait no AAI definitely is also about our murderous and sexual impulses, jeez Westworld.)

And of course it’s about capitalism. After all, what could be a more perfect excuse for some new innovative slavery than the disruption of the definition of a human? Machine learning is all about training the algorithm, a potentially lower-skill task than any we’ve ever paid people (shit) to do. And every new problem space, from FedEx dispatch to nanite lead pipe lining, will need new training. Way back in the mid ’10s, a bunch of AI assistant startups turned out to mostly run on people working insane hours pretending to be AIs, via a “training” interface that was more like a chatroom with an autoresponding FAQ and a little squirt of outsourced machine learning juice.
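A sketch of that architecture, every name and canned answer invented here, since none of those startups published their internals:

```python
# The mid-'10s "AI assistant," roughly: an FAQ autoresponder
# with a human behind the curtain. All names and canned
# answers are invented for illustration.

CANNED_ANSWERS = {
    "hours": "We're open 9am to 9pm, seven days a week!",
    "refund": "Refunds are processed within 5 business days.",
}

def ask_underpaid_human(text: str) -> str:
    # Stand-in for the chatroom "training interface": in the
    # real thing, a worker typed the reply in real time.
    return f"[a human, pretending to be an AI, answers: {text!r}]"

def handle_message(text: str) -> str:
    # The "machine learning": keyword lookup against an FAQ.
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in text.lower():
            return answer
    # Everything else goes to a person paid to sound like a bot.
    return ask_underpaid_human(text)

print(handle_message("what are your hours?"))
print(handle_message("can I bring my ferret?"))
```

The autoresponder handles the easy half; the human handles the rest, at chatroom speed and chatroom wages.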

Think of a call center job. Now take away any remaining human interaction and flexibility. Now make employees fully interchangeable so long as they can use a smartphone. Those will not be the honest working class jobs of yore. Market-rate wages for such jobs would approximate zero. Perhaps compensation would include cool startupy perks like food and shelter. Seems familiar.

But that’s another story, for us to tell for the rest of our lives and prevent. For now let’s just remember that Futurists aren’t talking about the future, which does not exist. They’re talking about the present, and those parts of it they feel able and/or entitled to change. Ray Kurzweil is a few notches on the crazy pole above Marshall Applewhite. Elon Musk is running a long con. They’re grifters, fools, a distraction. The idea that Anthropomorphic Artificial Intelligence is something we could, should, or would even know what it meant to build is inextricable from an old, dumb and resurgent idea: That people are machines — something that is created, controlled, comprehensible — rather than the creators and controllers of what we comprehend, an end unto ourselves.

The machine we should most fear is the one we’ve been convinced we are.
