Going to be a short newsletter today because Sarah and I did a super-sized Secret Podcast this morning. You’re going to love it. Instant classic this week.
Also: Next week is going to be a weird Triad schedule because I’m not sure when I’ll find windows to write while heading to Minneapolis for the live shows. (It looks like just a few tickets are still available for the February 18 event.)
1. Systems
This week a tech guy named Matt Shumer wrote a big, apocalyptic warning about the chaos AI is about to unleash.
His essay is worth your time, so I’d encourage you to read it in full. But the basic summary is:
AI has experienced a step-change in quality in recent months.
The pace of AI improvement, from model to model, has sped up noticeably.
AI models are now useful enough that they help build their successors.
There’s a lot more in it. Again, read the whole thing. What worries me—what I want to talk about today—is the problem of speed.
If I could do one thing to change American education, it would be to focus on ecology early and often. That’s because humans don’t think enough about systems, and the easiest way to introduce the concept of a system is to talk about local environments.
Get kids thinking about how an ecosystem works and they can learn how a financial market, or an industry, or a network functions. It helps them understand stable states, and systemic shocks, and evolutionary change. There’s a lot to learn.
One of the big lessons of ecology is that complex systems are tremendously resilient and adaptable if the change comes slowly enough. Complex systems are not vulnerable to change so much as they are vulnerable to shocks—sudden, rapid change.
That’s what worries me most about AI.
In the early days of ChatGPT, people were worried about the robot apocalypse. Today the big fear is white-collar job displacement, especially at the entry level. What happens when AI can do everything a paralegal, or a research assistant, or a data analyst does, and do it more cheaply? What happens when AI can do journalism, coding, graphic design, and anything you might have hired McKinsey to do?
A lot of white-collar workers may be out of a job.
That wouldn’t worry me if it happened over the course of twenty years. Because the market would adapt. New industries would emerge; new pathways would be established. The system would find a Pareto optimal state.
But what if the pace of adoption is much faster? What if the AI-induced shifts happen over a 5- or 10-year timeline?
It’s the second-order effects that scare me.
Let’s say you own Acme Widgets and you have a stable business. You discover that you can keep output constant but cut costs by employing AI to do the work of 10 percent of your workforce. So you cut those workers. You’re now making more money. Good for you.
But workers are also consumers. And if many other companies are also finding productivity gains by replacing their workers with AI, then suddenly there are going to be a lot of workers without jobs—which means a lot of consumers without paychecks.
Which means a lot of crashing demand for goods and services, across the board.
I suspect that there’s a sliding scale for AI adoption: if the number of displaced workers is low enough, then the value of the productivity gains outweighs the value of the consumption lost from unemployed workers. But as you slide up the scale to more and more workers being displaced, that balance shifts. There must be a point at which the destruction of jobs is actually a net harm to the macroeconomy—because the collapse in consumer demand far outweighs the productivity gains.
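To make that tipping-point intuition concrete, here is a toy back-of-the-envelope sketch. Every number in it is an assumption I am inventing purely for illustration (how much output a firm keeps per displaced worker, how much spending disappears with each lost paycheck, how hard the knock-on demand effects hit); the point is the shape of the curve, not the specific values.

```python
# Toy model of the displacement tradeoff described above.
# All parameters are made-up assumptions for illustration only.

def net_effect(displaced_share, productivity_gain_per_worker=1.2,
               lost_spending_per_worker=1.0, demand_multiplier=1.5):
    """Net macro effect (in arbitrary 'worker-output' units) of displacing
    a given share of the workforce.

    - productivity_gain_per_worker: output firms keep after replacing a worker
    - lost_spending_per_worker: consumption that disappears with the paycheck
    - demand_multiplier: knock-on damage from lost spending, which grows as
      more of the economy is hit at once (the 'shock' part of the argument)
    """
    gains = displaced_share * productivity_gain_per_worker
    # Assume the demand damage compounds as displacement rises:
    losses = displaced_share * lost_spending_per_worker * (
        1 + demand_multiplier * displaced_share
    )
    return gains - losses

for share in [0.02, 0.05, 0.10, 0.20, 0.30]:
    print(f"{share:.0%} displaced -> net effect {net_effect(share):+.3f}")
```

With these invented parameters the net effect flips from positive to negative somewhere between 10 and 20 percent displacement. The exact crossover is meaningless; what matters is that a crossover exists once lost demand compounds faster than the productivity gains accumulate.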
Of course, what I’m describing here is a classic shock. If either the scale of the change is small enough or the timeline in which the change is introduced is long enough, then the system will be able to manage it. Not painlessly. Not perfectly. But we’ll all muddle through to some new equilibrium.
What worries me is that if the pace of AI development is accelerating, then we’re both (a) increasing the scale of the coming change and (b) shrinking the timeline on which it will arrive.
Which is a recipe for overwhelming the system. And when systems—even complex systems—become overwhelmed, they are vulnerable to collapse.
2. Artifice
I don’t know what the answer is here. Maybe AI will be less impactful than people like Matt Shumer think. Or maybe it will develop on a time horizon that is manageable.
But if it’s neither of those things? If it’s as disruptive as people expect and it materializes faster? Then what?
One option would be artificial controls.
Technologists like to say that genies cannot be put back into bottles, but that is not exactly true. A technology can’t be unlearned, but it can be regulated. It is possible for our society to choose to limit the application of AI, and to enforce that limitation by law.
Maybe that’s a bad idea. Maybe it won’t be necessary. But it is an option. Just as we have rules for how labor works, or salaries are paid, or taxes are levied, we can create rules that govern how industries may use AI.
We do not have to walk into a dystopian future just because OpenAI builds it.
We have agency. It is possible to use society’s power—the consent of the governed—to establish laws that mandate the use of certain human labor and prohibit the use of certain machine labor. This is no different in principle from how regulations and laws govern the use of pesticides, or the genetic manipulation of crops, or the use of chemicals on livestock.
We get to decide how technology is used, or whether it is used at all.
This seems like something the next Democrat who wants to be president should think about. 🤷‍♂️
3. Claude
Anthropic’s Claude is my AI of choice. The New Yorker has a profile of it:
A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. Similar piles are part of the furniture of everyday life. Meteorologists use them to predict the weather. Epidemiologists use them to predict the paths of diseases. Among regular people, they do not usually inspire intense feelings. But when these A.I. systems began to predict the path of a sentence—that is, to talk—the reaction was widespread delirium. As a cognitive scientist wrote recently, “For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind.”
It’s hard to blame them. Language is, or rather was, our special thing. It separated us from the beasts. We weren’t prepared for the arrival of talking machines. Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the “fanboys,” who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist Marc Andreessen has described A.I. as “our alchemy, our Philosopher’s Stone—we are literally making sand think.” The fanboys’ deflationary counterparts are the “curmudgeons,” who claim that there’s no there there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book “The AI Con,” the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as “mathy maths,” “stochastic parrots,” and “a racist pile of linear algebra.”
But, Pavlick writes, “there is another way to react.” It is O.K., she offers, “to not know.”
What Pavlick means, on the most basic level, is that large language models are black boxes. We don’t really understand how they work. We don’t know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. But she’s also making a more profound point. The existence of talking machines—entities that can do many of the things that only we have ever been able to do—throws a lot of other things into question. We refer to our own minds as if they weren’t also black boxes. We use the word “intelligence” as if we have a clear idea of what it means. It turns out that we don’t know that, either.
Now, with our vanity bruised, is the time for experiments. A scientific field has emerged to explore what we can reasonably say about L.L.M.s—not only how they function but what they even are. New cartographers have begun to map this terrain, approaching A.I. systems with an artfulness once reserved for the study of the human mind. Their discipline, broadly speaking, is called interpretability. Its nerve center is at a “frontier lab” called Anthropic.
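If you want to see that “words into numbers and back into words” loop in the flesh, here is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 as a stand-in model. Claude’s weights aren’t public, so this illustrates only the general mechanism the excerpt describes, not Claude itself.

```python
# A minimal sketch of the words -> numbers -> words loop described above,
# using the open-source GPT-2 model as a stand-in (Claude's weights aren't
# public, so this shows the general mechanism, not Claude itself).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Language is, or rather was, our special thing."

# 1. Words become numbers (token IDs).
inputs = tokenizer(prompt, return_tensors="pt")
print("Token IDs:", inputs["input_ids"][0].tolist())

# 2. The numbers run through the model, which predicts the next numbers.
output_ids = model.generate(
    **inputs, max_new_tokens=20, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

# 3. The predicted numbers are turned back into words.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```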




You miss one point: no one knows if it is cheaper. These companies are lighting money on fire for their janky shit. When the dust settles, who is going to use this if generating an email costs 100 dollars? But the destruction by then may be unstoppable. It is so irresponsible to push this out in the manner it’s happening now. I work in tech and my C-suite is forcing this shit on us and threatening to fire people who don’t adopt….
Two things:
1) The stock market lost its mind yesterday over some AI tech nonsense. So... all is not rosy among the AI-tech billionaires and their LLM utopia. I'd love to see a huge swath of them lose their shirts over their promises made of smoke.
2) I am in grad school studying Linguistics. My area of study is Language as a Cognitive Science. I tend to agree with Bender and Hanna. AI is not this amazing thing. It's database calls at super-high speeds and zeros and ones. No consciousness is emerging from this. It's math.
I see a definite trend among the tech bros who talk about AI as if it's a magic black box that no one really understands. No, bro. *You* don't understand it. But the people who programmed it do. You often see this kind of "It's-so-amazing-I-can't-even-explain-it-to-you" talk among people like Elon Musk and Peter Thiel, who aren't in any way experts in the things their companies make. They didn't build it. But they're the Face Men. So they have to front and say it's AMAZING! And maybe it is amazing. But just because they don't understand it doesn't mean no one understands it. It's really tiring to hear them use overly simplistic vocabulary to explain something they don't understand to an audience that they don't understand either.
In my world, nearly every time someone asks AI to do a job beyond crunching some data, it performs so badly that a bunch of know-nothing freshmen could have done it cheaper and faster and more correctly. So there is that. My friend asks ChatGPT everything like it's her personal life coach. The handful of times I've tried to use it, it gave me an answer that was diametrically opposed to the correct answer, so I think it's not worth using. I'll look it up myself, thanks.
Yes. We can decide what AI is and what AI does.
But we might want to start thinking about Universal Basic Income unless we want more people living on the streets than paying their rent.