https://citrini.substack.com/p/2028gic?utm_campaign=post-expanded-share&utm_medium=web
"Governments should consider taxing AI to cushion the effects of sweeping job losses, according to Alap Shah, co-author of a Citrini Research report that warned about tech disruption and fueled an AI scare trade"
THE 2028 GLOBAL INTELLIGENCE CRISIS
A Thought Exercise in Financial History, from the Future
I recommend a healthy dose of long-form skepticism to temper any hand-wringing about AI:
https://www.wheresyoured.at/the-case-against-generative-ai/
Zitron has been writing exhaustively about Silicon Valley’s AI sham; he was one of the earliest voices calling it out. He also wrote a profoundly accurate non-AI-related piece about the corporate business environment that has allowed thoughtless, inane AI boosterism to thrive. A must-read for anyone working in corporate America:
https://www.wheresyoured.at/the-era-of-the-business-idiot/
He’d be a great pod guest!
Gary Marcus is another voice of sanity and an AI expert:
https://open.substack.com/pub/garymarcus/p/rumors-of-agis-arrival-have-been?r=e33fk&utm_medium=ios
The technology is not all it's cracked up to be when applied to most real-world tasks, let alone full jobs. Silicon Valley is relying on blind acceptance of its claims that AI is coming for everyone's livelihood, because it is out of real product ideas and desperate to grow forever.
AI may destroy the economy, but that will be because greedy fools have propped the economy on top of unvetted, fairy-tale claims.
JVL, you should give yourself a homework assignment and re-read (read?) Brave New World, or at least the first 4 chapters or so, and focus on the economics. Therein lies one possible solution for the AI economy. Not ideal, but possible.
On the LLM issue, good grief. It's not LLMs that make me despair, it's the failure of their creators, enhancers, and users to make the effort to understand the so-called "black box." If they persist in saying they don't know what's going on in there, that's more a mix of laziness and winking marketing than reality. The world was in awe of IBM's Watson 15 years ago, when it was really not much more than hyperfast hardware running a natural language interface for the most optimized search engine money could buy. LLMs are doing something analogous with statistical language use. It might be rocket science—but it's not magic.
Or, we can rejoice in the fact that AI means the collapse of our patriarchal/capitalist society, which is long overdue. I say bring it on.
Okay, one: all the extant patriarchy and capitalism are built into the data set, which means whatever AI comes about is only going to entrench those things.
Two: societal collapses are immensely destructive things, and if you say "yes, let's collapse society," you're also saying you're all right with the most vulnerable dying by the thousands for your more ideal society—because that's who dies in societal collapses, not the rich people.
Big Laffer Curve feels
"A scientific field has emerged to explore what we can reasonably say about L.L.M.s—not only how they function but what they even are."
Gee, I wonder if they'll have more success guiding American policy than the global climate scientists?
The depressing thing—if you allow yourself to go in for that—is that we sit at a moment in history where essentially the same people who brought us social media intoxication and delusion are now sitting at the controls of the AI phenomenon. They don't really know what it is, they don't really know how it works, and now they're letting it off the chain to build itself, because all they really care about is being "first." First to what? Who the hell knows; they just know that they sure as hell want to be first. Whether capitalism will even work when, what, only 80 percent, 50 percent, of the population works? They're not concerned about that. Think of the wealth transfer then. This country has always hated the idle soul of working age.
Meanwhile, Congress has never seen a wave it couldn't fall behind. They aren't even swimming, because they share similar imperatives with corporate leadership. I think the technical media has been slow on the issue, only starting to ring the alarm bell AFTER entry-level hiring got hit last summer. Sorry about that four-year comp sci degree, son. Now we can look forward to the day when congressional candidates have "reining in AI" as one of their top three issues. That should have started already. Can't wait to see what the next iteration of MAGA looks like. Maybe Fox can keep them placated with schadenfreude over white-collar job losses while home building craters. Yeah, same as it ever was. Could be depressing.
I had to get a temporary work visa with a letter from the university just for an eight day visit 😭 I hate it here.
Thanks JVL, for this article.
On a scale from Moltbook's Matt Schlicht to Eliezer Yudkowsky, you will find me closer to the latter. Although no one is really capable of imagining an intelligence that transcends human intelligence, I find Yudkowsky's claims compelling.
Considering the speed problem you describe and the time problem you brought up a week or two ago, Yudkowsky's warnings are convincing.
Two aspects:
1. AI or AGI may not have a will of its own, but it is output-driven. Can or will output, in the end, function like a will?
2. Alignment. The idea that AGI will be friendly in the future because we tweak it that way now is simply illusory.
I find Geoffrey Hinton's quote, "If you want to know what it will be like when your intelligence is no longer superior, just ask the chicken," quite telling. Hinton is considered a, if not the, father of machine learning.
When it comes to regulation, the EU at least tries to do something... The Digital Services Act drives Musk mad. That means the European Commission did something very, very right.
Ruurd
We have all seen how well regulations and laws have been working during the current regime! Early on controls over AI were eliminated. The EPA is being destroyed in our country—coal is now being called “clean” by Orange Foolious, and the future is in danger.
The other option, alongside regulation, is taxation. Cool, you're going to fire 10% of your labor force because of AI? Then you're going to pay to expand the social safety net, including healthcare, generous unemployment benefits, and skills retraining.
JVL, you need to see Good Luck, Have Fun, Don't Die. It grapples with everything you bring up in a shocking and interesting way. But it is predicated on the fact that we are walking into this too fast. And we are. The promised benefit of AI is a snake eating its own tail.
Exactly. In the past four months, I have changed the way I work completely, due to the increasing accessibility and broad reach of AI. As a Gen Xer (I graduated college before everyone had internet access and email), I work hard to keep up with the speed of everything, and now the pace of change feels exponentially faster. We need a new term for the 21st-century version of neurasthenia, and a way to preserve the natural speed of just being a human being.
One guy I follow for a balanced perspective on these issues is Tim Dettmers: https://timdettmers.com/
I like his takes because he uses AI extensively in sensible ways, and also has background in computer science, electrical engineering, and neuroscience to understand the issue from multiple perspectives. His posts are long but worth reading. One really interesting take is that he’s specifically bullish on the Chinese AI approach: broad adoption of cheap models in ways specific to the individual or institution, plus experimentation, rather than racing towards a winner-take-all, biggest and best (and most expensive) model. In his view, China wins the race by actually getting people using the technology.
The most discouraging place for LLM usage is academia, where most of my wife's students use it merely as a shortcut for completing assignments. If they are smart, they will observe the borders of the AI (what it does well vs. not well) and hone their skills in the "not well" portion. As for the big step in AI capability, I'll believe it when I see it. I'm using it daily for coding, and I bought a Tesla before peak Elon. The "not well" areas are still considerable.
This country couldn’t even agree to wear masks as a collective good in the middle of a pandemic. There is no way that folks whose jobs aren’t threatened will agree to bar the use of AI models (when using them could reduce their costs).
Isn't this exactly what happened when we adopted NAFTA and other free trade agreements? We zeroed out consumer demand in small towns, and that loss did NOT get outweighed by productivity gains, because the cheaper widgets were sold in rich cities, not so much in those small towns.