Couple thoughts on this:
1) The best analogy I heard about AI today (particularly LLMs) is that it's like when ancient man discovered fermented beverages. He put a bunch of ingredients into a pot in the corner of the cave, went out hunting for a few weeks, came back, and had magic. He didn't know how the magic worked; he just knew how to make it. Some of the ingredients we put into LLMs may turn out to be toxic to us, by the way.
2) It's on us to decide what we do with the technology we develop. Nuclear weapons aren't super sophisticated once you understand how they're made. We have limited their ownership by treaty and by restricting access to fissile material. AI is easily 100x more dangerous than nukes because the attack vectors are all over the place.
Tech bros need an attitude adjustment, and we need to have a real talk with China about our mutual concerns. Not sure I trust the Trump admin to handle ANY of this, but hey, a guy can dream.
3) From what I've seen, the tools that exist save time, but they don't necessarily replace expert judgment and expertise. Maybe someday the machines will be able to "what if" with me, but until then I'm fairly convinced the world needs me for a bit. Most of the work the machines replace is administrative and "scut" work, where you need a document typed up or a model created. I'm really good with Excel, but the real value I bring is the ability to determine what to model and what it tells us when we're done.
The biggest concern here for me is: who replaces me? If the junior staff are never hired and never cut their teeth building the expertise, how do we get better? In a lot of ways it resembles when we outsourced staff to India or other markets to turn the crank but not do the actual higher-level work. India got progressively more expensive, to the point that it didn't make sense to outsource everything.
4) The real cost to run all of these models is WILDLY underreported. ChatGPT loses money incrementally on every prompt. If they just lost money on training and the steady state was break-even or better, I might believe they're a viable company, but that's not the case. The price of using their model is way below what it will be at steady state. This is before we even know what the long-term capex requirements are for this stuff. The servers and chips aren't going to run forever; they burn out.
If they think they're going to Trojan-horse their way into corporate America, they may be surprised. We've already seen this show before, with Amazon, MSFT, and others selling at less than cost and then ratcheting up pricing later. If you think the CFOs of American companies are that dumb, well, you might be right, but I wouldn't bet on it.
Their IPOs later this year are going to be fascinating to read.
The reason I think this could all blow up is that there is a real greed problem among the corporate and financial elites. The CEOs all know the big picture suggests that too many layoffs all at once can create problems for everyone, but at the same time they don’t want to be the one caught with too many workers and not enough productivity.
Great Triad.
I totally agree that new technologies can and should be regulated for the benefit of the governed, and yes, the next cohort of Dem presidential candidates needs to be considering the macroeconomic issues you raise.
Among others. The tech companies have walked all over copyright law and other IP rights to create their models. The data center build-out to support AI development stresses the grid and raises consumer electricity prices. There's a shortage of RAM for many devices because producers have focused on AI chips. We need to consider that criminals and terrorists will try to use AI. Etc., etc. In short, there are lots of reasons for society, through government, to consider regulating this technology.
Amid all the hype, I've wondered why the "law of diminishing returns" (aka 80/20 rule) wouldn't apply to AI also. At some point, new models will be more and more expensive to create, for incremental gain in performance. Model subscription costs, low now, will go up. There will be business environments where adoption doesn't make sense or is limited.
Do we need to have an AI of choice? I am sure that if you are still working or in school, you do. I am lucky to be retired, and I am not interested in deliberately using AI. Oh, I know that like Trump, it seeps into my life no matter what I do. But I do not have to deliberately use it. I know that AI will almost certainly be a boon in some areas, but I do not like the idea of young kids using it. Research has shown that writing information down helps you to remember it. I fear that in academic life, it will promote a kind of laziness.
I hope that if Dems get into power in the next few years, they take the time to think about how to regulate AI. Since I am wishing, I also hope that we make changes to the SC, like age-mandated retirements or retiring after serving for a set period of time. A SC with some younger Justices *might* be more likely to see the need for AI regulation, or even for forcing some of these firms to break up (anyone else old enough to remember the "Ma Bell" breakup?), than some of the dinosaurs currently serving on the Court.
My two favorite pessimists disagree with each other! Ed Zitron has a different read of Shumer's piece...
https://bsky.app/profile/edzitron.com/post/3meps7fqw4c2t
There are two constraints on AI that aren't being discussed.
1. No one knows that this will actually work and not be a bubble. Consider its predecessors: Nuclear power will make electricity too cheap to meter. XML will take over the world. SaaS agents will eliminate software packages--people will just cobble together agents that will do all our work. Machine learning will solve everything. Spoiler alert: none of these things happened. They all hit walls. There's no reason why generative AI won't hit one too.
2. Generative AI takes enormous amounts of electricity. For AI to do half of what people say it will, generation capacity will need to at least double (if they can use electricity more efficiently) or grow to ten times current output (if they can't).
Where is the electricity going to come from? Right now, it is inflating electricity prices for other consumers. People are already pushing back as their bills have tripled or quadrupled in some markets. We are running into Herbert Stein's Law: anything that can't continue will stop. This is not sustainable.
New generation is expensive to build, especially non-interruptible power. Microsoft is taking Three Mile Island out of mothballs. There are a couple of other nuclear facilities that could be revived, BUT doing so is wickedly expensive. Coal is not cheap; it is among the most expensive sources of electricity. NIMBYism will limit where coal plants can be built or revived. Electricity from hydro (dams) is being curtailed due to persistent drought in the West. The Trump administration is limiting solar and wind. Where is this energy going to come from? Who is going to pay for it?
Full disclosure: I am a computer programmer who worked for 10 years for a power company in the PJM region. I am now going to give you a history lesson. Aren't you lucky.
First, a clarifying statement: Deregulation was not intended to make consumer electricity cheaper. It was designed to make commercial electricity cheaper and to hobble state Public Utility Commissions (PUCs). It led to the creation of PJM and other interconnects.
Here's what happened: In the Great Depression, power companies were turned from brittle monopolies--which both charged excessive amounts and did not reach into areas where profits were not guaranteed--into utilities regulated by state PUCs. The leaders were extremely irresponsible, as was revealed in news articles and congressional hearings. It was decided that utilities would have a guaranteed rate of return on rates set by a PUC. The rates for household and commercial accounts were very different. Commercial accounts were expected to pay the amortized cost of building new generation, transmission, and distribution. A large commercial enterprise could consume the output of one or more generating stations and an entire transmission network; a household was a tiny blip in terms of demand. As part of the deal, utilities had to ensure reliability, which included planning for future growth.
In the post-WWII era, commercial growth was enormous and utilities looked to nuclear to meet the need. As noted above, it was originally thought that nuclear was going to be really cheap. For a lot of reasons, this was very, very, very wrong. And then disaster struck: Three Mile Island almost melted down, and the industry side of the commercial market collapsed in the 1970s and 1980s. The utility I worked for lost a third of its demand as the main industry and its feeder businesses shut down. It had stranded assets (actual term), built for a projected increase in power demand that didn't occur. The costs of these assets could be spread to commercial rate-payers (another actual term) but not to households. Commercial rates soared.
Commercial rate-payers started the deregulation push in order to lower their rates. They wanted the ability to buy from utilities with lower rates (fewer stranded assets). Electric utilities wanted out of building generation. The idea was that merchant plants would spring up and supply the demand in response to market signals. At the time this was proposed, it was a stupid idea. We had base load (uninterruptible power) provided by coal, nuclear, and hydro (dams), and peaking plants that ran on natural gas, propane, and jet fuel (really). The amount of electricity produced had to exactly match the amount consumed or the grid would fail--see the regional Northeast blackouts of 1965 and 2003. Generating stations cost hundreds of millions of dollars to build. Without a guaranteed return, no one would entertain the idea. As I said, stupid.
Then along came the fracking boom, which made it practical to build relatively cheap, small gas-powered merchant plants. Further, renewable energy became practical and utility-scale battery storage came online. The angels sang. The biggest remaining issue was NIMBY opposition to transmission lines.
We started contemplating electrifying everything. Naysayers said we would need to triple energy output, and how did we plan on doing that--apply photovoltaic cells to everything? Yes, actually. Every parking lot and roof, the windows of large buildings. With increased efficiency, we could do it.
Now comes the AI boom. It wants the entire output increase that the Electrify Everything deniers felt was impossible. Its power consumption is a true externality. How will this power be produced? What are the environmental implications? Are these merchant plants? Will they be attached to the grid? We are already at the limit of what households and, presumably, non-AI commercial rate-payers will pay for increased transmission capacity. The current rate structure says that all customers pay whether they benefit or not. If AI fails, there will be an enormous amount of excess generation. Who will pay for it? What happens when the Independent Power Producers go bankrupt? It will be the 1970s all over again. The silver lining is that the electrify-everything folks will be happy. They are rooting for this, which is its own problem.
The largest constraint on AI is its power demands, and that may keep it from taking over the world and our jobs. Even if it works, which, you know, is a big if.
AI models can get bigger, but there are diminishing returns. A model that is ten times bigger isn't ten times better--more like 5% better.
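To make the diminishing-returns intuition concrete, here is a toy sketch in Python. It assumes model quality follows a power law in parameter count; the exponent and constant below are made-up illustration values, not measurements from any real model family.

```python
# Toy illustration of diminishing returns under a power-law scaling curve.
# The exponent (alpha) and constant (c) are assumptions for illustration only.

def loss(n_params: float, alpha: float = 0.05, c: float = 10.0) -> float:
    """Hypothetical test loss as a power law in parameter count."""
    return c * n_params ** -alpha

base = loss(1e9)     # a 1-billion-parameter model
bigger = loss(1e10)  # a model ten times larger

improvement = (base - bigger) / base
print(f"10x the parameters -> {improvement:.1%} lower loss")
# With alpha = 0.05, ten times the size buys roughly an 11% loss
# reduction, not a 10x improvement -- and each further 10x buys
# the same modest step, at ten times the cost.
```

Whatever the true exponent turns out to be, the shape of the curve is the point: each order-of-magnitude increase in size buys a roughly constant, modest improvement.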
Also, the restrictor plate for AI is people. AI models can exist, but they won't do anything unless they get implemented, and people will be slow to do that. Think about driverless cars: the technology is there, but they are more of a novelty at this point. The only reason we still drive cars is because we want to. I don't see that changing anytime soon.
Possibly you haven't heard of the many CEOs who are forcing all their developers to use AI by making it a large component of yearly reviews and promotions...
AI cannot effectively run a vending machine. Claude tried it twice and failed miserably.
Google AI said Melania had a record-breaking box office, which a simple Google search proved untrue.
Hopefully we don't walk into this as blindly as we did the social media/iPhone revolution. Not great analogues to be extrapolated there, Bob.
The thing that frustrates me about AI adoption is that people assume that just because it CAN answer a question, you've gained all the knowledge you need from its answer. There are so many nuances and data points and, frankly, so much real human experience that have far more impact on businesses than the answers AI can give. In the market analysis and competitive market research spaces, everyone just wants AI to give them a thumbs up. It feels like a box-checking exercise where they never intended to actually learn anything; they can say "research was conducted" and keep the board happy. But every time I've done an actual human analysis for the same company that used AI the first time, I've absolutely proven that these tools are woefully inadequate at pulling out all the nuances that might be important for corporate budgets. And yet, AI is faster and cheaper, so why wait for a human to give us a better answer that makes us more money and avoids more pitfalls when we can just AI it and have all we need right now, with no foresight into what's actually better for the business? Where I've ended up in my industry is that I'm losing money because more companies are using AI, which is losing them money in the future because they half-assed the research process. What a win-win.
Another issue that concerns me: as with the internet, AI is benefiting from immense public investment and tax incentives, not to mention decades of deep investment in the educational institutions, faculty, and students that have led to the capacity needed to develop and deploy AI. But as with the commercialization of the data revolution, a relatively few individuals massively profited.
It kinda reminds me of the industrial revolution. A few oligarchs profited massively from gains in productivity while rank-and-file workers subsisted on whatever the robber barons could bear to part with.
So it will be with AI. The robber barons of this ‘gilded age’ are already positioning society for neo-feudalism.
I'll miss your Triad next week. Luckily I'm feeding all the old ones into an LLM, so I can generate my own JVL Triad in your absence. Thanks for helping train the model! /s
The aspect of AI that always seems to get skipped is the cost. That Substack you linked practically demands we run out right now and get a $20-a-month account with an AI so we can save ourselves by learning how to use it. A lot of that post could be viewed as concern trolling: they want you to buy into their thing, framed not as something that benefits them if everybody pays the money, but as worry that you might fall behind.
But my brain keeps going back to cost. Does anybody believe that $20 a month gets you 4 hours of data center time a day to have your work done?
All those data centers, all the people needed to keep them running, all the electricity needed to drive all those processors and the cooling and all the network infrastructure that makes this all go? How many people do you need paying $20 a month to cover all of those costs?
AI companies have been operating at a loss for years now, in part because they want to give something away for free to get us hooked, and in part because the AI slop they have been offering had very limited value: you have to spend as much time correcting the output as it would have taken to do the work yourself.
But let's assume, Matt Shumer, that somehow this time it has all been perfected and it can do all our non-managerial white-collar jobs. What will be the cost to access this nirvana? And remember, nobody wants to break even on this deal. The AI companies need to recoup losses, pay off debts, and make all the VCs even more obscenely rich for this to be at all a success. Oh, and liability: if the AI companies succeed, they will be the target of a multitude of lawsuits over the outright theft of work.
That is all going to require a lot of cash flow.
I feel like a $20 a month account isn't going to cut it... even before most of us end up unemployed and are unable to afford even that luxury.
You ask: "Does anybody believe that $20 a month gets you 4 hours of data center time a day to have your work done?" That is not how it works. You mean 4 hours of real time (clock time) that you as a human experience. During that time, the computers run your queries in a few seconds of processor time, or at most a minute. That is why most responses appear almost instantaneously.
Furthermore, you are only using a tiny fraction of the processors available in the data center. There are up to 500,000 GPUs (computers) in an AI data center. Your query uses only one, or in some cases 8 or 16 for particularly large problems. Internal processing time for a typical query is 0.1 to 1.0 seconds. So, in 4 hours you use 0.002% of the available processors for perhaps 20 seconds. I would, anyway. It takes me a while to type the query, read and think about the response.
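Here is that estimate as a quick back-of-envelope calculation in Python. Every number is an assumption taken from this comment (GPU count, GPUs per query, per-query time, queries per session), not measured data-center figures:

```python
# Back-of-envelope version of the utilization estimate above. All
# numbers here are this comment's assumptions, not measured data.

gpus_in_center = 500_000   # GPUs in a large AI data center (assumed)
gpus_per_query = 8         # GPUs serving a single query (assumed mid-range)
seconds_per_query = 0.5    # internal processing time per query (assumed)
queries_per_session = 40   # queries in a 4-hour working session (assumed)

busy_gpu_seconds = gpus_per_query * seconds_per_query * queries_per_session
available_gpu_seconds = gpus_in_center * 4 * 3600  # 4 hours of capacity

share = busy_gpu_seconds / available_gpu_seconds
print(f"GPU-seconds consumed: {busy_gpu_seconds:.0f}")
print(f"Share of the center's 4-hour capacity: {share:.8%}")
# With these assumptions: 160 GPU-seconds out of 7.2 billion available,
# a vanishingly small fraction of the data center's capacity.
```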
I run an LLM on my PC, qwen3-30b-a3b (the 30-billion-parameter version). It, too, generates responses almost instantaneously, even though it is running on the Intel processor, which is far slower than a dedicated AI GPU. The time I spend thinking about, typing, and then reading the responses must be thousands or millions of times longer than the time the computer devotes to the problem. After all, an Intel i9 processor does 100 billion to 1 trillion integer operations per second, far more than you can do in a lifetime. My LLM has 30 billion parameters, but it sorts through them in seconds.
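For anyone who wants to try the same experiment, here is a minimal sketch using the llama-cpp-python bindings. It assumes you have a local GGUF copy of the model; the file path is hypothetical, and the exact speed will depend on your hardware.

```python
# Minimal sketch of timing a local model with llama-cpp-python.
# The model path is hypothetical; point it at your own GGUF file.
import time

from llama_cpp import Llama

llm = Llama(model_path="qwen3-30b-a3b.gguf")  # hypothetical local path

start = time.perf_counter()
out = llm("Explain the law of diminishing returns in one sentence.",
          max_tokens=64)
elapsed = time.perf_counter() - start

print(out["choices"][0]["text"])
print(f"Generated in {elapsed:.1f}s -- versus the minutes a human spends "
      "typing the question and reading the answer.")
```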
A former FDA attorney I worked with posted this article on LinkedIn the other day--mind-blowing stuff. The irony is that at a time when humanity should probably be shifting to spending most of our time on philosophy and real policy, instead we're spending all of our time on trolling and political gamesmanship. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6155012
Except the AI experts say you aren't going to get AI. You can get a good word-prediction program, a slightly better search engine. But you cannot escape the hallucinations. The best you can do is error-check your results against certain math checks that can alert you if you have corrupted data.
You can easily escape hallucinations. Ask the AI to provide sources for all assertions. It will link to online documents that you can examine; see for yourself whether the AI has interpreted them correctly. All large AI models now do this. In any case, hallucinations have been greatly reduced in the last few years.
Objections like this are whistling past the graveyard. AI no longer has the problems you think it does.
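If you want to automate the first step of that checking habit, here is a minimal sketch. It only confirms that cited URLs resolve; the URLs below are hypothetical placeholders, and judging whether a source actually supports the claim still requires a human reader.

```python
# Minimal sketch: confirm that the URLs an AI cites actually resolve.
# The URLs are placeholders. A reachable page is not the same as a
# correct citation -- reading and judging the source is still on you.
import requests

cited_urls = [
    "https://example.com/cited-report",
    "https://example.org/cited-study",
]

for url in cited_urls:
    try:
        resp = requests.head(url, timeout=10, allow_redirects=True)
        status = "reachable" if resp.ok else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url}: {status}")
```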
I wince at the suggestion that we don't fully understand LLMs--that they are simply black boxes. It's like a random number generator that uses a seed (perhaps the system time in nanoseconds) to calculate a number the moment we press a button. We may not know what the number will be, because we don't know the seed, but we do understand how the functionality works under the hood.
Other than that, totally agree with Ellie's take. AI is going to be 'O.K.'. Wouldn't hurt to have more discussion with an educated leadership, but I question how serious an issue this is, yet. Misinformation will continue to be a problem, but eventually, we will learn when and how something is determined factual in an AI world.