15 Comments
Jeff Biss

The threat AI poses is that people will believe the hype and put AI in charge of something they shouldn't, because, in a nutshell, its operation cannot be tested the way regular software can. For example, AI doesn't produce the same output for a given input every time.

As for Nate's "better than human": that requires us to understand how our brains work. Now and for the foreseeable future, AI will only be faster than humans, because of (a) processor clock speed and (b) essentially limitless memory. These systems do not think. There is no possibility of AGI until we develop "brains" in circuits, and if that ever happens, all that is necessary to defend against their power is to disconnect them from the network.

Jeff Biss

HAL in 2001: A Space Odyssey is a good example of a realistic depiction of AI: one trained in a different manner than today's systems, based on a model of how humans learn in a brain-like circuit rather than on the training currently run on CPUs or GPUs. Consider that 2001 was made in the late '60s, when there was no AI, so the fiction could have gone anywhere, yet it was scoped properly.

Susan Brewer

Listened to the book's authors, Nate and Eliezer, speaking with Sam Harris, too.

Sonny, thank you for opening this door for us here.

Martha A.

Excellent discussion!

Michael

Ex Machina might be a good movie to discuss with JVL and Sarah.

Tai

This was an amazing interview. Before AI kills us, we will have an extremely unstable economy. The rich will get exponentially richer while even a college-educated software engineer will have a hard time finding work or keeping a job. Walmart just said it will keep headcount flat over the next three years while continuing to grow, expecting AI to do the work. If a full-employment market last year resulted in the election of a right-wing demagogue, imagine what 10% unemployment and high inflation will get us. I am with Sonny that WE ARE DOOMED.

A Sarcastic Prophet

Humanity never gets its act totally together; that's what makes us interesting. Really enjoyed this. My favorite movie to think about AI is Star Trek: The Motion Picture, which some say gave us the Borg, but why would AI even bother?

One point of clarification: Apollo 1 wasn't a rocket issue. It was a cabin oxygen issue on the launch pad. Challenger was a rocket part issue on launch (the O-rings), and Columbia was a reentry issue. Like the Titanic sinking or Chernobyl melting down, human hubris plus flawed manufacturing and a lack of oversight equals disaster. Still, the point is taken about learning from major tragedy. One can ask: how many tragedies with AI are possible? The total elimination of the human population can only happen once.

Kay Ellen O'Maighe

Arguably, Columbia was a take-off issue (ice-saturated foam strike at 500 mph) that only became lethally catastrophic on re-entry. Given that shuttle missions had very limited surveillance and zero repair capability (no way to orbit over to the ISS and dock for a patch job, etc.), Columbia, as much as Challenger, was doomed before it left the atmosphere.

A Sarcastic Prophet

That's true; I forgot Columbia's issue was also a take-off issue. Both crews were doomed from take-off. Kind of like humanity and AI, maybe?

Clodene

This book needs a Chernobyl-style miniseries to really get everyone's attention.

Andrew

The notion that superhuman AI is likely or inevitable belies the reality that AI development is not accelerating; it is moderating. Most technology follows an S-shaped development curve, and AI is tracing a similar path, with each iteration of GPT now less of a leap forward in capability than the last. AI companies do their best to hide this fact with benchmark tests of limited usefulness. Or they flaunt how a model aced an academic exam but elide how error-prone the same models are in real-world applications, or even in basic arithmetic.
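To make the S-curve point concrete, here is a minimal sketch assuming a purely hypothetical logistic capability curve; the ceiling, midpoint, and rate values below are made up for illustration and do not measure any real model or benchmark:

```python
import math

def capability(t, ceiling=100.0, midpoint=5.0, rate=1.0):
    """Hypothetical capability score on a logistic (S-shaped) curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Model "generations" released at equal intervals past the curve's midpoint:
# each jump in capability is smaller than the one before it, even though the
# overall score keeps climbing.
scores = [capability(t) for t in range(5, 11)]
for gen, (prev, cur) in enumerate(zip(scores, scores[1:]), start=1):
    print(f"generation {gen} -> {gen + 1}: +{cur - prev:.1f} capability points")
```

Under those assumed parameters, the printed increments shrink from about 23 points to about 1, which is the flattening the comment describes.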

Random Reader

Current generation "AI" is no threat. Sure, it can write an essay (with some bullshit, just like many students do), or write and debug a simple computer program. Probably, yes, the current S-curve is flattening. Current high-end APIs (not the free trash) can solve many tricky problems in math and software engineering with at least 80-90% accuracy. But there's a bunch of stuff where the AI just fails, often in stupid ways. And even though the LLMs have improved dramatically in the last 24 months, there's still something missing.

But when one S-curve stalls, there's often another S-curve coming. And the next S-curve? I fear it takes us over the top. We don't even understand how *current* models work (because they're basically grown, not built). And if we ever build something smarter than us, the future is going to get real weird, real quick.

Andrew

I understand what you're saying, but I don't see evidence for your suggestion that another steepening of the AI innovation curve is just around the corner. Sam Altman teased that GPT-5 would be as powerful as the Death Star (whatever that means). It was a dud. It will help OpenAI manage its costs and demand, but that's about it. Where is the next major leap in AI innovation going to come from? Where are the signs it is imminent? From my vantage point, it looks like we've squeezed most of the juice from LLMs already, yet most of the major players in this space are still building within the LLM architecture.

Chas

Maybe AI and global climate change can engage in an epic battle to see which one gets the laurels for destroying civilization.

Jimmy Roe

So AI is Pandora’s box?
