Marcio Avillez:

Prof Newport, by conveniently bounding his discussion to how one implementation of an LLM actually works, skirts some issues that are worth considering. The quote he uses from the NYT article is a bit out of context; the authors of that article raise some genuinely relevant points that Prof Newport doesn't attempt to address in his New Yorker article. One can believe that any given LLM chatbot implementation is unlikely to massively displace humans from the workforce while still being concerned about other aspects of what we train these models to do.

Today we have an attention economy that is incredibly detrimental to reasoned discourse and is eroding the base of our democratic experiment, much of it fueled by AI-driven outrage/engagement mechanisms. There are things to be concerned about once these generative AI models get applied in other areas.

You are right, and I agree with you, that we need to use our intellect and knowledge to deal with the impacts of these new technologies. But it's not enough to decline to prognosticate about the impact a new technology may have on society. We have an obligation to learn from our history and, as best we can, avoid "predictable" surprises that are likely to harm our society. This is an area of responsibility that those of us who bring technological innovations into the world have long skirted.

Susan Schmerling:

You, like JVL, are confusing technology with technology's societal implications. It's the societal implications of AI that JVL is concerned with, but he wrongly assumes that his fears about the societal implications of this latest class of AI applications can be meaningfully compared with an expert's encapsulation of one of the technologies that undergird those applications: large language models. In doing this, he misreads a description of the technology as a "thesis" about its implications for society. This lack of technical expertise leads JVL to paraphrase the expert's encapsulation of the technology in a way that trivializes it (as simple guesswork); then, rightly seeing generative AI's non-trivial implications for society, he worries that the expert isn't alarmed where he should be.

The point I'd hoped to make was that only if we avoid this category mistake can we take a sober look at what the technology can and, crucially, cannot do, and only by doing that can we achieve a clear-eyed view of which sectors of society will be disrupted. Only experts can help us avoid the fear that ignorance engenders; we'll need experts in economics, to be sure, but we'll also need to recognize technical expertise in AI for what it is.
