There's a reason "Georgetown comp-sci professor Cal Newport ... is much more sanguine about AI than" you are: as someone who specializes in computer science, he's knowledgeable about the history of artificial intelligence, and that knowledge gives him a perspective that those outside the specialty can't share. Newport's position indicates that he knows AI didn't arrive on the scene last year; it has been driving huge changes in our society ever since its earliest widespread successes in automating assembly-line work. Technological change has always altered how things are done and has sometimes caused mass displacement of workers, which in turn has led to further social upheaval; think of the various "revolutions" brought about by the printing press, the steam engine, the internal combustion engine, the harnessing of nuclear fission, and now advances in semiconductor technology. So no, we have always conformed to technology rather than the other way around. But technology isn't some alien from another planet; it's a product of our own intellect, the same intellect that allows us to adjust to change when we don't panic. Newport's "thesis"--that "large-language models function ... by guessing the next word based on the preceding words according to the data set they've been trained on"--isn't speculation but a matter of his knowledge as a computer scientist. It is this knowledge that allows him to face the situation with comparative equanimity.
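To make Newport's description concrete, here is a minimal Python sketch of next-word prediction. It substitutes simple bigram counts over a toy "training set" for the neural networks that real LLMs apply to much longer contexts, so it illustrates only the principle he describes, not any production system; the corpus and function names are mine, chosen purely for the example.

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny
    # "training set," then generate text by repeatedly guessing the most
    # likely next word. Real LLMs condition on long contexts with neural
    # networks, but the objective Newport describes (predict the next word
    # from the preceding words, as learned from data) is the same in spirit.

    corpus = (
        "the printing press changed society . "
        "the steam engine changed industry . "
        "the internal combustion engine changed transport ."
    ).split()

    # For each word, tally the words that follow it in the corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent continuation of `word` in the corpus."""
        return following[word].most_common(1)[0][0]

    # Generate a short continuation, one guessed word at a time.
    word, output = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        output.append(word)

    print(" ".join(output))  # prints: the printing press changed society .

The point of the sketch is that nothing mysterious is happening at this level: the "guess the next word" objective is a well-defined statistical task, which is why a computer scientist can describe it without speculation.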
Prof Newport, by conveniently bounding his discussion to how one implementation of an LLM actually works, skirts some issues that are worth considering. The quote he uses from the NYT article is a bit out of context; the authors of that article raise some genuinely relevant points that Prof Newport doesn't attempt to address in his New Yorker article. I suppose one can believe that any given LLM chatbot implementation is unlikely to massively displace humans from the workforce while still being concerned about other aspects of what we train these models to do. Today we have an attention economy that is incredibly detrimental to reasoned discourse and is eroding the base of our democratic experiment, much of it fueled by AI-driven outrage/engagement mechanisms. There are things to be concerned about once these generative AI models get applied in other areas. You are right, and I agree with you, that we need to use our intellect and knowledge to deal with the impacts of these new technologies. But that means it's not enough to decline to prognosticate about the impact a new technology may have on society. I think we have an obligation to learn from our history and, as best we can, avoid "predictable" surprises that are likely to harm our society. This is an area of responsibility that those of us who bring technological innovations into the world have long skirted.
You, like JVL, are confusing technology with technology's societal implications. It's the societal implications of AI that JVL is concerned with; but he wrongly assumes that his fears about the implications for society of this latest class of AI applications can be meaningfully compared with an expert's encapsulation of one of the technologies--large language models--that undergirds those applications. In doing this, he misreads a description of the technology as a "thesis" about its implications for society. This lack of technical expertise leads JVL to paraphrase the expert's encapsulation of the technology in a way that trivializes it (simple guesswork); then, rightly seeing generative AI's NON-trivial implications for society, he worries that the expert isn't alarmed where he should be. The point I'd hoped to make was that only by avoiding this category mistake can we take a sober look at what the technology can and, crucially, cannot do; only then can we achieve a clear-eyed view of which sectors of society will be disrupted. Only experts can help us avoid the fear that ignorance engenders; we'll need experts in economics, to be sure, but we'll also need to recognize technical expertise in AI for what it is.
I always say, "If your job can be replaced by ChatGPT (or any other LLM), it was a job no human should have bothered doing in the first place."