This is a good look at AI “intelligence” from The Atlantic, which I found very helpful in understanding my unease about how society is jumping on the AI bandwagon. It will change the way we think as humans. I’m not averse to technological advances per se, but this will dumb us down even further than social media has in terms of deep, thoughtful thinking and reacting to the world around us.
…Large language models “are not emotionally intelligent or ‘smart’ in any meaningful or recognizably human sense of the word,” @Tyler_A_Harper writes. Understanding this is essential to avoiding AI’s most corrosive effects: https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/?gift=cqZZSctiMQc7w-JMWtVkuOku3y9aM5d95WdfYJKKl-k
What will happen with scripts is what already happens with AI/ML that evaluates X-ray films: the model produces a result that a human then uses as a baseline.
With respect to films, the AI/ML looks at pixel values and, based on its training, assigns probabilities that certain regions match what it was trained on, such as cancerous tissue. This helps the radiologist because he or she doesn't have to evaluate the entire film, only the flagged regions.
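To make that workflow concrete, here is a minimal sketch of region flagging. It is an illustration only: `score_patch` is a hypothetical stand-in for a trained classifier, and the patch size and threshold are made-up parameters, not values from any real radiology system.

```python
# Minimal sketch: slide over a film in patches, score each patch with a
# classifier, and surface only high-probability regions for human review.
import numpy as np

PATCH = 32          # patch size in pixels (illustrative value)
THRESHOLD = 0.8     # flag regions scored above this probability (illustrative)

def score_patch(patch: np.ndarray) -> float:
    """Hypothetical stand-in for a trained classifier: returns the
    probability that this region matches training examples of cancer.
    A real system would call a trained model here."""
    return float(patch.mean() / 255.0)  # placeholder heuristic only

def flag_regions(film: np.ndarray) -> list[tuple[int, int, float]]:
    """Return (y, x, probability) for flagged patches, so the radiologist
    reviews those regions instead of the entire film."""
    flagged = []
    h, w = film.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            p = score_patch(film[y:y + PATCH, x:x + PATCH])
            if p >= THRESHOLD:
                flagged.append((y, x, p))
    return flagged

film = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # fake film
for y, x, p in flag_regions(film):
    print(f"review region at ({y},{x}): p={p:.2f}")
```

The point of the sketch is the division of labor: the model narrows the search space, and the human still makes the actual judgment on each flagged region.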
With respect to scripts, the AI will produce something that a human will have to rework before it is usable. I doubt that AI will produce much that doesn't demand rework; it will simply produce very rough drafts.
I think AI might be appropriate for quickly editing and revising a first draft, but I wouldn't ask it to write the first draft itself. You want to work from an organic product that has the actual tone and spirit of the author, and ChatGPT is very limited in tone and spirit. It's a waste to set human authors to the task of refining its output; you're refining garbage. If anything, what you should be cutting down on is editors, not writers. But instead you're cutting writers and hiring editors, and I don't understand why that would be done.
A closer analogy would be this: instead of taking X-rays, you ask an AI to generate an image of what the cancer should look like, and then a human is set to the task of finding the cancer in the AI-generated image completely manually, scanning the whole fake image thoroughly and laboriously.
If the AI is editor instead of author, though, the task much more resembles the example you gave: the AI is just finding problem areas and bringing them to your attention, just as it does when helping the doctor find problem areas. But imagining the AI as author instead of assistant is nonsense. Honestly, I think AIs are best at any task where they're not actually the author of anything. Authoring is perhaps the task they are worst at.
It also sometimes makes baffling errors that no human being would make, errors you might not even catch if you lack knowledge of the discipline it's randomly grabbing signs from. Its output may sound plausible to a naive reader, who can easily miss what is wrong.
What you're describing is essentially a grammar checker. AI has no ability to improve because it has no understanding; all it can do is minimize the error between its predictions and its training data.
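For what "minimizing the error between predictions and training data" literally means, here is a toy sketch: gradient descent fitting a line to noisy points. The data, learning rate, and step count are all made up for illustration; the point is that the procedure only nudges parameters to shrink a loss number, with no understanding of the data anywhere in the loop.

```python
# Toy illustration of error minimization: gradient descent on a
# squared-error loss for a one-parameter-pair linear model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # fake "training data"

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b
    err = pred - y                  # gap between prediction and data
    loss = (err ** 2).mean()        # the only thing being optimized
    w -= lr * 2 * (err * x).mean()  # follow the loss gradient downhill
    b -= lr * 2 * err.mean()

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.4f}")
```

Everything an LLM does during training is this same loop at enormous scale: adjust parameters, remeasure the error, repeat.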
AI can identify pixel sets that have a higher probability of being cancer only because they match training data of known cancers. That saves the radiologist time.