HHS releasing reports based on non-existent studies adds to (detracts from?) this discussion: I’m all for faster and broader information, but finding “information” drawn by AI from made-up non-information scares the hell out of me. How, as a trained researcher, do you even check that (i.e., all the way down)?!
I know this will make me sound like a dirty commie, but there's a fundamental difference between technologies that offer creative disruption to an industry and LLMs. Beyond the fact that, on average, LLMs seem to serve as military-grade Dunning-Kruger machines, they have the effect of delivering 'creativity' to the wealthy while removing access to wealth by the creative class. This is a very bad thing that likely will only be solved by regulation and aggressive protection of copyrighted works being used to train models.
Something that stands out to me about nearly every AI conversation is that it starts with a) sure, it’s not good now and is creating real problems, before moving to b) in the future it will be fine.
Why? Why do people think that? Why is it given so much slack?
However, I do use some machine learning (AI) tools in Premiere Pro, and they are fine. Jon Lovett over at Pod Save America calls it a “block obliterator” when it comes to writing, and I think that is true.
The idea that it’s only ever used instead of creating, rather than as an aid to creating, is lame.