I work with AI daily (my job title has it in the name) and my fears largely mirror yours. I’d add to that the societal disruption caused by a huge shift in labor at a time when society is particularly poorly equipped to deal with that upheaval.
People already believe nonsense, and that's to say nothing of nonsense backed by real-time AI-generated video. Which is a shame, because it's a double-edged sword: that same power has almost infinite capacity to make our lives better.
Most knowledge workers and tradespeople won't have anything to worry about for the next 5-10 years, but after that it's anyone's guess. The world is about to change profoundly, and it isn't remotely ready for it.
Prepare for those pod bay doors to not open one day...
I am a product manager (see my Substack link above) and I am terrified of what AI will do to this field. I am lucky that I have about seven years left in my career, and I think I will be able to hang on. But if I were fresh out of school, the future would look bleak.
At least students are learning to lean on ChatGPT for their university assignments, since the future will be about how best to leverage the output of the bots, and dialog-box engineering.
No bueno, methinks.
It's people faking things to do evil that's the danger, not AI per se. If someone forges a signature, does it really matter whether they used AI to do it? AI makes it easier, but the responsibility still lies with the criminal or the irresponsible engineer or company.
Oh, no doubt the responsibility will always lie with the human at the wheel, but force multipliers make it easier for individual bad actors to do great harm.
Agreed, but we should avoid making laws that target "AI" specifically, as that will just give bad actors a reason to avoid the term. What is and is not AI will always be debatable. It's all just software tools that let people do good or bad.
There is most definitely a lot of thought to be put into how to legislate for an AI-enabled world. I don't know what the right answers are.
Until we have AGI, perhaps decades from now, I think it isn't that hard: it is mostly a matter of holding people responsible for their actions. AGI will bring a whole different set of problems, as we'll likely want to give the robots responsibility. Until then, people are in charge and therefore bear all responsibility. In short, laws should prevent people from saying, "Not me. The AI did it."