I am not a physicist. I edit and translate papers for physicists. They often talk about models that work for one or two particles but fail for more complex systems.
The ChatGPT AI program has no actual knowledge of anything. It has no more intelligence than an old-fashioned library card catalog. There are some AI programs that attempt to simulate knowledge and apply logic to problems, but this one does not. At least, it did not when I read about it a few years ago.
Thank you for the context of your comment. I am curious: is there a way to prevent bias in scientific inquiry outside mathematics? My knowledge about AI is very limited.