Discussion about this post

Eleutherios

"At some point in the future we may reach a point where fake studies constructed by AI will then become cited by other AI and reported in AI-generated news outlets, never touching actual human hands."

I'd bet that's less than ten years down the road.

I'm actually more concerned about human-AI interaction. As people get more comfortable using AI assistants in all manner of tasks, and as these AI assistants output more convincing results more easily, fewer and fewer scientists will notice and/or account for those tools' limitations.

We already have this problem with existing tools. What proportion of scientists are even marginally competent in statistics or experimental design? Yet you see the latest methods spread like Omicron while the tried-and-true methods, when not bypassed, are often misapplied. (Yes, team statisticians can counter the issue, but they are mythical creatures in my neck of academia.) Cutting corners is ALREADY an accepted part of academic culture, and with AI help, those cut corners will take the form of even more malpractice.

This is made substantially worse if the logic paths aren't human-understandable or made available, as many current AI tools seem to be. You don't have to worry about evaluation if nobody can practically evaluate your work. And if you can't evaluate a science paper, isn't it just a religious text?

Lots of mouths are talking about "explainable AI" - which is great - but I doubt that will materialize fast enough to reverse the irresponsible adoption of AI help among scientists. We simply don't have a track record of measured and careful progress.

Kirsten

Thanks for this very interesting post. I could tell the first study was AI, but only because I've read enough studies to recognize the odd phrasing. If you hadn't included the second one for comparison, I might not have known it was AI had I seen it on the internet. Still, I wouldn't have trusted the study; I could tell something was off, but I would have assumed it was poor science, not AI. I'm confident someone more of a layperson than myself would not be able to tell.

I sure hope more people wake up and just don't take any more pharmaceutical products. We'll need to really rely on what we've learned across our lives to guide us the rest of the way, because we're getting closer and closer to the time when we can't trust any media. At least many of us have known this for a good long time. It's the young people growing up in this environment who will have more trouble discerning.

I talked to someone today who fell outside on a walk last week. After she had been on the ground for a couple of minutes, the police called her through her Apple Watch because the watch had detected the sudden fall! Geez. She's in her 60s.

6 more comments...
