8 Comments
Apr 3, 2023 · edited Apr 3, 2023 · Liked by Modern Discontent

"At some point in the future we may reach a point where fake studies constructed by AI will then become cited by other AI and reported in AI-generated news outlets, never touching actual human hands."

I'd bet that's less than ten years down the road.

I'm actually more concerned about human-AI interaction. As people get more comfortable using AI assistants in all manner of tasks, and as these AI assistants output more convincing results more easily, fewer and fewer scientists will notice and/or account for those tools' limitations.

We already have this problem with existing tools. What proportion of scientists are even marginally competent in statistics or experimental design? Yet you see the latest methods spread like Omicron while the tried-and-true methods, when not bypassed, are often misapplied. (Yes, team statisticians can counter the issue, but they are mythical creatures in my neck of academia.) Cutting corners is ALREADY an accepted part of academic culture, and with AI help, those cut corners will take the form of even more malpractice.

This is made substantially worse if the logic paths aren't human-understandable or made available, as many current AI tools seem to be. You don't have to worry about evaluation if nobody can practically evaluate your work. And if you can't evaluate a science paper, isn't it just a religious text?

Lots of mouths are talking about "explainable AI" - which is great - but I doubt that will materialize fast enough to reverse the irresponsible adoption of AI help among scientists. We simply don't have a track record of measured and careful progress.

author

You raise a lot of good points.

I, for one, am one of those people REALLY bad at statistics! I think what goes on in science really isn't any different from what goes on in society: things that seem popular get picked up even when there are issues with the methodology or practicality of the experiment. It's like a TikTok trend, but for researchers.

There are serious concerns over how many corners will be cut. I can understand having an AI write your paper using information you provide, but assuming an AI will be able to accurately interpret the results is something else entirely. I think that was the main issue with some of the generated abstracts: the AI had the data but didn't know what the data meant or why certain terms were important, so it produced vague abstracts.

We'll have to see what comes of it. At the end of the day (and something I should have made more clear) authenticity is needed more now than ever. Being able to tell what you are reading came from an actual human without having to be told that will be greatly needed, but that's made far more difficult with an online-dependent world.


But the important thing is, you recognize that and you can improve. I'm no stats whiz, either, partly due to my brain but also due to errant training. Regarding the latter, it seems to be the consequence of the overall culture of the biological sciences I'm familiar with. It's one thing to expect grad students and early career researchers to self-study various topics (generally a good thing) and entirely another to de-emphasize foundational knowledge in a curriculum, so that when students do self-study, they miss crucial topics or don't know to study them in sufficient depth. And then what? You end up with generations of scientists who don't understand methods/theory, and (more importantly) who don't understand WHY they should understand those methods/theories.

I harp on training quite a bit because that may be the best hope for course correction before the AI takeover. Labels clearly stating AI's involvement in methods and writing would be a start, but I think it would soon lose meaning as AI involvement becomes undetectable with few consequences for lying. This type of dishonesty is already going on with false author inclusion/exclusion and rigged peer review**, among other things. A well-trained scientist with integrity, however, would be able to interrogate the authors for justification.

But... that takes time and effort. And, far more than today, there will need to be a class of scientist detectives in each field, checking and double checking datasets, questioning authors, and otherwise not working on their own research programs. I'm at a loss for how to incentivize that sort of work outside of making moral arguments. If I've learned anything in academia, though, it's that moral reasoning is to many scientists as garlic is to vampires.

I hope you're right that convincing AI data analysis and interpretation is far down the road, though. I'm quite pessimistic about that at the moment!

**Not to say that all author lists and peer reviewers are dishonest, but rather that some are, and it's impossible to tell which. Just last week a colleague was asked by a team of heavy hitters for a "favorable review ... to help [X's] career" in a prestigious journal. The colleague declined, but others apparently didn't, and the shoddy paper got published.


Thanks for this very interesting post. I could tell the first study was AI, but that's because I've read enough studies to recognize the odd phrasing. If you hadn't had the second one to compare it to, maybe I wouldn't have known it was AI had I seen it on the internet. I still wouldn't have trusted the study; I could tell something was off, but I would have assumed it was poor science, not AI. I'm confident someone more of a layperson than myself would not be able to tell.

I sure hope more people wake up and just don't take any more pharmaceutical products. We'll need to really rely on what we've learned across our lives to guide us the rest of the way, because we're getting closer and closer to the time when we can't trust any media. At least many of us have known this for a good long time. It's the young people growing up in this environment who will have more trouble discerning.

I talked to someone today who fell outside on a walk last week. After she had been on the ground for a couple of minutes, the police called her on her Apple Watch because the watch had detected the fast descent! Geez. She's in her 60s.

author

Oh man, in hindsight I realized I should have just posted the generated abstract and then had a poll asking whether it was generated or original! Although the poll results can be seen, so that may influence responses. Or maybe that's me coping for not realizing it would have been a better approach and would have better modeled the study! 😅

I will differ from some others and say that pharmaceuticals may have a place; everything has some effect on the body, both therapeutic and toxic. My general issue is that most people will take pharmaceuticals for what they do (remove pain, treat cancer, etc.) without understanding the scope of the drug's pharmacology. As long as people are informed consumers, then I think being able to understand and choose the actions they want is what's important.

Keep in mind that most plants argued to have therapeutic properties had to go through generations of testing via social Darwinism: those who ate the toxic plants ended up dying, and we'd hope someone was around to note NOT to eat the plant that killed Billy. I think we tend to forget about all of our ancestors who died from mistakenly eating the wrong things so that we could learn to stay away from X, or that Y is actually beneficial.

But you are right about younger generations (I guess that includes me?). I think we are far too reliant on having things told to us rather than spending time figuring things out on our own and struggling through it. Maybe not through eating toxic plants, but at least by spending time researching information that has already come out and accumulating it to use in a practical manner.

That Apple Watch... wow, it's super scary to think that information can be passed along so readily and likely without our consent!


Yeah, 'cuz as I was reading I was getting ready to answer the poll, but there wasn't one. 😄 I hadn't thought about it that way, that it's taken many generations to learn about the healing properties of plants.

I actually do agree with you that some pharmaceuticals have their place; I was being a little extreme in the way I phrased that. But more than ever I recognize that I can't be certain about the studies that pharmaceuticals are based on, so the recommendations about those pharmaceuticals are suspect. I guess the key is having a doctor you trust who has experience with them. But I did work out in the garden for 5 hours yesterday and took an ibuprofen before going to bed so I could walk normally today and not be too sore! 😄 I had a friend who had poison oak really bad, and I offered her my heavy-duty cortisone cream, but she wouldn't take it. She's a homeopathist, and she suffered so much, I think a lot more than need be.

Take good care. 💕

author

It really wasn't until I read your comment that I stopped and realized I goofed by missing that! I may just do a separate exercise, just for fun, to see if people can recognize some of the other abstracts. Hopefully others didn't bother to check the article, to help keep things blinded! 👀

I think general reticence when it comes to pharmaceuticals is warranted, and many people have had very bad experiences with doctors and healthcare which adds to the issues of trust. I think a general issue is that there's a serious gap in knowledge all the way from research to prescription, and part of what I hope this Substack can do is to explain to people what these things are actually doing and how they work, and where some of the gaps are in knowledge.

I think doctors are far too quick to prescribe a medication to treat a symptom rather than explaining what is causing the symptoms and how the treatment can help. Take Ozempic, which is used for Type II diabetes but has the strange effect of aiding weight loss. That is, until you realize the receptors that Ozempic targets are found both in the pancreas (which helps produce insulin) and in our brain and CNS, where they signal the feeling of fullness. Because of that, the effect isn't as strange as one would think, but it also leaves open the question of the long-term risks of using the drug, given these effects.

Tim Pool would always make remarks about "talking to your doctor," but I always felt that comment was vacuous, because what exactly would you talk to your doctor about, and what conversation is the right one? Being a patient is essentially being a consumer, so it's imperative that people act as informed consumers for their own health and do the work to research information. I think people are too scared of how tedious it can be, but knowledge can really help, especially in these times.

I'm glad you got to spend time outside! I should hope to do so as well, but the weather has been wonky which is a shame!

Well, take care and hopefully the gardening doesn't put too much of a strain on you!


Ask what the synonyms for "artificial" are.

As taken from https://www.thesaurus.com/browse/artificial:

Synonyms for "artificial": unreal, bogus, counterfeit, ersatz, fabricated, factitious, faked, false, falsie, hyped-up, manufactured, mock, phony, plastic, sham, simulated, specious, spurious, substitute, synthetic, unnatural
