Vinay Prasad's first paragraph in his Substack article calling for de-emphasis of science in medical education:
"Medical education has got it backward. We're front-loading biology and anatomy, then backfilling with the principles of evidence-based medicine. We need to flip this equation, placing evidence-based medicine at the fore, right from the get-go. Patients don't care about the biological mechanisms; they care about what helps them get better, regardless of the underlying science. That's the crux of our argument today."
https://vinayprasadmdmph.substack.com/p/rethinking-medical-education-evidence
Whether he used ChatGPT to develop this text or not is, ultimately, immaterial.
Vinay Prasad presented this text as his material in his Substack. And should Vinay Prasad feel tempted to remove this article as a "mistake", he should understand that the Internet never forgets, and that his article has already been committed to at least one Internet archive.
https://archive.md/yIQ2V
Whether the text is the product of ChatGPT or Vinay Prasad's own creative energies is immaterial. Vinay Prasad presented the text and the ideas therein as his. The moment he published that article he by definition gave his full and unequivocal endorsement to those ideas, embracing those ideas as his own.
He cannot now disclaim ownership of them.
Whether the text was composed by Vinay Prasad or ChatGPT, my criticism of that text and the ideas therein stands.
https://substack.com/profile/42691921-peter-nayland-kust/note/c-16354330
Vinay Prasad either approves of science being foundational to medical education or he does not. ChatGPT assembling the particular text advocating the anti-science position is, ultimately, irrelevant.
I probably should have been more clear in my post. I don't think Prasad's argument was to use ChatGPT to write something to dupe his readers. I was being a bit sarcastic with my Spongebob reference.
Rather, I think he wanted to see how similar ChatGPT's constructed response would be, and he seems to suggest it was rather similar. Because he referenced an old article, I wanted to take a look and corroborate how similar his old writings, in his own words, were to the simulacrum he prompted ChatGPT to construct.
Given that they are rather similar, ChatGPT does a pretty good job, and the similarity also suggests that Prasad does hold these beliefs (or at least has written about them in the past). However, I find the Academic Medicine article to be a much clearer picture, and I would argue it reveals a perspective I am far more critical of.
To that, I think this paragraph from ChatGPT appears to be a good representation of Prasad's thoughts on the matter:
"Here's the rub: when we instill a deep-seated reverence for biology before introducing the principles of evidence-based medicine, we're setting up our future doctors for cognitive dissonance. It's harder to accept that a treatment doesn't work when it "should" according to biological principles. As a result, evidence that contradicts our understanding of biology is often met with skepticism, even outright rejection. This is a disservice to our patients, who ultimately care about outcomes, not biological plausibility."
I don't think he set out to "dupe" anyone, either.
But here's the thing: he presented the content as his. To come back later and say "I didn't write that" means he lied to his readers. The reason plagiarism is a mortal sin in academia is that it is at its core a lie.
If he's pawning a controversial opinion off on ChatGPT then we again come to the point where he lied.
Whether it is ethically appropriate to use ChatGPT as a writing tool is an open question, and I won't try to answer it here. However, it is never ethical to claim ownership of an idea one day and then deny it the next.
That's what Vinay Prasad did.
That's interesting to consider. Irrespective of whether the ideas are his or whether he reveals it was ChatGPT, the fact that readers weren't alerted to his use of ChatGPT can be seen as failing to credit someone else, almost the same as using a ghostwriter.
In academia they're apparently taking a lot of this seriously. Some places are outright banning any use of AI in constructing an article, while others seem to suggest that you must include AI as a coauthor if it has touched the research or article in any way.
In that regard, I can see why this would be a serious problem. I suppose this goes into the idea of whether people should credit ChatGPT in their own work whenever it is used. Otherwise, it can appear disingenuous to readers.
But then how would you consider the ethics of double blind RCTs? Or more so with psychological or other experiments which intentionally dupe the subject?
Say, something like the Milgram experiment. Ironically, I don't think it would've given us a truthful result if the subject had known the entire truth of the experiment (i.e. the test subject took part thinking he was assisting, unaware that he was himself the subject).
Or an experiment that engages a random stranger with an honest question, but within a scripted situation.
Two words resolve the matter in its entirety: "informed consent".
This is the basis of the Declaration of Helsinki regarding medical research.
https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
We do not have the right to surreptitiously turn our fellow human beings into lab rats.
This is the unalterable order of things.
Oh I agree entirely with informed consent for medical tests or anything that could cause physical harm. But even the Helsinki doc has a section for placebo:
"and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention."
which basically implies they may also be tricked, provided that the results be made available later per ¶36 (i.e. the trickery revealed, and whether the test subject took a placebo or not), with the only condition being to avoid "risks of serious or irreversible harm"
If someone receives the placebo in an RCT, or does not receive the treatment under investigation, he has, by virtue of agreeing to participate in the trial, agreed to accept those outcomes.
Bear in mind the complete text of Article 33:
" The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best proven intervention(s), except in the following circumstances:
Where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or
Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention
and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention.
Extreme care must be taken to avoid abuse of this option."
Within this requirement, the use of a placebo/no-intervention protocol is predicated on the assessment that such will not place the trial participant in extraordinary risk of death or severe illness as a result. For example, you would not test a new monkeypox vaccine against the Congo Basin clade in an RCT where the alternative was no vaccine. With a ~10% case fatality rate, that would be an unacceptable level of risk.
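That risk calculus can be sketched as a back-of-the-envelope expected-harm calculation. The arm size and attack rate below are invented for illustration; only the ~10% case fatality rate comes from the example above:

```python
# Illustrative only: expected deaths in a no-vaccine control arm.
# The arm size and attack rate are hypothetical numbers; the 10% case
# fatality rate is the figure cited for the Congo Basin clade.
def expected_deaths(n_participants, attack_rate, case_fatality_rate):
    """Expected deaths among unprotected control-arm participants."""
    return n_participants * attack_rate * case_fatality_rate

# 500 control-arm participants, a 5% chance of infection during follow-up,
# and a 10% case fatality rate:
risk = expected_deaths(500, 0.05, 0.10)
print(risk)  # 2.5 expected deaths -- far beyond "no serious or irreversible harm"
```

Any plausible set of inputs for such a pathogen produces expected deaths well above what "no additional risks of serious or irreversible harm" could tolerate, which is exactly why a no-vaccine arm would be ruled out.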
In all cases, however, there is no "trickery" as the patient is fully informed that they may receive the placebo instead of the therapeutic.
It's only trickery if they consent to participate in the RCT on the assurance they will receive the therapeutic rather than the placebo. Of course, if such assurances are made then the RCT itself is already tainted.
I am utterly against relying on RCTs and EBM as the only guiding principle, but some of our basic sciences are also corrupt, e.g. virology. Ultimately I think medicine is a personal doctor-patient relationship and discussion, which must be sacrosanct and never controlled by protocols, governments, and pharma.
What's interesting is that Prasad's argument would go against individualized medicine. Arguing that something works for the whole population based on RCTs tells you nothing of how the individual patient sitting in front of a physician would respond to a treatment. His perspective is just as likely to fall into a pit of cognitive dissonance: if RCTs show a benefit with a drug and the patient being prescribed it doesn't experience any benefit, what do you do then? One would assume to investigate, and that's exactly what scientists would be doing, so I'm not sure why he's trying to separate the two. This seems more like doctors believing medicine isn't rooted in science, when medicine is by its very nature an interdisciplinary field.
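The average-versus-individual point can be made concrete with a toy example. All numbers here are invented: a trial can report a solid average benefit even when a sizable subgroup gets nothing from the drug.

```python
# Hypothetical trial outcomes: 70 patients improve by 3 points on some
# symptom scale, 30 patients see no change at all.
responders = [3.0] * 70
non_responders = [0.0] * 30

effects = responders + non_responders
average_effect = sum(effects) / len(effects)

print(average_effect)  # 2.1 -- the trial reports "the drug works"
# Yet for 30 of the 100 patients, the trial's average effect says
# nothing about their actual response.
```

The RCT's headline number is true and useless at the same time for the non-responders, which is the gap that investigating mechanisms and individual factors is supposed to close.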
I agree with you. Also, is it really Prasad’s argument or is he just stirring the pot?
With respect to reorganizing medical education? I believe those are his thoughts. ChatGPT's response was completely constructed by ChatGPT (as in, Prasad did not do any editing afterwards), but the intention and content of ChatGPT's post were similar to his own thoughts.
Compare the Academic Medicine article I posted above, which Prasad references as an example of his thoughts on the matter: they are very similar, and in fact I would argue the Academic Medicine article makes me more critical of Prasad's position, since it seems more in line with treating RCTs and EBM as settled science.
Doctors Are Obedient By Nature.
Therein Lies The Problem.
> Given that the prompt provided by Prasad was AI-generated
Unless I misread, I think the prompt was from Timothee Olivier and it was almost an essay by itself with careful instructions on how to construct the essay.
In fact, I'm not sure calling it totally AI-generated is accurate, since the long prompt explicitly instructs ChatGPT to basically summarize and emulate Vinay Prasad's older work.
He includes the paragraph that he provided ChatGPT. It is rather detailed so it's interesting that ChatGPT was able to capture the "spirit" of Prasad so to speak.
As I told Peter above, the initial part of my post was more joking about being bamboozled, with the second half looking at the Academic Medicine article he references in particular, which appears to be similar to what he stated above. In fact, I would argue that the ideas in the Academic Medicine article make this position appear worse, as the examples he provides have serious issues.
It speaks more of "RCTs say these work, so we should use them". It seems to be dismissive of scientific investigation as being centered around hypothetical models that don't provide any benefit to humans. I just find the whole idea of not being curious as to why a drug works to be a strange position for doctors to take, and his argument that a statin may not have the same mechanism of action as previously thought and may be dependent upon various factors is the sort of nuanced thinking doctors should engage in.
Gotcha. Coming back to the topic, and to answer your earlier question: I think both sides are important and work together in a feedback loop. It reminds me of the infamous (but logical and epistemological) Rumsfeld Matrix:
"as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
and I think transitioning between unknown unknowns -> known unknowns -> known knowns and improving the science requires constant feedback with the empirical results, including anything unexpected, as well as discovering new unknowns in what was previously thought known. The "hypothetical models that don't provide any benefit to humans" you mention might be the starting point at the bottom of the unknowns, something like how the discovery of Ivermectin came about: a curious Japanese microbiologist sampling a bunch of different soils looking for any potential medical use, without knowing at all what might be there.
Edit: ok, that came out discombobulated, but I don't know how else to better elaborate the refining or correcting of existing models, the discovery of new models and/or new variables (e.g. known unknowns), and how those variables get incorporated into models (becoming known knowns). Your discussion also makes me think it's possible to do more personalized controlled experimentation instead of population-wide RCTs, to isolate variables that apply to only some people and not others. But that whole chain of events would require noticing what works for someone and continuing from there instead of just letting it go. Over many years, a decade+, I've had to resort to doing this to myself to resolve my own health issues, as I got nowhere with the vast majority of mainstream (i.e. HMO, PPO insured) doctors and hospitals.
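That "personalized controlled experimentation" idea has a formal analogue in N-of-1 crossover trials, where a single patient alternates between treatment and placebo blocks and the two conditions are compared. A minimal sketch, with all scores and noise values invented for illustration:

```python
# Hypothetical N-of-1 crossover trial for a single patient.
# Six alternating blocks: treatment (T), placebo (P), repeated three times.
schedule = ["T", "P", "T", "P", "T", "P"]
noise = [0.3, -0.2, 0.1, 0.4, -0.3, 0.0]  # invented day-to-day variation

def symptom_score(condition, wobble):
    # Assume this patient's baseline symptom score is 7 (lower = better)
    # and the treatment lowers it by about 2 points.
    effect = 2.0 if condition == "T" else 0.0
    return 7.0 - effect + wobble

scores = [symptom_score(c, w) for c, w in zip(schedule, noise)]

treatment_mean = sum(s for s, c in zip(scores, schedule) if c == "T") / 3
placebo_mean = sum(s for s, c in zip(scores, schedule) if c == "P") / 3

# A consistent gap across repeated crossovers is evidence the treatment
# works for *this* patient, whatever a population-wide RCT says.
print(round(placebo_mean - treatment_mean, 2))  # 2.03
```

The repeated crossover is what does the controlling here: each patient serves as their own control, which is roughly the systematic version of the self-experimentation described above.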
When we test an hypothesis in a trial, we inevitably only end up testing how a subject performs in that test.
While it may help formulate a plan, we all know what happens to a plan at its first encounter with reality.
It also comes down to understanding why people respond differently. Modern medicine seems to emphasize some aspect of individualism in medicine, in that some genetic factors or other circumstances may influence how someone responds to a treatment. The whole thing going on with the COVID vaccines and other treatments should be a reminder of this fact, but a reliance on RCTs as a way of treating the whole sounds ridiculous.
This reliance on RCTs is but an elitist escape to a barricade that protects them in their ivory towers, allowing them to dismiss real-world science as they see fit.
Chat GPT creating a simulacrum is the part that will destroy us. Disconnect people from their creativity, control the narrative.
People will interact with AI for that medical care and doctors will be outta work like everyone else. Move along, nothing to see here.
I'm not sure if doctors will be out of their jobs, but I think if doctors are going to compare their profession to AI they really have to take a look at how much of medicine is streamlined rather than nuanced and focused on individual care.
Doctors are already being replaced by the suits who run things, with "Dr" PhD nurses who have neither the experience nor the training to do the jobs, such as running the ICU. Suits have no qualms about eliminating costs. Their job is to create profit for shareholders, not to get patients well.