The Hypocrisy That Lies in Bad Science Journalism
The general problem of media outlets selectively choosing which reporting to fact-check and which studies to overlook.
For the past few weeks, I’ve highlighted instances in which the press has misconstrued new studies with misleading, clickbait headlines.
In particular, I criticized mainstream reports of new research linking the sugar substitute xylitol, fish oil supplementation, and alleged niacin-related metabolites to major adverse cardiovascular events (MACE).
While the mainstream press ran with the alleged findings from these studies, I pointed out critical issues in their methodology, including the fact that these sorts of studies never reported on participants’ actual dietary intake of the compounds in question. Even more egregious was the fact that the authors of the xylitol paper themselves noted that any elevated xylitol levels among participants had to come from endogenous production, given xylitol’s very short half-life and the fact that blood was collected while participants were fasting.
This feeds my ever-growing cynicism about the media’s ability to report accurately on studies, which is ironic given that many of these outlets have taken it upon themselves to be the arbiters of fact-checking.
But there’s something even more insidious in science journalism. Not only are we dealing with an institution that can’t be bothered to investigate the studies it reports on, but there’s also an inherently reactionary streak in which due diligence only occurs when the reporting comes from specific media outlets.
In the past week you may have come across some headlines suggesting that “fake meat” was linked to heart disease and death. An example of such headlines can be seen in an article from the Daily Mail below:
As well as one from the New York Post, which appears to have since been edited. Fortunately, you can’t change the URL or the title shown in search engines, so the original title can still be found, as shown below:
Looking at these titles, the first thing to notice is the emphasis on “vegan fake meats”. Why focus on this phrase in particular?
Part of this may stem from the fact that growing criticism of meat-based products has fueled a boom in plant-based simulacra, coinciding with a rising fear that our previous way of eating will be displaced by allegedly “more sustainable” alternatives.
In that regard, any study that suggests a health risk from consuming “vegan fake meats” can easily be worked into a narrative criticizing the push towards plant-based dieting.
However, the study referred to in these articles1 did not focus on fake meats; rather, it examined the risk of cardiovascular disease from plant-sourced ultra-processed foods (UPFs) versus plant-sourced non-UPFs. The researchers also examined overall UPF consumption, including animal-sourced products, in relation to cardiovascular disease risk.
Let me be clear that this study has a lot of issues. Note that this is one of many studies that derived its study population from the UK Biobank, a database of hundreds of thousands of participants with demographic and health-related data, intended to examine long-term disease risk in relation to genetic and environmental circumstances.
Although this allows for a very large study population, the use of the UK Biobank runs into the issue of quantity over quality. Instead of providing robust data, many studies drawn from the UK Biobank rely on very limited baseline reports which are then extrapolated to correlate with future health outcomes.
In this case, participants included people who completed a 24-hour food recall survey at least twice over the course of roughly three years, with apparently four opportunities to do so during that period:
Dietary intakes were assessed using a validated web-based, self-administered questionnaire designed to record the consumption of over 200 common food and beverage items in the previous 24 h. This 24-h recall was introduced towards the end of the recruitment period (2009–2010). All participants with a known email address were invited to complete the questionnaire online on four separate occasions between 2011 and 2012.
Remember when people (including myself) reported on the intermittent fasting study and its association with cardiovascular disease a few months ago? Yeah, it seems like this methodology is extremely common across many studies...
One of the biggest issues with this methodology is that it makes the egregious assumption that people’s dietary and lifestyle habits are completely static. Not only do two days out of four years hardly constitute a good measure of one’s typical diet, they also provide no meaningful indication of what one’s diet will look like over the following years.
These things are likely to change so often that the only way to capture an accurate picture of someone’s diet would be to conduct routine 24-hour dietary recalls. In this case, not only did the dietary recall occur fewer than a handful of times during the recruitment period, it also never occurred during the following years. In fact, in many of these studies the individual’s actual diet at the time of a cardiovascular-related incident is generally not known.
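To make this concrete, here is a minimal simulation sketch of my own (with made-up numbers, not values from the study) illustrating why averaging just two single-day recalls is a noisy stand-in for someone’s habitual intake:

```python
# A toy sketch, not the study's actual method: how well does the average of a
# few 24-hour recalls track a person's true habitual intake? All parameter
# values below (means, standard deviations) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_people = 100_000
# Hypothetical "true" habitual daily UPF servings, varying between people
true_habitual = rng.normal(loc=4.0, scale=1.5, size=n_people)
day_to_day_sd = 2.5  # assumed within-person, day-to-day variation

def recall_estimate(n_days: int) -> np.ndarray:
    """Estimate each person's intake as the mean of n_days single-day recalls."""
    days = true_habitual[:, None] + rng.normal(0.0, day_to_day_sd, size=(n_people, n_days))
    return days.mean(axis=1)

for n_days in (2, 4, 30):
    est = recall_estimate(n_days)
    r = np.corrcoef(true_habitual, est)[0, 1]
    print(f"{n_days:>2} recall days: correlation with habitual intake ~ {r:.2f}")
```

Under these assumed variances, two recall days correlate only moderately with the underlying habitual intake, and the estimate says nothing at all about how that intake drifts over the following years; it takes many more recall days, repeated over time, before the measure becomes a reliable proxy.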
In general, these sorts of studies that utilize a very large population all but amount to correlative fishing expeditions in which researchers try to squeeze whatever they can out of the data provided, even though it’s likely to be of poor quality and lacking anything meaningful. It’s one of the reasons I criticized the work from Hazen’s team, which appears to use metabolomics to fish for any biomarker or metabolite that can be linked to an increased risk of MACE.
Crunch the numbers enough and search through enough metabolites and you’re sure to find something that can be made to correlate and lead to scary media headlines and hysteria.
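To illustrate the multiple-comparisons problem, here is a toy sketch of my own (not the actual analysis from Hazen’s group or anyone else) showing how screening a large panel of metabolites against an outcome produces nominally “significant” hits even when, by construction, nothing is related to anything:

```python
# A toy sketch of a metabolomics "fishing expedition": both the outcome and
# the metabolite panel are pure random noise, yet uncorrected testing still
# flags dozens of "significant" associations. Panel and sample sizes are
# arbitrary assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_participants = 2_000
n_metabolites = 1_000  # hypothetical panel size

outcome = rng.normal(size=n_participants)                       # noise standing in for a risk score
metabolites = rng.normal(size=(n_metabolites, n_participants))  # unrelated to the outcome by design

p_values = np.array([stats.pearsonr(m, outcome)[1] for m in metabolites])
hits = np.sum(p_values < 0.05)
print(f"nominally 'significant' at p < 0.05: {hits} of {n_metabolites}")
# Expect roughly 5% (~50) false positives without any multiple-comparison correction.
```

Roughly five percent of purely random metabolites come out “significant” without correction, and any one of them could, in principle, be dressed up as a headline.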
But as it relates to “vegan fake meat”, note that the study uses a catch-all category for ultra-processed foods. It therefore shouldn’t come as a surprise that most ultra-processed foods are also likely to be vegan, including heavily processed breads, confections, baked goods, and even potato chips. For reference, this is the list that the researchers used to categorize foods:
So it’s obvious that far more than just “vegan fake meat” is being examined here, which raises the question of why outlets would focus on this phrase if not for clickbait or to feed the narrative surrounding the push for more plant-based eating.
This wouldn’t really be a surprise, given that the Daily Mail is widely argued not to be a very credible source. In the case of the New York Post, it’s possible that its more conservative, anti-globalist reader base2 may be keener on a title that emphasizes “fake vegan meat” rather than ultra-processed foods as a whole. In essence, the more conservative/right-leaning character of these outlets may push them towards a more biased, skewed headline that doesn’t tell the full story.
On its own this would be another example of bad science journalism in which a study is oversimplified and given a dose of bias possibly based upon the leanings of the specific outlet. But that’s not the only thing that’s occurring, and it makes for an interesting case of media biases and hypocrisy.
Because within the following days, several so-called “left-leaning” outlets came out with criticisms of these headlines, with Salon and Scientific American publishing their own rebuttals such as the following:
These articles do offer a more nuanced position, noting, for example, that “vegan fake meat” wasn’t the focus of the study, and they raise a few minor criticisms of the research.
Nonetheless, why did these outlets/journalists feel the need to offer a fact-check for this study in particular? Why was it important to point out faults in the interpretation of this new study?
One can only surmise that the problem wasn’t the misinterpretation of the study itself, but the outlets putting out the information. In other words, it’s not the wrong message that’s to blame, but the messenger. Because the Daily Mail and the New York Post lean more to the right, other outlets may have taken to critiquing these reports as a means of “fact-checking” their reporting.
And here lies one of the biggest problems in science journalism. For all intents and purposes any study is open to scrutiny. We would hope that journalists offer objective, unbiased, and analytical perspectives on any study that they report on. News outlets are one of the main ways in which the public comes across science, and so we would hope that this would encourage more informative and educational dissemination of science.
But that tends not to be the case. Instead, it appears that outlets are willing to put out articles that either agree with or subvert some established narrative. They seem more prone to fact-checking misinformation from alleged ideological enemies while accepting misinformation from people aligned with their own ideas.
Worse yet, they seem to forego any effort to analyze studies so long as other outlets are doing the same. It’s the fallacy of plurality: just because many outlets report on a study one way doesn’t mean their reporting is accurate, and just because many outlets report incorrectly doesn’t mean you aren’t culpable for any misinformation you report, or absolved of the responsibility to offer corrections.
Again, why was it so necessary to fact-check this study in particular? And while Salon offered a critique of the “vegan fake meat” headlines, it also published an article going along with the xylitol narrative3:
Of course, we should bear in mind that these articles were written by different journalists, and issues in one journalist’s reporting aren’t the responsibility of another.
But once again, that isn’t my main problem. The problem is the inconsistency in criticizing any study’s pitfalls. It’s the inconsistency in being objective when reviewing studies. It’s the inconsistency in taking some studies at face value while doing due diligence to raise issues with others. It’s the fact that science journalists seem keener on reacting to the dissemination of misinformation from alleged ideological opponents while ignoring misinformation from their own camp. It’s more about influencing the public’s perception than it is about providing them with accurate information.
And although these outlets point out that there’s more to the study than “vegan fake meat”, they still failed to criticize the collection of dietary recall surveys and how unlikely such limited surveys are to predict future cardiovascular outcomes. Again, something as ever-changing as one’s diet would require routine surveys to get a reliable picture of it.
Most science journalism is bad science journalism
It’s been something of a mission of mine to point out how terrible the media is at covering science. Reporters don’t seem to read the studies they cover, relying instead on secondary or tertiary sources for much of their reporting. And this problem extends to every kind of outlet: conservative, progressive, mainstream, and independent alike. We are inundated not only with bad science but also with bad journalists and reporters who span the full range of political leanings.
This “vegan fake meat” episode is an interesting one to highlight, not just because of the bad study itself but because of the way some outlets react to other people’s reports and only feel the need to fact-check after journalists report incorrectly on a study. It was the parochial, biased spin that some outlets put on an already flawed study that spurred others to react in kind with fact-checking.
From what I can see, most media outlets do a poor job of covering any study, and that leaves us with a serious deficit of genuinely good, objective science coverage.
As the public, shouldn’t we expect consistent fact-checking of all reporting, irrespective of the political bias or subjective nature of that reporting?
Where’s the fact-checking of the xylitol study, which was reported on extensively by outlets that omitted the researchers’ own acknowledgment that any elevated xylitol levels had to be derived from endogenous production rather than dietary consumption?
Instead of fact-checking one another, journalists would do well to fact-check the very thing they set out to report on: the studies themselves.
And until we demand unbiased, objective science journalism that doesn’t default to partisan, clickbait headlines, we will continue to get the same shoddy work we have become so accustomed to.
Ask for better, ask for objectivity, and ask for less bias in science journalism.
If you enjoyed this post and other works please consider supporting me through a paid Substack subscription or through my Ko-fi. Any bit helps, and it encourages independent creators and journalists such as myself to provide work outside of the mainstream narrative.
Fernanda Rauber, Maria Laura da Costa Louzada, Kiara Chang, Inge Huybrechts, Marc J. Gunter, Carlos Augusto Monteiro, Eszter P. Vamos, Renata Bertazzi Levy. Implications of food ultra-processing on cardiovascular risk considering plant origin foods: an analysis of the UK Biobank cohort, The Lancet Regional Health - Europe, 2024, 100948, ISSN 2666-7762, https://doi.org/10.1016/j.lanepe.2024.100948
Note that I’m not actually using this as a personal point of contention. This is merely me pointing out the juxtaposition between one group of reporters and their possible political leanings versus the political leanings of those who see fit to fact-check their reporting.
It should be noted that the New York Post also ran an article on the xylitol study. This again affirms the idea that outlets, irrespective of political leanings and biases, are likely to report poorly on studies.