
Thank you for a great post. This tenacity and attention to detail are why I read all your posts (and Brian's).

I totally agree on going beyond the abstract. I find a significant number of abstracts to be purposely misleading. This might be because of the antivax nature of my interests, as studies that objectively demonstrate anti-covid-vax results might state a pro-vax conclusion that allows them to be published.

I usually read the title, abstract, and figures, and usually the whole article. If the article mentions a vaccinated vs. unvaccinated control group, my interest is piqued and I always look at all the data tables.

There were a couple of articles that contained data allowing some conclusions unintended by the authors, and such things are always newsworthy.

I also often struggle with graphs or figures that use abbreviations I cannot understand, or that do not explain what they are actually displaying. I am glad that I am not alone; I thought I was.

I do suffer from attention deficit and am constantly distracted by people, which is terrible for attentive article reading, so I struggle in this department, but I try to at least make sure I understood what the article is saying. Sometimes I feel that I am being intentionally confused.

This is where I am grateful to people who dig extra deeply into articles, like you do. It is very refreshing.

The M&M section is something that I usually ignore due to not having lab knowledge.

Thank you for an amazing post


I should also include that an abbreviation tends to be spelled out at least once in a study before it becomes the dominant form used, so sometimes it helps to Ctrl+F the abbreviation to see where it was first mentioned.


I think many people have to keep in mind the fact that studies (and really science as a whole) are moving more toward trying to seem catchy and appealing to the public than toward being steadfast in producing proper results and evidence. As the quote I included notes, most people will read the title, abstract, then conclusion/discussion and think they have a paper figured out, when there's likely a ton of information missed along with a ton of needed context.

Graphs and figures are easy to misinterpret based on scaling and manipulation tactics. One example is the UK data used to argue OAS, with a graph looking at the presence of anti-S and anti-N antibodies among the population. Combining those two trend lines showed a giant bump in anti-S antibodies post-vaccination alongside what appeared to be a stagnant anti-N line, leading people to assume this was evidence of OAS occurring. In reality, the anti-N line actually doubled, but the scaling was stretched by the anti-S curve, making it appear to flatline.
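To make that scaling point concrete, here is a minimal plotting sketch with made-up numbers (not the actual UK anti-S/anti-N data): a series that doubles can look flat when it shares a linear axis with a much larger series, and the doubling only becomes visible on a log scale or a separate axis.

```python
# Illustrative sketch only -- fabricated numbers, not the actual UK anti-S/anti-N data.
import numpy as np
import matplotlib.pyplot as plt

weeks = np.arange(0, 20)
anti_s = 5 + 90 / (1 + np.exp(-(weeks - 8)))   # large post-vaccination rise in anti-S prevalence (%)
anti_n = 4 * (1 + weeks / 19)                   # anti-N roughly doubles (4% -> 8%) over the same period

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Shared linear axis: the anti-S curve stretches the y-range, so the anti-N doubling looks flat.
ax1.plot(weeks, anti_s, label="anti-S")
ax1.plot(weeks, anti_n, label="anti-N")
ax1.set_title("Shared linear axis: anti-N looks flat")
ax1.set_xlabel("Week")
ax1.set_ylabel("Seroprevalence (%)")
ax1.legend()

# Log axis (or a secondary axis) makes the doubling of anti-N visible.
ax2.plot(weeks, anti_s, label="anti-S")
ax2.plot(weeks, anti_n, label="anti-N")
ax2.set_yscale("log")
ax2.set_title("Log axis: anti-N doubling is visible")
ax2.set_xlabel("Week")
ax2.set_ylabel("Seroprevalence (%, log scale)")
ax2.legend()

plt.tight_layout()
plt.show()
```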

Graphs are similar to the methods in that they may be indicative of different techniques, which means that some time needs to be spent figuring out how things are measured, so cross-referencing the figures against the methods may be needed, which then makes it even more technical.

I'm not sure if I have anything to add, but I do tend to wander a lot when reading a paper, since I may spend time focused on a specific section and need to walk to think about what I read. It's become really bad with podcasts and audiobooks, in which a 2- or 4-hour podcast may take a day or two since I keep pausing and having to think about what was said.

Well, anyways, I do appreciate the shoutout, and I hope it provides some broad information and perspective on the posts.


I appreciate learning more about the approach you use when reading journal articles. A few additional things I have learned over the years writing, reading, and reviewing hundreds of articles are:

1. Always look at articles in so-called prestigious journals with an extra dose of skepticism. We as a society have been brainwashed into thinking that if you publish in Nature or Science or one of the big medical journals, this is the best work possible. In reality, much of the time, the work is published there because the authors have the necessary connections to the invitation-only club that those journals represent. There are certainly some good articles published in those journals, but a lot of them are just fluff or have already been shown in less prestigious journals. This is similar to how a lot of people will say that if a story is written in the NYT or WaPo, it must certainly be true.

2. Word choice. Sometimes authors describe their results using adjectives that are inconsistent with the results. That can lead to making bad results look less bad or neutral results look better or worse depending upon the preference of the authors.

3. Statistics. There are so many statistical measures available to examine the results and determine statistical significance. This allows authors to game the system by choosing the ones that show the results that they are looking for. If there are gaps in the data, that also raises red flags.
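On that statistics point, here is a toy sketch (fabricated numbers, not taken from any particular paper) of how simply switching between two common tests can shift the reported p-value, which is one reason it is worth checking whether the chosen test actually fits the data.

```python
# Fabricated example: the same two samples analyzed with two common tests.
# With skewed data the reported p-values can differ quite a bit, and may even
# land on opposite sides of the 0.05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.lognormal(mean=0.0, sigma=1.0, size=25)   # skewed "control" measurements
treated = rng.lognormal(mean=0.6, sigma=1.0, size=25)   # skewed "treated" measurements

welch = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test (assumes rough normality)
mwu = stats.mannwhitneyu(treated, control)                   # Mann-Whitney U (rank-based)

print(f"Welch t-test:   p = {welch.pvalue:.3f}")
print(f"Mann-Whitney U: p = {mwu.pvalue:.3f}")
```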


Great post. I would add to this the importance of the supplementary indexes where sometimes key results are hidden w/o comment.

Also Conflict of Interest and Funding sections


Ah, well you see, just like the supplementary indexes, I seem to have forgotten the conflict of interest and funding sections!

They are important, but I would also argue that unless a study has an egregious pivot in tone, or questionable methodologies suggesting it was constructed to fail, the conflict of interest may be more of a validation of the above issues. That is to say, a poor study will appear like a poor study irrespective of the conflict of interest. The COI is likely an explanation for the poor study, but hopefully there are some biases and clues sprinkled into a study that help elucidate that something is afoot.

Or maybe I'm just giving a long explanation as to why I tend to forget to check those sections! The supplementary information has been something I've definitely been trying to get better at looking at and Brian tends to point out information within the supplementary material that provides additional context to studies.


Yes, conflicts of interest and funding. I started paying serious attention to this the past few years.


There is a lot. That’s why discussion is so very important.


Abstracts have severe character/word limits, which leads to very precise wording that can often be misleading (see Brian's example in his comment). I personally want to understand the materials/methods, and I want to compare the data to the purported conclusions. Ask yourself very basic questions when going through M&M: why are they using these cells, why are they measuring this signal, what is the timeframe being used for data collection, and what are they using as controls (does using these controls make sense ... what else could be going on)?

Also be aware that a lot of the statistical work is outsourced, so there can be some misunderstanding between researchers & statisticians.

But yes, don’t be afraid to give it a go - we all have to start somewhere! Honestly I’m amazed at some of the stupid things I’ve been asked to do in the lab, umm... this is not proving your point!!


It is very true that they have character limits, and I should have included a remark on that, so thank you for pointing that out!

It's becoming more important to understand why cell lines are used, and Joomi Kim's post certainly highlights how researchers may think they are studying one cell line and getting results when in reality they were looking at the completely wrong type of cells, essentially implicating a ton of cancer research.

I can see the outsourcing of statistical work. Some of the mathematical modeling done for these studies requires a level of math that I doubt researchers or doctors are familiar with.

And at the end of the day an earnest attempt is a good attempt. You never know your baseline threshold of being able to read and understand a study until you try it!


What I would say on "Methods" is that they are indispensable if human subjects are involved. There is often no other part of the paper that actually reveals "what the study is showing." So it's different from a purely molecular study, where the methods are usually just a list of specialist techniques, and often a black box that prevents identifying whatever the authors might have done to distort their own results.

"Title → Abstract → Introduction → Discussion → Results → Methods" well, oops, haha. I only start with abstract if it's an old paper and I want to rule out actually reading it. Otherwise straight to methods or supplemental, then discussion (as a better version of the abstract), and then Results. Hate introductions with my life.


I happen to be looking at an example, revisiting one of the stupid Wrammert OAS papers because I saw the data re-presented in a surprising way. This is a paper where I would say there is no better place for a first-time reader to start than the Methods; otherwise you are totally poisoned in interpreting anything else in the paper, because you don't realize how stupid the timepoints for the blood draws are: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4770855/

Abstract: "We undertook an in-depth study of the B cell response to the pandemic 2009 H1N1 vaccine over consecutive years" MISLEADING

Methods: "To monitor the vaccine response we obtained PBMCs and sera on the day of vaccination and at the time points with the peak of the plasmablast response (day 5, 6, or 7), and serological response (between day 14-21)." ACCURATE


I'll try to spend some time looking at the study! It does seem rather long, so it may be a bit intimidating for newbies; however, it does point out a really interesting inconsistency in methodology.


Oh no, I wouldn't recommend wasting any time reading the study beyond the methods. Blood draws too early. Study = garbage.


But what if I'm into that sort of abuse? Joking aside, it probably is worth reading on the basis of it being a poor study anyways just to provide a perspective on how NOT to organize a study.


That is true for clinical trials. With clinical trials I do tend to gravitate towards the methods, but if it's some cell line study or something of that sort, it does get technical. Having to understand flow cytometry or ELISA or HPLC-MS may require extensive knowledge.

The abstract is good if I don't have access to the paper and may need to find other "methods" of getting it. However, I have gotten to the point where I may just hit a snag and decide to go the alternative route directly.

And what do you mean about introductions? You don't like hearing about that thing that happened in December of 2019 over and over again? I did come across some study where the introduction started with something along the lines of, "as the sun began to rise on a distant city in Wuhan, China." I'm like is this a paper or is someone trying to write a script for a Disney+ series?


The retracted Efimenko paper asserts standard tropes I've seen many times at the FLCCC: "We are really concerned about this problem because the patients may start taking or demanding this medication from their physicians, which can potentially be harmful." Nonsense. Billions of doses of the drug have been taken, with a safety profile better than aspirin.

"This misrepresentation of the study may lead to a huge public health problem, since Ivermectin is a medication that is not FDA approved for COVID treatment, and currently has proven to be ineffective in clinical trials, which are truly the gold standard to evaluate the efficacy of a medication." Again, nonsense. This is all part of the effort to scare practitioners and patients, to centralize the control of medical practice with federal authorities. In a pandemic, a legitimate doctor uses their experience and training to figure out treatments. Observational trials have a roughly equivalent value as RCTs under the circumstances, and studies have shown the results tend to be in the same direction as RCTs. This is Pharma propaganda.


My inclusion of that retraction wasn't intended to be based on the merit of said retraction. I generally stay away from asserting the effectiveness of Ivermectin myself, while arguing that there don't appear to be any standout safety concerns and that doctors should be allowed to prescribe what they find best without having regulatory agencies step in.

The main reason I included it was that, at the time, the only part of that study available was the abstract. No full paper, no results section, no discussion, so all that people were going off of was the abstract itself, and that's where my criticisms lie. We should be careful about asserting what studies show if we haven't had the ability to examine them in depth. It's one thing if people posted this study and said "this seems interesting, but let's wait for the full paper". Instead, they took it at face value purely on the abstract alone, leading to the eventual drama that came about between the pro- and anti-Ivermectin sides.


I remember reading an article not long ago where the abstract was completely misleading once one got to the discussion and conclusion. Of course, people were quoting the abstract, which, if I remember correctly, supported the injections, but it was completely at odds with the results and conclusions. I wasn't the only one who noted this, which I appreciated. I thought I was hallucinating or just plain stupid.


It's bound to happen, and it's likely that they took something rather nuanced and just gave a broad generalization that actually presented a different interpretation. It also could be designed in a way that, as you stated, completely depends on the ignorance of readers who won't spend time looking a little deeper. Sometimes intuition can be helpful: if something doesn't seem right, just make sure that the need to conform doesn't overtake the nagging feeling that something isn't quite being presented properly.


The paper alleging that sperm counts and sperm motility return to normal by six months post-injection was one such paper, as I recall.

I am a newb at reading papers, which is bad on my part because I should have been better trained, but at least now I am finally learning how to read beyond the abstracts. Anyway, seeing that discordance was appalling, although unfortunately, at this juncture, not surprising.


That could be the one I am thinking of. Honestly, I read so many papers these days it’s hard to keep them filed in my brain. : )


Yes. I need a tutorial on how to maintain/manage all the papers. Physically and also in my brain.


I do appreciate the shoutout, Igor! I glanced over the study quickly and was going to give my 2 cents on your post. I do think that your interpretation of the 48-hour remark may be incorrect. I think the remark was made about a time-dependent factor for circulating mRNA, such that more days post-vaccination may be indicative of lower levels of mRNA. The results section makes some comment about not detecting circulating mRNA by the 48-hour mark, but there doesn't seem to be any evidence shown for these results, which doesn't help much. It appears to be a quick study that doesn't provide additional figures or data for their work.


My interpretation is kind of like this: "it is safe... particularly after 48 hours" ... so before 48 hours it may not be safe


Ah, OK I think the issue may be this sentence:

"But it is ominously qualified with “particularly beyond 48 hours after vaccination”, plainly meaning that breastfeeding 48 hours past vaccination is NOT safe."

I think rather than "past" you may have intended to mean breastfeeding up to 48 hours post-vaccination, as the sentence as written may be interpreted as meaning that it may worsen after the 48-hour mark.

But I also have a very poor grasp of grammar and see a ton of issues in my own writing after I publish so take what I say with a grain of salt. 🤷‍♂️


You are right! Will change right now


Cool. I think most people may have understood what you meant, but just in case some may not have. I left a comment, but I do believe this study just raises more questions than it answers, which is a bit frustrating to say the least.


One recommendation I have for the layperson wanting to read studies and research papers is to build up fluency and familiarity with statistics and statistical methods. Becoming comfortable with things like confidence intervals goes a long way towards thinking critically about the material.

The original COVID inoculation trial write-ups are an excellent example. Pfizer's 95% efficacy for the original monovalent shot was predicated on an incidence of infection in the control group that was not only a tiny fraction of the study population (<200 out of ~17,000), but was also a fraction of the prevailing infection rate in the country overall -- going by the numbers even back then, one of the best ways to avoid COVID was to be in the control group of a Pfizer study.
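As a back-of-the-envelope sketch of that relative-vs-absolute distinction (using rounded counts in the ballpark of the published trial figures, assumed here for illustration rather than taken as exact), the headline efficacy is a relative risk reduction computed from two small attack rates:

```python
# Rough sketch with approximate, rounded counts (check the trial paper for exact numbers):
# roughly 8 confirmed cases in the vaccine arm vs. ~162 in the placebo arm,
# out of roughly 18,000 participants per arm.
vax_cases, vax_n = 8, 18_000
ctl_cases, ctl_n = 162, 18_000

attack_rate_vax = vax_cases / vax_n            # ~0.04% got COVID in the vaccine arm
attack_rate_ctl = ctl_cases / ctl_n            # ~0.9% got COVID in the placebo arm

relative_risk = attack_rate_vax / attack_rate_ctl
efficacy = 1 - relative_risk                   # the headline "~95%" is a relative figure
absolute_risk_reduction = attack_rate_ctl - attack_rate_vax

print(f"Relative efficacy:       {efficacy:.1%}")                 # ~95%
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")   # well under 1%
```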

The better one is with statistics, the more accessible the details become, in my view.
