10 Comments
Jul 7, 2023 · edited Jul 7, 2023 · Liked by Modern Discontent

"Profound" is often thrown in as either the default (already demonstrated) definition of OAS or to describe obtained results. It's a weird tic and might involve "profound" not being translated or understood properly.

It's always possible to "demonstrate OAS" in rodents, especially if you game the setup (3 injections lol). It's like demonstrating the natural forces of levitation by putting a hamster in a wire harness, essentially.

author

I just thought it was a bit funny that they used that word. It's like they wanted to use the word "significant" but didn't want people to associate it with statistical significance, so they used "profound" instead. It's like one of those cooking shows where they just try to come up with different adjectives to describe the same flavors.

This really did seem like a study that was a step back. It was different from the Science one, which at least used people and used a transgenic animal model for the epitope part of the study. The setup really does raise the question of whether the study was doomed to just spell out OAS from the start.

founding
Jul 8, 2023 · Liked by Modern Discontent

Thanks for another well thought out post. Just another apples to oranges comparison 🤦‍♀️

Jul 7, 2023 · Liked by Modern Discontent

My understanding of the argument around looking at the N response is that in breakthrough cases, “OAS” produced a strong enough memory response in vaccinated individuals that the body never had the chance to produce a novel immune response to N.

Which then got dropped because that’s not a very compelling argument against the vaccine.

founding
Jul 8, 2023 · Liked by Modern Discontent

Indeed that was the argument, but the data was simply seroprevalence in UK blood donors: there was absolutely no information on vaccination or infection status (or when they happened in relation to each other; you NEED a second exposure to observe the elusive OAS). Over time most blood donors had developed S antibodies via infection or vaccination, and more and more blood donors were developing N antibodies from natural infection. The whole can of worms Alex Berenson opened with that post is nothing but egregious misinterpretation.

author

That was at least one of my steelman arguments for the nucleocapsid results, but an even simpler argument was that the graphs people were looking at suffered from inflation of the y-axis due to the S antibodies.

As Clarisse mentioned, the seroprevalence data looked only at anti-S and anti-N antibodies in the UK population. Because you had a mix of people being vaccinated or naturally infected, you would obviously get a huge spike in anti-S antibodies in the population. In contrast, N antibodies would only come from infection, so as more people got vaccinated the y-axis of the graph stretched out to accommodate the growing percentage of anti-S people in the population, while the same growth wouldn't happen with anti-N levels. It was wrongly assumed that this meant no one was forming anti-N antibodies, which was then attributed to OAS, an explanation that didn't make any sense.

But if you actually look at the graph (Figure 3 in the link below), you'll see that anti-N seroprevalence doubled between the start and end of the graph.

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1027511/Vaccine-surveillance-report-week-42.pdf

But this was argued to be "no" or "minimal" change, when it had in fact doubled.

Because this data was used to lay the foundations of OAS, it's rather disingenuous that people just gave it up because it didn't actually fit the narrative.
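To make the y-axis inflation argument concrete, here's a minimal sketch with hypothetical seroprevalence percentages (the numbers are made up for illustration, not taken from the UKHSA report): when anti-N is plotted on the same 0–100% axis that accommodates the anti-S spike, a genuine doubling of anti-N can look nearly flat.

```python
# Illustrative sketch of the y-axis inflation argument using made-up
# seroprevalence percentages (NOT the actual UKHSA survey figures).
anti_s_start, anti_s_end = 20.0, 95.0   # anti-S: infection + vaccination
anti_n_start, anti_n_end = 7.0, 14.0    # anti-N: infection only

# Actual relative growth of anti-N seroprevalence: a doubling
anti_n_growth = anti_n_end / anti_n_start

# Visual height of the anti-N curve when both lines share an axis
# scaled to fit the much larger anti-S curve
axis_max = max(anti_s_end, anti_n_end)
visual_fraction = anti_n_end / axis_max

print(f"anti-N grew {anti_n_growth:.1f}x, but its endpoint sits at only "
      f"{visual_fraction:.0%} of a y-axis dominated by anti-S")
```

The eye judges the small anti-N curve against the towering anti-S one, so a twofold increase reads as "no change" unless anti-N is plotted on its own scale.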

Jul 7, 2023 · Liked by Modern Discontent

In a universe this large, anything possible is probable. Designing studies to prove a possibility is bound to reap the expected results. At what point will we see studies that are unbiased? An impossible question, maybe?

author

Most science studies are, unfortunately, inherently biased. It sort of becomes incumbent on the reader not to just take a paper's word at face value, but to see whether the conclusion actually matches the data, or whether there is anything wonky. This is something I had to learn, and one of the reasons why reading studies takes a lot longer than Abstract-only reading. It's also something Heather Heying has talked about on the Darkhorse Podcast. It's not uncommon for a study to draw erroneous conclusions that don't match what it actually found.

Jul 7, 2023 · Liked by Modern Discontent

Just a simpleton writing here, but the absence of much better and much more rigorous studies analyzing all sorts of things related to all of the jabs (J&J, AZ, Moderna, and Pfizer) kind of automatically points to pre-cooked results in the few that are published.

author

It works both ways. There's certainly a lot of questionable data that comes out, but it becomes the burden of those of us who read such studies to see what faults they have. It's one of the reasons we need to take more care in examining these studies and checking for issues before making grandiose proclamations.
