8 Comments
Jun 18, 2023 · Liked by Modern Discontent

Where I worked, un-jabbed health professionals with religious or medical exemptions had to be tested weekly in order to be allowed to remain at work (until all those PCR tests with expiring EUAs got used up and/or that mandate was relaxed). Something similar might partly account for the high rate of testing at the Cleveland Clinic. We swabbed our own noses and brought the samples to the lab. Half the time, I had to swab at home and drive the sample to the lab to keep the weekly schedule. Lots of other employees tested like mad with each respiratory infection to determine if it was COVID. Oddly, I had no symptoms that might have prompted a test for a respiratory infection from March 2020 to Sept 2022, when I departed; just that string of compulsory weekly PCR tests.

author

There are plenty of possible reasons for the degree of testing, and the authors don't provide information on any of them. The only thing the propensity definition tells us is that the categorization is based on the number of tests divided by the time spent working at the clinic during the pandemic, but they don't give any indication of the cutoffs for each group, so that doesn't help much.
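If it helps, here's a toy sketch (in Python) of what a tests-per-time propensity calculation might look like. The cutoffs and group labels are invented placeholders, since the paper never states them:

```python
def testing_propensity(n_tests: int, days_on_staff_during_pandemic: int) -> float:
    """Crude propensity: number of tests divided by time employed at the
    clinic during the pandemic, expressed here as tests per year."""
    return n_tests / (days_on_staff_during_pandemic / 365.25)

def propensity_group(p: float) -> str:
    """Bucket a propensity value into ordered groups.
    Both the labels and the cutoffs below are placeholders for illustration;
    the paper does not report where the real boundaries fall."""
    if p == 0:
        return "zero"
    elif p < 1:
        return "low"
    elif p < 2:
        return "medium"
    else:
        return "high"

# Example: 12 tests over ~2.5 years of employment works out to roughly
# 4.9 tests/year, which lands in "high" under these made-up cutoffs.
p = testing_propensity(n_tests=12, days_on_staff_during_pandemic=900)
print(round(p, 1), propensity_group(p))
```

The point being: without the actual cutoffs, there's no way to know how sensitive the grouping is to where those lines were drawn.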

Where I used to work we had a "test to stay" protocol, so we had to get tested routinely unless we got vaccinated, and the testing was pretty excessive before that.

The 0-propensity group really sticks out, and I'm curious whether it just means these are people who were hired during the study period and simply weren't there long enough to need testing. It's also a big issue that the propensity window appears to run from whenever COVID was argued to have started up until the end of the study period, so really anything could explain the propensity results. It's unfortunate they didn't provide more clarity here, so it ends up being, "the data says this, but how the data was collected and categorized is not provided."

Jun 19, 2023 · Liked by Modern Discontent

Another source of testing data issues would be the travelling cohorts of health care professionals. After a while, my former employer decided to stop using travelers and recruited from the Philippines instead.

The validity of the testing itself is so questionable. There's a reason the EUA ran out on those tests. There were also reasons the tests continued to be used long after it was announced that the EUA would run out and production had been stopped.

What interests me more than the COVID "guessing" are claims that a higher proportion of the jabbed than the unjabbed are getting hospitalized overall once the jab is said to have lost "efficacy." That's a different endpoint to study, and perhaps more consequential than the early guessing about who did or did not have COVID based on symptoms and then the questionable lab tests.

In order to get published, people will always have to soft-pedal the data or issues which underscore that unjabbed people were quarantined, discriminated against, and fired for false reasons.

author

Really, any of the data collection done here is up for questioning. I also thought the difference in testing may be related to staff positions. If nurses, doctors, technicians, janitorial staff, and other cleaning-related positions are all counted as employees, I can see why doctors or nurses may need to test more, since they are likely to see more patients.

To your third paragraph, one point Brian Mowrey made is that far more people are vaccinated than unvaccinated, so our data is likely to be biased by the mere fact that those of us who are unvaccinated are in the minority. Consider the shuffling of employees in this study based on whether they got the bivalent booster. If they got it, they were moved to the "up-to-date" category, and if they then got sick, that counted as an "up-to-date, infected" case. But if they didn't get the bivalent booster and got sick, they would count as "not up-to-date, infected." I'm not sure how much shuffling occurred here, but it's important to consider that a loss from the not-up-to-date (NUTD) group may lead to undercounting there, while the move into the up-to-date (UTD) group may lead to overcounting.
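As a rough sketch of that reclassification concern (Python, with an assumed assignment rule for illustration; I'm not claiming this is exactly how the authors handled person-time):

```python
from datetime import date

def group_at_infection(bivalent_booster_date, infection_date):
    """Assign an infection to a vaccination group at the time it occurred.
    Assumed rule for illustration: a person only counts as "up-to-date"
    once they have received the bivalent booster."""
    if bivalent_booster_date is not None and infection_date >= bivalent_booster_date:
        return "up-to-date"
    return "not up-to-date"

# The same infection date lands in different groups depending on whether
# (and when) the person got boosted, which is where the shuffling between
# the NUTD and UTD counts comes from.
print(group_at_infection(date(2022, 10, 1), date(2022, 12, 15)))  # up-to-date
print(group_at_infection(None, date(2022, 12, 15)))               # not up-to-date
```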

Overall, I think this study was more along the lines of, "we have data, just publish something with it." I think this happens quite often when it comes to correlative studies such as this one. There's a lot to question in the methodology, and as I told Ivo Bakota below, there's better evidence to draw on for why these vaccines are not good without having to resort to the strange extrapolations we're seeing here.

Jun 18, 2023 · edited Jun 18, 2023 · Liked by Modern Discontent

Thanks for pointing out the nuance in the study. I agree most people don't read the studies and just take either the abstract or the opinions they read here on Substack as gospel. I noticed many of the same things you did when I actually read it in full last night. All I could conclude after reading the study was that being "up-to-date" provided no measurable benefit.

Maybe if I bothered looking at their adjusted odds ratio in detail I might be convinced that more jabs = more risk of being infected, but they didn't adjust for the things I was interested in, which are pretty much the same things you pointed out in your post. This isn't a criticism of their methods; the adjustments I would have liked to see may not have been possible depending on the data they actually had available. The study is very transparent about what they found and the limitations of their findings.

author

So they do provide an adjusted hazard ratio, but I don't think these sorts of studies can really provide us much detail. I think people choose these studies because there are a ton of ambiguities, and so, if I were to be cynical, it may just be easier to interpret the study in a way that argues one thing and hope that people don't look deeper. What stands out is that the other information is rather obviously displayed: just look at the other two figures that are included and interpret those findings. To stop there, or to jump past those figures and go straight to the table with the hazard ratios, is a bit egregious in my opinion.

If someone wanted to make an argument about the detriment of more vaccines, then the IgG4 route may be a better option. There's a lot there I haven't parsed yet (mainly why the IgG4 conversion is occurring, and what factors are biasing people towards that conversion), but that's at least the argument I would choose, rather than squeezing something out of a study that otherwise already provides a clearer explanation.

I think I've just become frustrated at how easily people can be misled. A lot of these tactics are what the mainstream press does, and yet I've seen them appearing in both independent and alternative media. We can't complain about "fact checkers" fact-checking when we do stuff like this. It's also hypocritical, because our criticisms of the media and public health officials are themselves a form of fact-checking. In another life, fact-checking would have meant making sure the information that gets out is accurate, by way of peer review or secondary assessment, so just because the name has gotten a bad rep doesn't mean we throw out the whole concept.

To that point, and what may pertain to this current issue: someone reports on this study using certain figures and tables, and then others report similar remarks using the same images. This speaks to the self-referential aspect of Substack. Each point on its own (similar reporting; similar use of figures and tables) doesn't say much, and together they don't amount to substantial confirmation, but the fact that I've seen several instances where the same argument is being made about a study, with everyone using the same figures, tells me there's probably not a whole lot of actually opening the study and looking at it going on. I've even seen someone put "more jabs, more infections" in their title but then post the propensity figure in their body.

Of all the figures, why use that one? It clearly doesn't say anything about being "up-to-date" or "not up-to-date," so why use it? Or did someone just think they could make any comment and then use whatever figure they wanted without realizing what each figure shows?

Sorry, I guess I'm using this more to vent. It's hard when most studies take hours or days to look at and parse, and seeing people glance at a study (like you said, likely going only on the title and abstract) and draw conclusions within minutes does a disservice to readers and just creates more room for critics to mock us.

.

What Was Mao.

Is What Is Now.

.

.

Pro Tip:

Mock Them.

That Is The Second Rule

In Beating Them.

The First Rule Being

Never Make Allowances For Them.

.
