Haha I was going to ask where this was! It’s usually the most jargon-filled & poorly written section. Rarely do they explain why the methods they’ve chosen were used, what the limitations are, or why alternative approaches were rejected.
There'll usually be some little comment that goes, "these other people tried this so we're going to try it as well". Unless some assay or test is considered standard, they generally cite some other paper and its methods, but that usually requires an understanding of whether just carrying over a method to the current study would be appropriate.
The graphs and figures section is also usually pretty hard.
My main personal frustration is this: I have limited time, perhaps 2-4 hours a day. I want to find something interesting that I could write about truthfully to inform, entertain, and engage my readers. I like articles that could lead to unexpected conclusions.
So I have to read articles, twitter posts, news items, etc. When looking at science articles I am not sure if the article has any amazing juicy material worth reporting on. For example (made up):
"Covid-19 incidence trends in prediabetic population of Kansas City, MO"
Such an article may be total dreck, or it may contain explosive findings. When I start in on the article, I do not know. The abstract also does not help, because it will say that the "vaccine is safe and effective", and abstracts are often highly misleading (even for more boring matters). So I have to look at the article to figure out if I should spend 2, 5, or 30 minutes. The risk is wasting time or missing super amazing material.
To help myself decide which way to go, I often jump to figures and graphs to see what kind of data it provides and go from there. Any mention of "unvaccinated" is usually a good indicator that the article needs to be explored.
When I write, I always imagine a smart and nasty fact checker standing over my shoulder, ready to notice any mistakes and misrepresent my post by playing the mistakes up. So I am careful to qualify when I am not sure and write as transparently as I can.
Also I am mindful of wasting my readers' time. If I waste 1 minute of each person who opens my post, I would end up wasting roughly ten 24-hour person-days total. So I end up deleting a lot of extraneous stuff.
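(To spell out that arithmetic: ten 24-hour person-days is 10 × 24 × 60 = 14,400 minutes, so one wasted minute per reader implies roughly 14,400 people opening each post.)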
I also give up about 2 out of 3 article ideas because my ideas were wrong, the results may be uninteresting, etc. Also, I try to have one idea, or at most 1.5 ideas, in a post to avoid distraction. For example, yesterday, in my comparison of two breast milk studies, I also wanted to discuss Gorski's critique of the 2022 breast milk study (and I am being generous to Gorski here), but decided against it to avoid idea overloading.
I loved the "How I read articles" post and I think that every substacker reporting on science needs to at least read it.
You and Brian play a very important role here.
Re: press. Sadly, their job is not to inform us; their job is to serve their owners and "influence" us as desired, so my expectations are low. I wish that every journalist would read your "how to read articles" post and at least make an effort to go beyond the headline and the last paragraph of the abstract. A tall order for people who chose journalism due to their inability to do science, I know.
Honestly, and this may be a bit disheartening, but you sometimes never know until you get a bit deeper into a study, or have spent time reading it only to find out that it may not be very useful. For my Anthology Series I cite many articles, and I'll also say I generally don't read through them fully unless they were studies (I skim through literature reviews and check specific sections to provide background information), and in general I may only cite 1/2 to 2/3 of the studies I actually read. So a citation of 30ish articles may have meant I read/skimmed through 40-50. However, I will also say that I do have a problem where I feel the need to include articles even if I end up beating a dead horse.
I definitely understand the issue in having longer posts. I'm pretty sure my growth would be much better if I stuck to the email size limit and shortened my posts. However, if it comes at the cost of including figures and additional material I generally go for it. I think there's an issue in which our attention spans have declined over time to the point that we expect information within the Twitter 2-minute time limit. I think many readers would rather spend time reading ten 5-minute posts than five 10-minute posts. God forbid it becomes 20 minutes!
However, in many cases some of the details can get lost and so the information may be boiled down into the bare essentials and a reader may not get the whole scope. So now we may think one thing is occurring based on a report rather than what an actual study entails.
And given the fact that readers aren't actually checking the study for themselves, this becomes a serious issue, because it means that they may only go as far as to see what we present. If we end up missing key points or necessary context then someone may take what we say and run with the idea.
It's a few things I try to consider, although I can personally say that my writing is certainly wordy and I can cut down a few paragraphs.
It is funny you mention the boring/uninteresting studies, since the studies and topics that get funded and researched are influenced by the public's perception of what may be deemed interesting. Take the idea that the vaccines wouldn't have been rolled out to the degree that they were if the public didn't push to have a vaccine.
And again, it really is hard when the game requires that one find interesting articles to write about. My recent one on pumpkins didn't get a lot of attention and I get why that happens. People want constant coverage of COVID, and in many cases people want coverage that may incite some of their fears and anxieties over what's happening.
To put it bluntly the fear economy drives a lot of the attention Substack posts get, and that's why I've been rethinking how I want to tackle my Substack.
I'd much rather have some posts on COVID with some other posts mixed in if it means people aren't just seeing negativity all the time, even if it means it doesn't get as much attention. However, if fear porn is all that people want then I may just reconsider my longevity. I'd rather dissuade that on the basis of science, and I'd rather engage people so that they develop that innate precautionary principle, but I still am aware that many people may want to be told what to think or how to feel, and I'll state that isn't my prerogative (not to say it's anyone else's as well, but it is something I think about when I see my Inbox or Recommendation feed).
I do appreciate that you enjoyed that post. It was put together somewhat haphazardly because I wanted more examples for the results but then that meant diving through all of my tabs and trying to scour through them even more and that would be a hassle.
The press gets its business from reporting on news, not reading it, so they're unfortunately incentivized away from doing actual journalistic work. It's more important to push things out there than it is to make sure that what is being reported on is accurate and genuine.
Anyways, long post, Igor, so apologies for making my comment too long. Like I said, I should probably work on shortening responses!
You do you, your articles are great. Yes, the Internet greatly shortened attention spans, and I almost have ADHD; I have a hard time focusing on anything that is not extremely interesting. Like filling out vendor forms for my company is something I dread doing, but I like writing substack posts.
I read them out loud too just to get a feel for how they sound, and throw out whole paragraphs in an attempt to make them shorter and cover one thing.
Re: fear porn. I am of the opinion that Covid is the 21st century plague and, together with vaccines that help it spread and reinfect people endlessly, we will be seeing further increases in mortality on top of the increases that we are seeing. I actually try to hold myself back from writing more fear porn for many reasons:
- I realize I may be wrong
- My readers have varied interests (covid, vaccine, WEF, censorship, etc.) and so do I
Overall when I want to learn some meta-idea about science I visit your or Brian's substack.
Igor, I am so grateful for your 2 to 4 hours a day. You enlighten us all with your diligence. That goes for most of the folks I read on substack who actually take the time to research. I'm not interested in the knee-jerk articles. I can find them all over MSM.
I take a similar approach as far as not wasting readers' time. Though in some cases, the study is here, it says this, so the reader is probably going to see what it says somewhere else - in that case I should provide my assessment, even or especially if in the end I don't think the study should be taken as too important (as with The Virus Shuts Down Kids' Lungs Study, The Not-an-Imprinting Study). There's value in showing that findings are weak.
If there were an accurate interpretation scoring system, you would probably be in the top 5% of study reviewers compared strictly to the folks with PhDs. Most of them can't parse details very well. To dunk on Mobeen, I had to close out of his most recent review because he misinterpreted case/control proportions and incorrectly double-counted comorbidities. He almost always makes a few errors. I commend him for his work overall; and would note that this is probably *why* most experts don't wade beyond the abstract. That's where they get tripped up. Reading comprehension isn't something that they are actually trained for (appropriately, research papers are used for the Reading SAT; most experts probably would do very badly).
Thank you! I agree that many people take conclusions from articles and run with them or do not make sure they count things correctly. I remember that I messed up big once too.
I have not found a way to write something to plug your Michel Goldman article. My dog asked all his twitter followers to repost it. (I no longer have a twitter account) I want to write something with a punch. Your article is doing amazing and is getting a lot of comments -- it is the most heart-wrenching, but dryly stated, piece with incredible persuasiveness.
I am familiar with the difficulty in highlighting others' work. I don't want to turn my email feed into a "homework assignment." So there's lots of great stuff that I don't end up sharing since I had nothing to add to it. You either have to make highlighting stuff a daily thing, so people tune in just for that, or not do it at all.
The homework assignment comment is pretty funny. I have, on occasion, used the recommended section to see if there's something that seems to have been reported improperly. I suppose it can be argued that I'm engaging in "fact-checking", but I'd rather provide people with evidence contrary to something occurring if the original argument either completely botched the interpretation or didn't even bother to examine the evidence. Although I can certainly see that as being akin to policing so I can understand the concerns. 🤷♂️
Exactly!!!
At the end of the day the issue isn't whether one messes up, but whether one corrects their mistakes. I think far too often there's a good deal of hubris that prevents people from correcting mistakes, lest they seem incompetent. Journalists already fail at properly correcting their missteps due to their own arrogance. I think we should encourage people to make corrections rather than pretend that everything they report is always accurate.
100%
I think it's a matter of what exactly readers get from our posts that I try to consider. I would hope that many of our posts are informative and provide information that people can carry into other things they see, applying their newfound knowledge. However, it's hard to figure out to what extent that is happening. Also, at the same time as we are concerned about wasting readers' time, I wonder what other time they may be spending on things (not to say we should be telling readers what to read, of course!). If one educational post that takes 20-30 minutes to read gets overlooked for 5 or 6 vacuous posts on Twitter or other social media platforms, would we consider that to be a waste of time?
I generally hope that many of our readers look at what we write and learn a little something, making the read worth the effort.
As to Mobeen, I'm generally of the mindset that as long as people make an earnest attempt and correct when necessary then that should be encouraged; however, I haven't watched him enough to see how often these mistakes happen. I did view some of his livestream of the ADE study, and I do think he misinterpreted a few pieces, such as Casirivimab not neutralizing Omicron but leading to ADE, which really wouldn't make sense if Casirivimab can't even bind to form the antigen/antibody complex needed for ADE to occur. He also suggested that Sotrovimab may not cause ADE due to the modifications to the Fc region that extend the half-life. That argument doesn't work since the study wasn't a time-dependent study (although it could be inferred that the serial dilution would be a way of mimicking the half-life of the antibodies). I would instead argue that the conformational change of the spike post-Sotrovimab binding may just prevent interactions between the spike and the ACE2 receptor.
I work in academia (in the humanities) and I dove into several studies for a project I was working on about race, class, and Covid -- mostly social science-type stuff. The biggest handicap for me is that I am not formally trained in statistical analysis, so I cannot fully understand some of the methodological choices. I have thought about taking a course in statistics through a university extension, simply because my interests lie in policy issues... Now, my biggest frustration with the literature is the mismatch between the results and the conclusion. To give you an example, many of the studies I read show that class was a bigger factor than race in terms of predicting bad COVID outcomes; in most studies, race ceased to be relevant once you controlled for income. Despite these findings, the conclusion read something akin to "the takeaway of this study is that we need to work to bring down systemic racism and white supremacy." What?! Because I work in academia, I know that academic journals will demand ideological conformity (this project on covid, race, and class ended up in journal purgatory because it drew the wrong conclusions), but there is something truly shocking about seeing a paper that deals with data outright contradict its own results to fulfill the dogma du jour. I noticed something similar with vaccine papers; regardless of the results, the conclusions always highlighted the importance of vaccination against Covid.
"The decrease in US life expectancy was highly racialized: whereas the largest decreases in 2020 occurred among non-Hispanic (NH) American Indian/Alaska Native, Hispanic, NH Black, and NH Asian populations, in 2021 the largest decreases occurred in the NH White population."
"Over the two-year period between 2019 and 2021, US NH American Indian/Alaska Native, Hispanic, and NH Black populations experienced the largest losses in life expectancy, reflecting the ongoing legacy of systemic racism"
Statistics are VERY difficult. I'm glad that my social science degree actually included a statistics course, which of course was removed for people who may not want to go into research. The issue is that regression and predictive models can be so overwhelming, with all of the variables and different equations, that it really makes it confusing how people figure all of these out. As Clarisse commented in one of my posts, it's likely that most researchers just end up outsourcing their data to other people to figure out.
Those random inclusions in studies can be very concerning, especially if those variables were accounted for. I'd assume there was some argument about income being associated with racial disparities, and therefore, although income and class were the greatest predictors, income itself is related to race, ergo racism is making people fare worse from COVID?
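(A toy illustration of that "ceased to be relevant once you controlled for income" idea, with entirely made-up data and hypothetical variable names, not from any of the papers discussed here: if a group label is correlated with income and income alone drives the outcome, the group's apparent effect collapses once income enters the regression.)

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # hypothetical binary group label
income = 50 + 20 * group + rng.normal(0, 10, n)  # income correlated with group
outcome = 2.0 * income + rng.normal(0, 5, n)     # outcome driven by income alone

def ols(columns, y):
    # ordinary least squares with an intercept column prepended
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols([group], outcome))          # group alone looks strongly "predictive" (coefficient near +40)
print(ols([group, income], outcome))  # controlling for income, the group coefficient collapses toward 0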
I've joked that the demographics section for some of these COVID studies now includes a Latinx group. I think Moderna's clinical trials included it and it seemed so strange. It made me wonder how long that's been going on in studies.
Plus, statistics are fine, if you understand what's actually being measured.... And what's not being measured. i.e. measuring antibody titers vs actual ability to inhibit infection. Way too many studies not measuring what they don't want to know, these days.
Ain't that the truth! I feel like there are so many really good and necessary questions to be asked .... but we are constantly bombarded with mindless drivel. I am kinda giddy excited to see the Hindawi/Wiley retraction that's coming out tomorrow (hat tip to Dr. Malone). There are so many issues with peer review, and to have that held up as such a high standard, it's really just laughable ... or should I be crying. Researchers are busy and SURPRISE, they don't necessarily do a good job critiquing another's work.
Laughter, crying, same thing at times
Editors are only human so they're likely to fall into the same biases and quick glosses that we are all prone to. It's only made worse for them since that's their profession, but it's not too surprising to think that people would rush to publish and not spend time taking apart a study. Peer reviewers are highly unlikely to replicate studies before signing off on them as well, so that leaves a lot of issues.
That's why it's important to look at results within the given context. I'll admit that I may have trouble with that and am still figuring it out, but it's a good reason to be a little careful about extrapolating too much before reading through a paper thoroughly.
It's so easy to automatically confirm my own bias and not look further
I wish the papers had even attempted to make a connection between race and income!
Was there some tenuous association made by the researchers by citing some other work that made that assertion? That happens often even within the hard sciences as well.
Not that I can remember; my memory of reading the papers is that the body was objective and data-driven, while the conclusion was ideologically-driven. I cannot say that every paper was like this, but there was definitely a pattern.
I think it tends to be the case that the discussion is where researchers start extrapolating and hypothesizing about future work or ideas, so I suppose they just threw some thoughts out. However, a reader may mistake those thoughts as being grounded in evidence and may argue that such a notion is indicative of an actual feature of the phenomenon (to put it one way).
100% everything you said... forget my comment bc I would like to piggyback on this. My response is not nearly as sharp.
I would suggest reporters, journalists, actually read through the entire publication. Keep a dictionary, and a medical dictionary too, by your side. Read and think re the tables and figures included. Not easy.
It can be very hard, especially if it's a subject that one is not familiar with. A lot of the cell biology jargon really goes over my head and that makes it difficult since that requires a ton of background research. Unfortunately, there's a general mantra of "first to report" that pervades how studies are reported.
More than 100,000 views in 24 hours, approaching 1,000 comments
"Safe and Effective - A Second Opinion" documentary
oraclefilms.com/safeandeffective
Interesting. If I get the time I'll try watching it, but thanks for the link!
It's very good. Please do.
Journalists and reporters make wide-sweeping statements without consideration of the shortcomings or limitations of the study results. I suspect their headlines and superficial treatments of findings are deliberate - clickbait and to add to the propaganda.
The rush to be the first to report means that information will be glossed over. It's a shame that we blame journalists for their clickbait titles when in reality many people gravitate towards those titles to begin with. I think when more people push back against wanting/clicking on clickbait articles then journalists will react accordingly. However, I doubt that will happen, to be honest.
I also question how qualified reporters are to decipher scientific studies. I have a background in psychology and assessment, with some courses in statistical analysis. I worked on some pharma and other medical studies in the past. While I can often tease out faulty study methodology, I need help to decipher studies that use different statistical analyses. I've also noted, as have others, that the abstracts and conclusions don't always align with the results. I'm inferring reporters have a limited background in reading research publications. Are they just repeating the conclusions? Press release from pharma? Honest question.
What I assume happens is that journalists rush to skim studies, usually looking over easy places such as the abstract and conclusion to confirm their biases, then report while picking out a few pieces from the results or methods just to set up what the study was. Then when one person reports on it everyone else reports on the first reporting, and the study becomes established by virtue of many people reporting on it, even if the conclusions may be wrong.
Then, if you are the one person who posts something critical of the study or provides additional context, you may be running afoul of the newly established narrative, even if it was built under false pretenses. So one person goes against the grain, other people cite the 10 other reports that validate the study when in reality the study may have been seriously misreported on, and thus the inaccuracies prevail.
It's highly unlikely that many journalists actually have a background in the field they report on, hence why it's hard for them to disseminate information to the public aside from absolutes or really banal coverage.
Helpful would be an official "study reading forum." So, like, researchreadinggate as opposed to researchgate. I am not sure if there are any unofficial ones. But this would be a place where you are reading a study, have a question, post it, maybe get an answer. The following example will show how this can't possibly go wrong:
BOBINOMAHA:
Hi, on this chart, does bla bla bla example question?
SUEINSTLOUIS:
VIRUSES DON'T EXIST BOB!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
In seriousness, a good forum needs a certain percentage of humanity to be interacting with the subject; anything too esoteric and the forum turns into a backwater where the same three contributors answer the same questions with the same text snippets and it's random chance whether their approach is actually good or not. Likewise, reputation scoring (to weed out the SUEINSTLOUISes) probably needs a certain volume to work.
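(A hedged aside on why volume matters for that last point: one common fix for low-volume reputation scoring is a Bayesian/smoothed average that shrinks each contributor's score toward a prior until enough votes accumulate. A minimal sketch with made-up numbers and a hypothetical scoring rule, not any particular site's method:)

def smoothed_reputation(upvotes, total_votes, prior_mean=0.5, prior_weight=20):
    # shrunken estimate of "fraction of answers found helpful";
    # the prior dominates until total_votes is well past prior_weight
    return (upvotes + prior_mean * prior_weight) / (total_votes + prior_weight)

print(smoothed_reputation(3, 3))      # raw 1.00, smoothed ~0.57: three votes prove little
print(smoothed_reputation(300, 320))  # raw ~0.94, smoothed ~0.91: volume lets the data speak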
Well, Brian I suppose you should make sure other people can weigh in then! 😉
In all seriousness it is a serious issue if discussion of a study just gets hijacked by some narrative. That's at least why I would consider it for paid members since that may help to curate the discussion to be specifically around the paper.
But engagement is really difficult. When I started doing my topics post I pretty much got no responses after the 1st or 2nd month of trying it so that backfired quite a bit. It would be difficult if only a select few people respond, however I suppose at the same time I wouldn't know how much engagement I would get if I don't make any attempt. 🤷♂️
It's more of a thought for now and I'd like to see people weigh in and see if it would be a viable option.
I wish journalists with no scientific background would not report on studies, and realize they don't know enough to assess the validity of a study. If journalists report on a study, they need to get help from someone who can understand the methods used and can understand statistics if there is statistical analysis involved.
Unfortunately they are more focused on pushing out reports than on diving into studies. It's frustrating, but I would hope that people view the journalists with some hesitation, and hopefully independent journalists encourage that deeper dive into papers.
The fact that most reporters don't even link or provide the title of the paper they cover is abysmal reporting, in my opinion.
Off-topic, but since you're into those "molecule" things and what not you might be better at reviewing this one than me. Was brought up by Merogenomics youtube - https://www.mdpi.com/1420-3049/27/17/5405/htm
I do see why this would be an interesting enzyme to look into if blood clots are a serious concern.
Oh boy, I believe natto was in that Japanese study about masking that was circulated. Was there anything in particular that was of interest? Generally I prefer the drug/supplemental aspect of molecules but I can take a gander.
Well, ivermectin comes from a soil bacterium too, you know...
So I've been told... but in regards to that masking study it was just funny that the researchers had people eat natto and gave them natto breath to check for the bacteria.
Were you considering writing about nattokinase? It seems like an interesting topic.
So, I would want to pin down the reason it is working to make things really exciting. Like what is the motif that nattokinase recognizes, where is it on the spike protein, does this imply easy escape (or that Omicron already escapes it). But I don't think there is any knowledge to pull from here. So I'm not sure if reporting on it fits in the Unglossed brand.
On first glance it appears to be a serine protease, which means that the catalytic site makes use of the serine residue to cleave amino acids, likely via nucleophilic attack of the amide bonds, to chop up peptides. The selectivity is certainly something of concern; however, one of the assays in the supplemental material shows that incubation of GAPDH, an enzyme that's part of glycolysis, with nattokinase led to cleavage, as evidenced by the disappearance of the GAPDH band.
Therefore, it may not be very selective. They do state this:
"The protease specificity of nattokinase would be low, because GAPDH, a housekeeping protein, was also degraded simultaneously in the in-vitro evaluation of nattokinase mixed with cell lysate (Supplemental Figure; Figure S2). On the other hand, when added to cells, it does not show any effect on cell viability and is expected to act as a protective agent on the cell surface. Further analysis of the degradation products of nattokinase using mass spectrometry is needed for understanding the proteolysis effects."
However, I would want to see further evidence of other enzymes/proteins that may be targeted. The mention of mass spectrometry is important since that would at least show whether there are specific amino acid motifs that are targeted. However, so far it doesn't appear to be super selective on a cursory glance. I haven't found any evidence of the cleavage site it targets so far. I would assume the effect would hold up unless extensive removal of those cleavage-site motifs occurs.
I did find this article. I skimmed it, and they looked at 3 residues in the enzyme, introduced mutations, and saw how that affected enzyme kinetics. However, it doesn't quite explain which sites it preferred, so it may require some extrapolation.
The vaccine shedding thing is something I'm still very hesitant about since I'd need more knowledge on that mechanism of aerosolizing. The study design (if it was the one about the families and the IgG antibodies) seemed really strange to me and it seems strange how spike could be released in such a manner, although I suppose an examination of the lungs of vaccinated individuals may provide some insight.
That would be very helpful, and I certainly tend to miss that section! But I think that sometimes also requires additional research, as funding is generally assumed to carry a bias but may not be explicitly noted.
As an academic, finding out who the editors are (i.e. their funding and previous publications, etc.) is all part of the submission process, so a good investigative reporter should be doing exactly that too, and making it clear for the reader. tbh, I think the whole journalistic learning environment needs a complete overhaul.
The problem is that the time it takes to do that investigating is time away from pushing out articles. Mainstream outlets are far too busy focusing on quantity rather than quality. In a more cautious world we wouldn't have reporting on studies immediately after they are released, but maybe a few days or a week between the release and the reporting, which provides ample time to report accurately.
I would really like to do the book club style study. The problem for me is that I read a lot of abstracts and a lot of content from doctors, etc., and almost always they are written as if we all know what they are talking about. For instance, here is an article snippet by Dr. Peter McCullough which is in English, but I really don't know the 'rules' for reading this. Most of his intros are impossible to read or understand and he never gives links. Anyway, I have been writing software for 35+ years and I can tell you it's no different when I start talking about code; even in a general way, there is just no background in what I do for most people. I suspect this is much the same for medical studies for the average person.
A medical study of United Kingdom healthcare workers who had already had COVID-19 and then received the vaccine found that they suffered higher rates of side effects than the average population. Rachel K. Raw, et al., Previous COVID-19 infection but not Long-COVID-19 is associated with increased adverse events following BNT162b2/Pfizer vaccination, medRxiv (preprint), (last visited June 21, 2021).
I wonder if it's based on his background and how he publishes papers. He does have some citations, but I wonder if it's the design of his website that mashes things together. He cites an author along with the citation right after, but then there's also the title of the paper with a link, and then the embedded article. That's the issue with the excerpt you provided, since it shows the authors (Raw, et al.) followed by the citation ("Previous COVID-19 infection... last visited June 21, 2021") and then follows with the title and then the article. It's all a bit too jumbled and makes it difficult to find where his actual thoughts on studies are.
Most background for a study will be found in the introduction, but that is heavily dependent upon the researchers. They may provide very little background, in which case sometimes it's a good idea to check their links for additional background, or you may have to end up doing separate research. That's the problem with studies outside of someone's field, since they may presume that the study is being written for peers rather than the general or scientific public.
Yep, that makes sense. I normally will follow links in web pages, his Telegram page is all text so that's more of a problem for me than the web page is. All good points.
If you choose to pursue it, I think this idea of teaching/discussing how to work with studies would be great.
I'll consider it. I was thinking possibly for paid members as it'll create a more cohesive environment that can focus solely on the study but I would like to see the response to it. In general, it's hard to get people to respond unless it's some sort of dramatic, clickbait post that isn't necessarily conducive to having open dialogue and discourse.
I haven't followed Dr. McCullough much nor have I used Telegram, so I don't have any frame of reference, but I can see how this may be more of a fault in how the webpage is designed. I assume he would prefer to be able to include footnotes such as how Substack allows them, and that may be more helpful to him.
If it's for paid members I think it would have more serious interaction, but then sometimes, to get more paid members, it might be worth doing a dual or triple type study, with an introduction to any particular study for anyone and more in-depth discussion for paid.
Telegram is more of a super instant messenger, many post by phone and others post links to full web pages. I only pointed to him to illustrate that much of what people see is in a smaller, twitter like posting. It's more of a time thing: get something out and let people look into it and that's all good.
As of now I'm thinking that, if I were to go through with this idea, I would post the study along with other material that may help provide background information, give it until the end of the week and then have another post for people to discuss the article. Maybe try this once or twice a month? It would be along with other paid posts I would post. It is a bit frustrating in figuring out exactly what would work best to encourage more paid members so I'm trying to test the waters and see.
If it's similar to Twitter I'll say that Twitter threads can be pretty obnoxious. The character limit just seems difficult to navigate in that setting.
Wow. I too have been thinking about this subject for some time. I tried looking up YouTube videos but many of them were useless. They either had too much info or not enough.
The most difficult part of the studies is understanding the numbers... I want to be able to tell if they are off, be able to spot errors, & have the ability to do some of the math myself
I would love to get better acquainted with confidence intervals & hazard ratios lol
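(A minimal worked example for the confidence interval side, with made-up numbers rather than anything from a real study: a 95% CI for a risk ratio from a 2x2 table, using the standard log-scale approximation. Hazard ratios in published papers are usually read the same way: if the interval around the ratio excludes 1.0, the effect is statistically distinguishable from "no difference".)

import math

a, n1 = 30, 1000   # events / total in exposed group (made-up numbers)
b, n2 = 15, 1000   # events / total in unexposed group (made-up numbers)

rr = (a / n1) / (b / n2)                 # risk ratio = 2.0
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(rr)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 2.00, 95% CI (1.08, 3.69)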
OF COURSE I forgot to include materials and methods! I'm not sure how to fix this but I may just include another poll question.
Haha I was going to ask where this was! It’s usually the most jargon filled & poorly written section. Rarely do they explain why the methods they’ve chosen were used, what the limitations are, or why alternative approaches were rejected.
There'll usually be some little comment that goes, "these other people tried this so we're going to try it as well". Unless some assay or test is considered to be standard they generally cite some other paper and their methods, but that usually requires an understanding of whether just carrying over a method to the current study would be appropriate.
The graphs and figures section is also usually pretty hard.
My main personal frustration is this: I have limited time, perhaps 2-4 hours a day. I wan to find something interesting, that I could write about truthfully to inform, entertain and engage my readers.I like articles that could lead to unexpected conclusions.
So I have to read articles, twitter posts, news items etc. When looking at science articles I am not sure if the article has any amazing juicy material worth reporting on. For example (made up)
"Covid-19 incidence trends in prediabetic population of Kansas City, MO"
Such an article may be total dreck, or it may contain explosive findings. When I start the article out, I do not know. The abstract also does not help because it will say that the "vaccine is safe and effective" and abstracts are often highly misleading (even for more boring matters). So I have to look at the article to figure out if I should spend 2, 5, or 30 minutes. The risk is waste of time or missing a super amazing material.
To help myself decide which way to go, I often jump to figures and graphs to see what kind of data it provides and go from there. Any mention of "unvaccinated" is usually a good indicator that the article needs to be explored.
When I write, I always imagine a smart and nasty fact checker standing over my shoulder, ready to notice any mistakes and misrepresent my post by playing the mistakes up. So I am careful to qualify when I am not sure and write as transparently as I can.
Also I am mindful of wasting my readers time. If I waste 1 minute of each person who opens my post, I would end up wasting roughly ten 24-hour person-days total. So I end up deleting a lot of extraneous stuff.
I also give up about 2 out of 3 article ideas because my ideas were wrong, the results may be uninteresting, etc. Also, I try to have one idea, or at most 1.5 ideas in a post to avoid distraction. For example, yesterday, in my comparison of two breast milk studoes, I also wanted to discuss Gorski's critique of the 2022 breast milk study (and I am being generous to Gorski here), but decided against it to avoid idea overloading.
I loved the "How I read articles" post and I think that every substacker reporting on science needs to at least read it.
You and Brian play a very important role here.
Re: press. Sadly, their job is not to inform us, their job is serve their owners and "influence" us as desired, so my expectations are low. I wish that every journalist reads your "how to read articles" post and at least makes an effort to go beyond the headline and the last paragraph of the abstract. A tall order for people who chose journalism due to their inability to do science, I know.
Honestly, and this may be a bit disheartening, but you sometimes never know until you get a bit deeper into a study or have spent time reading it only to find out that it may not be very useful. For my Anthology Series I cite many articles, and I'll also say I generally don't read through them fully unless they were studies (I skim through literature reviews and check specific sections to provide background information) and in general I may only cite 1/2 -> 2/3 of the studies I actually read. So a citation of 30ish articles may have meant I read/skimmed through 40-50. However, I will also say that I do have a problem where I feel the need to include articles even if I end up beating a dead horse.
I definitely understand the issue in having longer posts. I'm pretty sure my growth would be much better if I stuck to the email size limit and shortened my posts. However, if It comes at a cost of including figures and additional material I generally go for it. I think there's an issue in which our attention spans have declined over time to the point that we expect information within the Twitter 2-minute time limit. I think many readers would rather spend time reading 10 five minute posts rather than 5 10 minute posts. God forbid it becomes 20 minutes!
However, in many cases some of the details can get lost and so the information may be boiled down into the bare essentials and a reader may not get the whole scope. So now we may think one thing is occurring based on a report rather than what an actual study entails.
And given the fact that readers aren't actually checking the study for themselves this becomes a serious issue, because then that means that they may only go as far as to see what we present. If we end up missing key points or necessary context then someone may take what we say and run with the idea.
It's a few things I try to consider, although I can personally say that my writing is certainly wordy and I can cut down a few paragraphs.
It is funny you mention the boring/uninteresting studies, since the studies and topics that get funding and researched are influenced on the public's perception on what may be deemed interesting. Take the idea that the vaccines wouldn't have been rolled out to the degree that they were if the public didn't push to have a vaccine.
And again, it really is hard when the game requires that one find interesting articles to write about. My recent one on pumpkins didn't get a lot of attention and I get why that happens. People want constant coverage of COVID, and in many cases people want coverage that may insight some of their fears and anxieties over what's happening.
To put it bluntly the fear economy drives a lot of the attention Substack posts get, and that's why I've been rethinking how I want to tackle my Substack.
I'd much rather have some posts on COVID with some other posts mixed in if it means people aren't just seeing negativity all the time, even if it means it doesn't get as much attention. However, if fear porn is all that people want then I may just reconsider my longevity. I'd rather dissuade that on the basis of science and I'd rather gauge people so that they instill that innate precautionary principle, but I still am aware that many people may want to be told what to think or how to feel and I'll state that isn't my prerogative (not to say it's anyone else's as well, but it is something I think about when I see my Inbox or Recommendation feed).
I do appreciate that you enjoyed that post. It was put together somewhat haphazardly because I wanted more examples for the results but then that meant diving through all of my tabs and trying to scour through them even more and that would be a hassle.
The press gets their business from reporting on news and not reading them so they're unfortunately incentivized away from doing actual journalist work. It's more important to push things out there than it is to make sure that what is being reported on is accurate and genuine.
Anyways, long post Igor so apologies for making my comment too long. Like I said, I should probably work on shortening responses!
You do you, your articles are great. Yes, the Internet greatly shortened attention spans and I almost have ADHD, I have hard times focusing on anything that is not extremely interesting. Like filling out vendor forms for my company is something I dread doing, but I like writing substack posts.
I read them out loud too just to get a feel on how they sound, throw out whole paragraphs in an attempt to make them shorter and cover one thing.
Re: fear porn. I am of an opinion that Covid is the 21st century plague and together with vaccines, that help it spread and reinfect people endlessly, we will be seeing further increases in mortality on top of the increases that we are seeing. I actually try to hold myself from writing more fear porn for many reasons
- I realize I may be wrong
- My readers have varied interests (covid, vaccine, wef, censorship etc) and so do I
Overall when I want to learn some meta-idea about science I visit your or Brian's substack.
Igor, I am so grateful for your 2 to 4 hours a day. You enlighten is all with with your diligence. That goes for most of the folks I read on substack who actually take the time to research. I'm not interested in the one jerk articles. I can find them all over MSM.
I take a similar approach as far as not wasting readers' time. Though in some cases, the study is here, it says this, so the reader is probably going to see what it says somewhere else - in that case I should provide my assessment, even or especially if in the end I don't think the study should be taken as too important (as with The Virus Shuts Down Kids Lungs Study, The Not-an Imprinting Study). There's value in showing that findings are weak.
If there were an accurate interpretation scoring system, you would probably be in top 5% of study reviewers compared strictly to the folks with PhDs. Most of them can't parse details very well. To dunk on Mobeen, I had to close out of his most recent review because he misinterpreted case/control proportions and incorrectly double-counted comorbidities. He almost always makes a few errors. I commend him for his work overall; and would note that this is probably *why* most experts don't wade beyond the abstract. That's where they get tripped up. Reading comprehension isn't something that they are actually trained for (appropriately, research papers are used for the Reading SAT; most experts probably would do very badly).
Thank you! I agree that many people take conclusions from articles and run with them or do not make sure they count things correctly. I remember that I messed up big once too.
I have not found a way to write something to plug your Michel Goldman article. My dog asked all his twitter followers to repost it. (I no longer have a twitter account) I want to write something with a punch. Your article is doing amazing and is getting a lot of comments -- it is the most heart-wrenching, but dryly stated, piece with incredible persuasiveness.
I am familiar with the difficulty in highlighting others' work. I don't want to turn my email feed into a "homework assignment." So there's lots of great stuff that I don't end up sharing since I had nothing to add to it. You either have to make highlighting stuff a daily thing, so people tune in just for that, or not do it at all.
The homework assignment comment is pretty funny. I have, on occasions, used the recommended section to see if there's something that seems to have been reported improperly. I suppose I can be argued to be engaging in "fact-checking", but I'd rather provide people with evidence contrary to something occurring if the original argument either completely botched their interpretation or didn't even bother to examine the evidence. Although I can certainly see that as being akin to policing so I can understand the concerns. 🤷♂️
Exactly!!!
At the end of the day the issue isn't whether one messes up, but whether one corrects their mistakes. I think far too often there's a good deal of hubris that prevents people from wanting to correct mistakes or to seem incompetent. Journalists already fail at properly correcting for their misgivings due to their own arrogance. I think we should encourage people to make corrections rather than pretend that everything they report is always accurate.
100%
I think it's a matter of what exactly readers get from our posts that I try to consider. I would hope that many of our posts are informative and provide information that people can carry on into other things that they see and apply their newfound knowledge. However it's hard to figure out to what extent that would be happening. Also, at the same time we are concerned about wasting reader's time I wonder what other time they may be spending on things (not to say we should be telling readers what to read, of course!), such that if one educational post that takes 20-30 minutes to read gets overlooked for 5 or 6 vacuous posts or something on Twitter or other social media platforms, would we consider that to be a waste of time?
I generally hope that many of our readers look at what we write and learn a little something making the read worth the effort.
As to Mobeen I'm generally of the mindset that as long as people make an earnest attempt and correct when necessary then that should be encouraged, however I haven't watched him enough to see how often these mistakes happen. I did view some of his livestream of the ADE study and I do think he misinterpreted a few pieces such as Casirivimab not neutralizing Omicron but leading to ADE which really wouldn't make sense if Casirivimab can't even bind to form the antigen/antibody complex needed for ADE to occur. He also mentioned about Sotrovimab not causing ADE maybe due to the modifications to the Fc region that extends the half-life. That argument doesn't work since the study wasn't a time-dependent study (although it could be inferred that the serial dilution would be a way of mimicking the half-life of the antibodies). I would instead argue that the conformational change of the spike post-Sotrovimab binding may just prevent interactions between the spike and the ACEII receptor.
I work in academia (in the humanities) and I dove into several studies for a project I was working on with race, class, and Covid -- mostly social science-type stuff. The biggest handicap for me is that I am not formally trained in statistical analysis, so I cannot fully understand some of the methodological choices. I have thought about taking a course in statistics through a university extension, simply because my interest lie in policy issues... Now, my biggest frustration with the literature is the mismatch between the results and the conclusion. To give you an example, many of the studies I read show that class was a bigger factor than race in terms of predicting bad COVID outcomes; in most studies, race ceased to be relevant once you controlled for income. Despite these findings, the conclusion read something akin to "the takeaway of this study is that we need to work to bring down systemic racism and white supremacy." What?! Because I work in academia, I know that academic journals will demand ideological conformity (this project on covid, race, and class ended up in journal purgatory because it drew the wrong conclusions), but there is something truly shocking about seeing a paper that deals with data outright contradict the results to fulfill the dogma du jour. I noticed something similar with vaccine papers; regardless of the results, the conclusions always highlighted the importance of vaccination against Covid.
"The decrease in US life expectancy was highly racialized: whereas the largest decreases in 2020 occurred among non-Hispanic (NH) American Indian/Alaska Native, Hispanic, NH Black, and NH Asian populations, in 2021 the largest decreases occurred in the NH White population."
"Over the two-year period between 2019 and 2021, US NH American Indian/Alaska Native, Hispanic, and NH Black populations experienced the largest losses in life expectancy, reflecting the ongoing legacy of systemic racism"
https://www.medrxiv.org/content/10.1101/2022.04.05.22273393v4
That's SO like legacies - here one day, gone tomorrow. Fleeting. Like the wind.
"Because the pandemic is ongoing, morbidity and mortality data continue to evolve as the
SARS-CoV-2 virus makes its way throughout different regions of the United States. For
example, nationwide statistics for the period between May and August of 2020 show that the
proportion of Black and White decedents decreased (from 20.3% to 17.4% and 56.9% to 51.5%,
respectively), while the percentage of Latino/Hispanic deaths jumped from 16.3% to 26.4%.1 Yet
by December 2020, the percentage of Latino deaths reverted down to 19.2 %. For Blacks and
Whites, the latest data indicate that 56% of all deaths belong to White subjects and 18.5% to
Blacks. What might one rationally deduce from these numbers, taken in isolation? Nothing,
unless one is willing to subscribe to the dubious assumption that rapidly changing COVID-19
morbidity and mortality data measure rapidly fluctuating levels of racism in the U.S."
Indeed! I made the same point in my unpublishable paper. I will share it once I get back from teaching. That you for sharing that link, BTW.
That'd be an interesting read!
Statistics are VERY difficult. I'm glad that my social science degree actually included a statistics course, which of course was removed for people who may not want to go into research. The issue is that regression and predictive models can be so overwhelming with all of the variables and different equations that it really makes it confusing in now people figure out all of these. As Clarisse commented in one of my posts it's likely that most researchers just end up outsourcing their data to other people to figure out.
Those random inclusions in studies can be very concerning, especially if those variables were accounted for. I'd assume there was some argument about income being associated with racial disparities, and therefore although income and class was the greatest predictor income itself is related to race ergo racism is making people worse from COVID?
I've joked that the demographics section for some of these COVID studies now includes a Latinx group. I think Moderna's clinical trials included it and it seemed so strange. It made me wonder how long that's been going on in studies.
Plus, statistics are fine, if you understand what's actually being measured.... And what's not being measured. ie measuring antibody titers vs actual ability to inhibit infection. Way too many studies not measuring what they don't want to know, these days.
Ain't that the truth! I feel like there are so many really good and necessary questions to be asked .... but we are constantly bombarded with mindless dribble. I am kinda giddy excited to see the Hindawi/Wiley retraction that's coming out tomorrow (hat tip to Dr. Malone). There are so many issues with peer review, and to have that held up as such a high standard, it's really just laughable ... or should I be crying. Researchers are busy and SUPRISE, they don't necessarily do a good job critiquing another's work.
Laughter, crying same thing at times
Editors are only human so they're likely to fall into the same biases and quick glosses that we are all willing to. It's only made worse for them since that's their profession but it's not too surprising to think that people would rush to publish and not spend time taking apart a study. Peer reviewers are highly unlikely to replicate studies before signing off on studies as well so that leaves a lot of issues.
That's why it's important to look at results within the given context. I'll admit that I may have trouble with that and still figuring that out but it's why it's a good reason to be a little careful before extrapolating too much before reading through a paper thoroughly.
It's so easy to automatically confirm my own bias and not look further
I wish the papers had even attempted to make a connection between race and income!
Was there some tenuous association made by the researchers by citing some other work that made that assertion? That happens often even within the hard sciences as well.
Not that I can remember; my memory of reading the papers is that the body was objective and data-driven, while the conclusion was ideologically-driven. I cannot say that every paper was like this, but there was definitely a pattern.
I think it tends to be the case that the discussion is where researchers start extrapolating and hypothesizing on future works or ideas so I suppose they just threw some thoughts out. However, a reader may mistake those thoughts as being grounded in evidence and may argue that such a notion may be indicative of an actual feature of the phenomenon (to put it one way).
100% everything you said...forget my comment bc I would like to piggy back on this. My response is not nearly as sharp.
I would suggest reporters, journalists,actually read through the entire publication. Keep a dictionary, medical dictionary, too, by your side. Read and think re the tables and figures included. Not easy.
It can be very hard, especially if it's a subject that one is not familiar with. A lot of the cell biology jargon really goes over my head and that makes it difficult since that requires a ton of background research. Unfortunately, there's a general mantra of "first to report" that pervades how studies are reported.
More than 100,000 views in 24 hours, approaching 1,000 comments
"Safe and Effective - A Second Opinion" documentary
oraclefilms.com/safeandeffective
Interesting. If I get the time I'll try watching it but thanks for the link!
It's very good. Please do.
Journalists and reporters make wide-sweeping statements without consideration of the shortcomings or limitations of the study results. I suspect their headlines and superficial treatments of findings is deliberate - clickbait and to add to the propaganda.
It's a rush to be the first reporter than means that information will be glossed over. It's a shame that we blame journalists for their clickbait titles when in reality many people gravitate towards those titles to begin with. I think when more people provide push away from wanting/clicking on clickbait articles then journalists will react accordingly. However I doubt how likely that would be to be honest.
I also question how qualified reporters are to decipher scientific studies? I have a background in psychology and assessment, with some courses in statistical analysis. I worked on some pharma and other medical studies in the past. While I can often tease out faulty study methodology, I need help to decipher studies that use different statistical analyses. I've also noted, as have others, that the abstracts and conclusions don't always align with the results. I'm inferring reporters have a limited background in reading research publications. Are they just repeating the conclusions? Press release from pharma? Honest question.
What I assume happens is that journalists rush to skim over studies usually looking over easy places such as the abstract and conclusion to confirm their biases then report while picking out a few pieces from the results or methods just to set up what the study was. Then when one person reports on it everyone else reports on the first reporting, and then the study becomes established by virtue of many people reporting on said study even if the conclusions may be wrong.
Then, if you are the one person who post something critical of the study or provides additional context then you may be running foul to the newly established narrative even if it was done under false pretenses. So one person goes against the grain, other people cite the 10 other reports that validate the study when in reality the study may have been seriously misreported on, and thus the inaccuracies prevail.
It's highly unlikely that may journalists actually have a background in the field they report hence why it's hard for them to disseminate information to the public aside from absolutes or really banal coverage.
Helpful would be an official "study reading forum." So like researchreadinggate as opposed to resarchgate. I am not sure if there are any unofficial ones. But this would be a place where you are reading a study, have a question, post it, maybe get an answer. The following example will show how this can't possibly go wrong:
BOBINOMAHA:
Hi, on this chart, does bla bla bla example question?
SUEINSTLOUIS:
VIRUSES DON'T EXIST BOB!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
In seriousness, a good forum needs a certain percentage of humanity to be interacting with the subject; anything too esoteric and the forum turns into a backwater where the same three contributors answer the same questions with the same text snippets and it's random chance whether their approach is actually good or not. Likewise, reputation scoring (to weed out the SUEINSTLOUISes) probably needs a certain volume to work.
Well, Brian I suppose you should make sure other people can weigh in then! 😉
In all seriousness, it is a real issue if discussion of a study just gets hijacked by some narrative. That's at least why I would consider it for paid members, since that may help curate the discussion to be specifically around the paper.
But engagement is really difficult. When I started doing my topics posts I got pretty much no responses after the 1st or 2nd month of trying, so that backfired quite a bit. It would be difficult if only a select few people respond; however, I suppose I wouldn't know how much engagement I would get if I didn't make any attempt. 🤷‍♂️
It's more of a thought for now and I'd like to see people weigh in and see if it would be a viable option.
I wish journalists with no scientific background would not report on studies, and would realize they don't know enough to assess a study's validity. If journalists report on a study, they need help from someone who understands the methods used, and the statistics if statistical analysis is involved.
Unfortunately they are more focused on pushing out reports than on diving into studies. It's frustrating, but I would hope that people view journalists with some hesitation, and hopefully independent journalists encourage that deeper dive into papers.
The fact that most reporters don't even link to or provide the title of the paper they cover is abysmal reporting, in my opinion.
Off-topic, but since you're into those "molecule" things and what not you might be better at reviewing this one than me. Was brought up by Merogenomics youtube - https://www.mdpi.com/1420-3049/27/17/5405/htm
I do see why this would be an interesting enzyme to look into if blood clots are a serious concern.
Oh boy, I believe natto was in that Japanese study about masking that was circulated. Was there anything in particular that was of interest? Generally I prefer the drug/supplemental aspect of molecules but I can take a gander.
Well, ivermectin comes from a soil bacteria too, you know...
So I've been told... but in regards to that masking study it was just funny that the researchers had people eat natto and gave them natto breath to check for the bacteria.
Were you considering writing about nattokinase? It seems like an interesting topic.
So, to make things really exciting I would want to pin down the reason it works: what motif does nattokinase recognize, where is it on the spike protein, does this imply easy escape (or that Omicron already escapes it)? But I don't think there is any knowledge to pull from here yet. So I'm not sure if reporting on it fits the Unglossed brand.
At first glance it appears to be a serine protease, which means the catalytic site uses a serine residue to cleave peptide bonds, likely via nucleophilic attack on the amide bond, chopping up the peptide. The selectivity is certainly a concern: one of the assays in the supplemental material shows that incubating GAPDH, an enzyme that's part of glycolysis, with nattokinase led to cleavage, as evidenced by the disappearance of the GAPDH band.
Therefore, it may not be very selective. They do state this:
"The protease specificity of nattokinase would be low, because GAPDH, a housekeeping protein, was also degraded simultaneously in the in-vitro evaluation of nattokinase mixed with cell lysate (Supplemental Figure; Figure S2). On the other hand, when added to cells, it does not show any effect on cell viability and is expected to act as a protective agent on the cell surface. Further analysis of the degradation products of nattokinase using mass spectrometry is needed for understanding the proteolysis effects."
However, I would want to see further evidence of other enzymes/proteins that may be targeted. The mention of mass spectrometry is important since that would at least show whether there are specific amino acid motifs being targeted. So far it doesn't appear to be very selective on a cursory glance, and I haven't found any evidence of the cleavage site it targets. If such motifs exist, I would assume the effect holds up unless extensive mutation of those cleavage-site motifs occurs.
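To make the motif question concrete, here is a minimal sketch of the kind of analysis that would follow once mass spectrometry pinned down a preferred cleavage pattern. The motif and the sequence fragment below are invented for illustration; the paper establishes neither.

```
# Hypothetical sketch: scanning a protein sequence for a protease's
# preferred cleavage motif. The motif below is invented; nattokinase's
# actual specificity is not established in the paper.
import re

def find_cleavage_sites(sequence: str, motif: str) -> list[int]:
    """Return 0-based positions where the motif occurs in the sequence."""
    return [m.start() for m in re.finditer(motif, sequence)]

# Toy fragment standing in for part of a spike sequence (illustrative only)
spike_fragment = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"

# Invented motif: a hydrophobic residue followed by arginine or lysine,
# the kind of pattern mass spec on degradation products might reveal
motif = "[AVLIF][RK]"

print(find_cleavage_sites(spike_fragment, motif))
# If a variant mutated these positions away, that would be the "easy
# escape" scenario mentioned above
```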
So...ambiguous at best right now?
I did find this article. I skimmed it; they looked at 3 residues in the enzyme, introduced mutations, and saw how that affected the enzyme kinetics. However, it doesn't quite explain which sites the enzyme prefers, so it may require some extrapolation (a toy sketch of what such kinetics comparisons look like follows the link).
https://pubmed.ncbi.nlm.nih.gov/17673485/
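As a rough illustration of what "affected enzyme kinetics" means in papers like that one, here is a toy Michaelis-Menten comparison. Every number is invented; it only shows the shape of the comparison, not anything from the cited study.

```
# Toy Michaelis-Menten sketch: how a mutation that weakens substrate
# binding (higher Km) lowers reaction velocity. All numbers invented.
def velocity(s: float, vmax: float, km: float) -> float:
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

substrate = 50.0  # arbitrary substrate concentration
wild_type = velocity(substrate, vmax=100.0, km=20.0)
mutant = velocity(substrate, vmax=100.0, km=80.0)  # pretend mutation raises Km
print(f"wild type: {wild_type:.1f}, mutant: {mutant:.1f} (arbitrary units)")
```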
I liked that Igor mentioned you yesterday when he was comparing the recent lactation/vac shedding study to the one from a year or so ago
The vaccine shedding thing is something I'm still very hesitant about since I'd need more knowledge on that mechanism of aerosolizing. The study design (if it was the one about the families and the IgG antibodies) seemed really strange to me and it seems strange how spike could be released in such a manner, although I suppose an examination of the lungs of vaccinated individuals may provide some insight.
I wrote two articles about shedding of mRNA LNPs in breast milk:
First: https://igorchudov.substack.com/p/jama-vaccine-shedding-in-breast-milk
Second: https://igorchudov.substack.com/p/bill-gates-funded-scientists-found
Both mention your post about reading articles, the second even more so
Conflicts of interest are an issue reporters should always cover in their writing.
That would be very helpful, and I certainly tend to miss that section! But I think that sometimes also requires additional research, as funding is generally assumed to introduce bias but that bias may not be explicitly disclosed.
As an academic, finding out who the editors are (i.e. their funding, previous publications, etc.) is all part of the submission process, so a good investigative reporter should be doing exactly that too, and making it clear for the reader. Tbh, I think the whole journalistic learning environment needs a complete overhaul.
The problem is that the time it takes to do that investigating is time away from pushing out articles. Mainstream outlets are far too busy focusing on quantity rather than quality. In a more cautious world we wouldn't have reporting on studies immediately after release, but a few days or a week between the release and the reporting, which would provide ample time to report accurately.
I would really like to do the book-club-style study. The problem for me is that I read a lot of abstracts and a lot of content from doctors, etc., and almost always they are written as if we all know what they are talking about. For instance, here is an article snippet by Dr. Peter McCullough which is in English, but I really don't know the 'rules' for reading this. Most of his intros are impossible to read or understand, and he never gives links. Anyway, I have been writing software for 35+ years and I can tell you it's no different when I start talking about code: even in a general way, there is just no background in what I do for most people. I suspect it's much the same with medical studies for the average person.
https://www.americaoutloud.com/risks-of-vaccines-for-those-recovered-from-covid-19-krammer-raw-mathioudakis/
```
A medical study of United Kingdom healthcare workers who had already had COVID-19 and then received the vaccine found that they suffered higher rates of side effects than the average population. Rachel K. Raw, et al., Previous COVID-19 infection but not Long-COVID-19 is associated with increased adverse events following BNT162b2/Pfizer vaccination, medRxiv (preprint), (last visited June 21, 2021).
```
I wonder if it's based on his background and how he publishes papers. He does have some citations, but I wonder if it's the design of his website that mashes things together. He cites an author, with the citation right after, but then there's also the title of the paper with a link, and then the embedded article. That's the issue with the excerpt you provided: it shows the authors (Raw, et al.) followed by the citation ("Previous COVID-19 infection... last visited June 21, 2021") and then the title and then the article. It's all a bit too jumbled and makes it difficult to find where his actual thoughts on studies are.
Most background for a study will be found in the introduction, but that is heavily dependent on the researchers. They may provide very little background, in which case it's sometimes a good idea to check their links for more, or you may have to do separate research. That's the problem with studies outside of someone's field: the study may be written for peers rather than the general or scientific public.
Yep, that makes sense. I normally will follow links in web pages, his Telegram page is all text so that's more of a problem for me than the web page is. All good points.
If you choose to pursue it, I think this idea of teaching/discussing how to work with studies would be great.
I'll consider it. I was thinking possibly for paid members as it'll create a more cohesive environment that can focus solely on the study but I would like to see the response to it. In general, it's hard to get people to respond unless it's some sort of dramatic, clickbait post that isn't necessarily conducive to having open dialogue and discourse.
I haven't followed Dr. McCullough much, nor have I used Telegram, so I don't have any frame of reference, but I can see how this may be more of a fault in how the webpage is designed. I assume he would prefer to be able to include footnotes, such as how Substack allows them, and that may be more helpful to him.
If it's for paid members I think it would have more serious interaction, but then sometimes, to get more paid members, it might be worth doing a dual or triple tier, with an introduction to any particular study for everyone and the more in-depth discussion for paid.
Telegram is more of a super instant messenger; many post by phone and others post links to full web pages. I only pointed to him to illustrate that much of what people see is in a smaller, Twitter-like posting. It's more of a time thing: get something out and let people look into it, and that's all good.
As of now I'm thinking that, if I were to go through with this idea, I would post the study along with other material that may help provide background information, give it until the end of the week, and then have another post for people to discuss the article. Maybe try this once or twice a month? It would run alongside the other paid posts. It is a bit frustrating figuring out exactly what would work best to encourage more paid members, so I'm trying to test the waters and see.
If it's similar to Twitter I'll say that Twitter threads can be pretty obnoxious. The character limit just seems difficult to navigate in that setting.
Wow. I too have been thinking about this subject for some time. I tried looking up YouTube videos, but many of them were useless. They either had too much info or not enough.
The most difficult part of studies is understanding the numbers... I want to be able to tell if they are off, be able to spot errors, & have the ability to do some of the math myself.
I would love to get better acquainted with confidence intervals & hazard ratios lol (a quick sketch of that math is below).
A club is a great idea
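For anyone who wants to try that math themselves, here is a minimal sketch of the standard log-method 95% confidence interval for a risk ratio; hazard ratios reported in papers are read the same way (a CI that crosses 1 means the result is not statistically significant). All counts are made up.

```
# Minimal sketch: 95% confidence interval for a risk ratio using the
# standard log method. All counts below are made up for illustration.
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs group B, with its 95% CI (log method)."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Made-up example: 30/1000 events in one group vs 15/1000 in the other
rr, lo, hi = risk_ratio_ci(30, 1000, 15, 1000)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# A CI entirely above (or below) 1.0 is what "statistically significant"
# usually means in these papers
```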