Well, I read the full text of the PDF. Not the abstract, the full text.
Remember this is an unreviewed early release, and the final version of this paper may differ from the version we're seeing now, once the reviewers' comments are taken into account. There's even a possibility that the paper will be rejected for publication if it's thought to be seriously flawed. That's how the review process works.
If I were asked to review this paper (I wouldn't be, because I have no expertise in this field, but I am regularly asked to review papers in my own area of expertise, so I'm familiar with the process), here are the major issues I'd have and that I'd want the authors to more fully address:
1. They state that they ran their test on 371 stored pre-COVID blood samples, all of which should have tested negative. Only 369 were negative, meaning there were 2 false positives. That's a roughly 0.5% false positive rate. (Realize this is a newly developed test, and its performance characteristics were unknown prior to use in this study.) If you extrapolate that rate to their 3,300-person sample, you'd expect roughly 18 false positive results, and because the validation set is small, the plausible range around that number is considerably wider. There were only 50 positive test results in their study population, so false positives could account for a substantial fraction of them. The number of false positives should be accounted for by their calculated test specificity; however, I question the statistics they used to calculate that specificity. They tested several groups with "known" COVID status, but most of those groups were small (~30 people) with no false positives. In the only large sample, we see the 0.5% false positive rate. I'd ask the authors to address and clarify this. (A quick sanity check of this arithmetic is sketched below.)
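To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python using only the figures quoted above (369/371 negative in the validation set, a ~3,300-person sample, 50 observed positives). The exact (Clopper-Pearson) confidence interval is my choice of method for illustration, not anything the authors used:

```python
# Back-of-the-envelope check of the specificity numbers, assuming a
# validation set of 371 pre-COVID samples with 2 false positives and a
# study sample of ~3,300 participants with 50 positives (figures as
# quoted above).
from scipy.stats import beta

n_validation = 371   # pre-COVID samples that should all test negative
false_pos    = 2     # samples that nevertheless tested positive
n_study      = 3300  # approximate study sample size
observed_pos = 50    # positive results reported in the study sample

# Point estimate of the false positive rate.
fpr = false_pos / n_validation            # ~0.0054, i.e. ~0.54%

# Exact (Clopper-Pearson) 95% CI for the false positive rate.
lo = beta.ppf(0.025, false_pos, n_validation - false_pos + 1)
hi = beta.ppf(0.975, false_pos + 1, n_validation - false_pos)

print(f"FPR point estimate: {fpr:.2%}")          # ~0.54%
print(f"95% CI for FPR: ({lo:.2%}, {hi:.2%})")   # roughly (0.07%, 1.9%)

# Expected false positives in the study sample under each bound.
print(f"Expected false positives: {fpr * n_study:.0f} "
      f"(roughly {lo * n_study:.0f} to {hi * n_study:.0f}) "
      f"vs {observed_pos} observed positives")    # ~18 (roughly 2 to 64)
```

The point is that a validation set of this size leaves the false positive rate uncertain enough that, at the upper end of the interval, false positives alone could plausibly account for all 50 observed positives.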
2. It's unclear to me whether the new test used in this study detects IgG or IgM or both. I think it's both, but this should be clearly stated by the authors. It matters because IgG and IgM antibodies are typically present at different phases of infection, with IgG typically appearing later than IgM. It would also be helpful for the authors to include in the discussion what is known about the kinetics of the immune response across immunoglobulin classes in COVID-19 infection: for example, IgM positivity being seen roughly days 3-60 post infection and IgG positivity starting around days 7-14. This may still be unknown for COVID-19, but if so, saying that explicitly would still help the reader's understanding of the paper. FYI, these points don't bear on the paper's fundamental hypothesis, but they are useful information with important public health implications that the authors likely have on hand, and they could be addressed in a few sentences, so IMO including the info is better than not including it.
3. Sample bias. Did the authors inadvertently recruit subjects more likely to test positive than the general population? The test subjects were recruited via Facebook and were not representative of the overall county demographics (more young white women; fewer Asian, Hispanic, and elderly residents), and the authors attempted to correct for that statistically. However, their corrections may not have been adequate. Keeping in mind that FB selects for circles of people with common interests, did information about this study's availability get shared early and widely among groups more likely to test COVID positive? For example, people who had been ill and couldn't get tested otherwise, or ER nurses hoping to learn that they were already positive and didn't need to worry as much if PPE was in short supply.

Additionally, the study was set up so that each participant could bring one child, and around 800 children were enrolled. That means that of the 3,300 people in the study, roughly 1,600 (800 children plus 800 parents) came from shared households: half the study sample. Since COVID-19 is highly contagious, most positives would be expected to expose others in their household. So if your sample is already biased toward including positives, testing multiple people per household will yield more positives than testing the same number of people from separate households would. (A toy simulation of this effect is sketched below.)

The authors also asked test subjects about previous COVID symptoms, so it would be nice if they reported the number of positive samples among participants who reported previous COVID symptoms vs. those who didn't. Positive tests in the latter group are more relevant to the study's conclusion (that there's a high rate of inapparent infection) than positive tests in the former.
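To illustrate the household point, here's a toy Monte Carlo sketch. Every number in it (1.5% true prevalence, positives being 3x as likely to volunteer, a 50% household secondary attack rate) is an illustrative assumption of mine, not a figure from the paper; the only thing carried over is a ~3,300-person sample with half arriving as parent-child pairs:

```python
# Toy model of how household recruitment can propagate selection bias.
# All rates below are illustrative assumptions, not figures from the paper.
import random

random.seed(0)
P_TRUE = 0.015  # assumed true population prevalence
BIAS   = 3.0    # assumed factor by which positives over-volunteer
SAR    = 0.50   # assumed household secondary attack rate
PAIRS  = 825    # parent+child pairs (~1,600 paired people, half the sample)
TRIALS = 2000

# Prevalence among self-selected volunteers under the bias assumption.
p_vol = BIAS * P_TRUE / (BIAS * P_TRUE + (1 - P_TRUE))

def measured_prevalence(companions_share_household: bool) -> float:
    positives, n = 0, 0
    for _ in range(PAIRS):
        vol = random.random() < p_vol  # biased volunteer
        if companions_share_household:
            # Child's status is correlated with the volunteer's via SAR.
            child = (random.random() < SAR) if vol else (random.random() < P_TRUE)
        else:
            # Counterfactual: companion is an unrelated population draw.
            child = random.random() < P_TRUE
        positives += vol + child
        n += 2
    return positives / n

household = sum(measured_prevalence(True) for _ in range(TRIALS)) / TRIALS
unrelated = sum(measured_prevalence(False) for _ in range(TRIALS)) / TRIALS
print(f"true prevalence:                {P_TRUE:.1%}")    # 1.5%
print(f"measured, unrelated companions: {unrelated:.1%}") # ~2.9%: bias diluted
print(f"measured, household companions: {household:.1%}") # ~4.0%: bias propagated
```

Under these made-up numbers, household companions keep the measured prevalence well above what the same biased volunteers plus unrelated companions would show. The pairing doesn't create the selection bias; it just prevents the second half of the sample from diluting it.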
Before people express exasperation that I'm not "being positive" or that I don't want "good news": neither of those things is relevant to critically reading a scientific article. The question is whether the authors' data support their conclusions.
I'd be quite interested to see the final peer-reviewed version of this paper. I think the authors chose a quick, expedient way to recruit test subjects, which makes sense given the desire to get results out ASAP in the midst of a pandemic. But in doing so, they may have introduced significant bias into their study.