Thoughts on economics and liberty

Notes on the Santa Clara serological study – IFR 0.17%

The study:

“The raw prevalence of antibodies to SARS-CoV-2 in our sample was 1.5%.”

Given the limitations of such tests at such low levels of prevalence, I’m considering ignoring this study.

Paper: Concerns with that Stanford study of coronavirus prevalence

It’s perfectly plausible (in the 95% CI sense) that the shocking prevalence rates published in the study are mostly, or even entirely, due to false positives.
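To see why false positives could swamp the signal, here is a minimal sketch. The sample size (3,330) and raw positive count (50) are from the preprint; the specificity values are illustrative, chosen to show how a specificity near 98.5% alone would generate roughly the entire observed positive count even at zero true prevalence:

```python
# How many false positives would the test produce on its own,
# if nobody in the sample had actually been infected?
n_tests = 3330       # approximate sample size in the preprint
raw_positives = 50   # raw positive antibody tests reported (~1.5%)

for specificity in (0.985, 0.995, 1.0):
    false_pos = n_tests * (1 - specificity)
    print(f"specificity {specificity:.1%}: "
          f"~{false_pos:.0f} expected false positives "
          f"out of {raw_positives} raw positives")
```

At 98.5% specificity the expected false positives (~50) equal the entire raw positive count, which is exactly the "mostly, or even entirely, due to false positives" scenario.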


At the time of this writing, NYC has about 9,000 recorded coronavirus deaths. Multiply by 600 (roughly 1 ÷ 0.17%, the infections implied per death at the study's IFR) and you get 5.4 million. OK, I don't think 5.4 million New Yorkers have been exposed to coronavirus. New York only has 8.4 million people total! [Source]
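The NYC sanity check above can be written out directly. The death count (~9,000) and population (8.4 million) are from the paragraph above; the IFR of 0.17% is the figure attributed to the study:

```python
# If the study's implied IFR of 0.17% were right, how many New Yorkers
# would have to be infected to account for ~9,000 deaths?
nyc_deaths = 9000
nyc_population = 8.4e6
ifr = 0.0017  # 0.17%

implied_infections = nyc_deaths / ifr
print(f"implied infections: {implied_infections / 1e6:.1f} million")
print(f"share of NYC population: {implied_infections / nyc_population:.0%}")
```

The implied infection count (~5.3 million) is well over half the city's entire population, which is the implausibility the note is pointing at.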


Selection bias. As Rushton wrote, it could be that people who’d had coronavirus symptoms were more likely to avail themselves of a free test. [Source]

Volunteer surveys are a type of convenience sample. Santa Clara investigators used targeted Facebook ads to recruit participants to visit drive-thru test sites. Quotas were established per zip code to limit overrepresentation. [Source]

Further: "Recruiting through Facebook likely attracted people with COVID-19–like symptoms who wanted to be tested, boosting the apparent positive rate." [Source] The study also had relatively few participants from low-income and minority populations, meaning the statistical adjustments the researchers made could be way off. "I think the authors of the paper owe us all an apology," wrote Columbia University statistician and political scientist Andrew Gelman in an online commentary. The numbers "were essentially the product of a statistical error." [Source]

My question: what about children?


One of the issues outside experts raised with a widely criticized study out of Santa Clara County, Calif., last week was that the researchers recruited participants through Facebook ads. They say this approach could have drawn people who thought they had been infected and wanted confirmation — and wouldn’t necessarily lead to a sampling that represented the county’s population as a whole. Instead, it could have resulted in infection rates that were misleadingly high.

“It biases your sample very much toward people who want to be tested, who might suspect they’ve had it, and that can lead you to overestimate the number of people who have actually been exposed,” said William Hanage, an epidemiologist at Harvard’s T.H. Chan School of Public Health. [Source]



Because the absolute numbers of positive tests were so small, false positives may have been nearly as common as real infections. [Source]

  • Refutation by the authors: Bhattacharya says he is preparing an appendix that addresses the criticisms. But, he says, "The argument that the test is not specific enough to detect real positives is deeply flawed." [Source]


But when the team that led the Santa Clara County study reported that the case count could be 50 to 85 times higher than the number of confirmed cases, that raised skepticism. If that were truly the case, then local hospitals would have been far more inundated with patients than they had been, outside researchers argued. Other estimates have pegged total cases as 10 to 20 times higher than confirmed cases. [Source]





A California serology study of 3300 people released last week in a preprint also drew strong criticisms. The lead authors of the study, Jay Bhattacharya and Eran Bendavid, who study health policy at Stanford University, worked with colleagues to recruit the residents of Santa Clara county through ads on Facebook. Fifty antibody tests were positive—about 1.5%. But after adjusting the statistics to better reflect the county’s demographics, the researchers concluded that between 2.49% and 4.16% of the county’s residents had likely been infected. That suggests, they say, that the real number of infections was as many as 80,000. That’s more than 50 times as many as viral gene tests had confirmed and implies a low fatality rate—a reason to consider whether strict lockdowns are worthwhile, argue Bendavid and co-author John Ioannidis, who studies public health at Stanford.
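The headline figures in that paragraph can be roughly reproduced. The adjusted prevalence range (2.49%–4.16%) is from the study; the county population of ~1.93 million and the confirmed case count of ~1,000 at the time are my assumptions, not stated in the excerpt:

```python
# Rough reproduction of the preprint's headline numbers.
population = 1_930_000    # assumed Santa Clara county population
confirmed_cases = 1_000   # assumed confirmed cases at the time

for prevalence in (0.0249, 0.0416):
    infections = population * prevalence
    print(f"prevalence {prevalence:.2%}: "
          f"~{infections:,.0f} infections, "
          f"{infections / confirmed_cases:.0f}x confirmed cases")
```

The upper bound works out to roughly 80,000 infections, i.e. the "more than 50 times as many as viral gene tests had confirmed" claim, under these assumed inputs.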

On the day the preprint posted, co-author Andrew Bogan—a biotech investor with a biophysics Ph.D.—published an op-ed in The Wall Street Journal asking, “If policy makers were aware from the outset that the Covid-19 death toll would be closer to that of seasonal flu … would they have risked tens of millions of jobs and livelihoods?” The op-ed did not initially disclose his role in the study. [Source]



Take, for example, the study conducted in Santa Clara, California. Researchers at Stanford put out ads on Facebook, asking people to volunteer to participate.

“The problem is there are people who will think, ‘Oh, yeah, I had this nasty flu, or cough, or whatever, and I think I had it.’ And if you said to them, ‘Would you like to get tested?’ They would say, ‘Abso-frickin-lutely!’” said Marm Kilpatrick, a professor at the University of California at Santa Cruz who studies infectious diseases. Conversely, people who felt totally healthy could be less inclined to participate. “So there’s a differential excitement to go get tested, and if that leads to the first group being at a higher chance of being participants in the study, then you’ve just totally blown your estimates.”

Contrary to Kilpatrick’s concern, Dr. Jay Bhattacharya, senior author of the Santa Clara study, said in an email that while “volunteer bias is certainly a potential problem in any survey that recruits participants the way we did … in our study, the evidence points in the direction of healthy volunteer bias” because people in “wealthier and healthier” ZIP codes signed up faster. Bhattacharya said his team made adjustments in its calculations to represent the county properly by ZIP code, race and sex, and thinks it is “still likely underestimating prevalence because of healthy volunteer bias.” [Source]




Sanjeev Sabhlok
