It’s all over the news. “Hundreds of thousands in L.A. County may have been infected with coronavirus, study finds.” “More L.A. County Residents Likely Infected With Coronavirus Than Thought, Study Finds.” The fact that a study has concluded that 4.1% of people in Los Angeles have antibodies to SARS-CoV-2 is noteworthy, but, among the epidemiological community, the response involves more consternation than fascination. The problem is that, despite the headlines, the authors have not published a study. They put out a press release with a result and no way to evaluate it.
The release of research papers as preprints before they are peer-reviewed is not new and has its detractors, but at least those studies are available. The decision by the lead author of the LA study to publish research results as a press release takes this to a new and dangerous level, by not even bothering to provide a paper at all. That means there is no way for the research community to react to the study. How can the lay media possibly vet a story that epidemiologists can’t even evaluate? What we can discern raises some red flags.
In an effort to understand this study, I contacted USC to try to get more information about the methods used. I got the following response. “We don’t have additional information to share at this point. When the full study comes out at a later date, I would be happy to share that.” In what world is it OK to release the results of a research study when the methods are confidential?
As noted in my previous post, one of the most difficult challenges for a study like this is getting a truly random sample. The only information we have on how the data were collected is that the authors used a proprietary database from a market research firm. No information is provided on the database, how many people were contacted, or what percentage of them participated. Proprietary is not a term that is comforting to see in a research paper; it suggests that, even if the authors did publish their methods, we still wouldn't really know what they did. So we have no idea what the sample represents, and even a full methods section might not tell us.
Often, a look at the research team and their previous publications can provide some confidence in the study quality, but in this case it is not particularly reassuring. The lead on the study is a health economist who has no training in epidemiology, and his only remotely epidemiological publications are a Wall Street Journal op-ed on the need for seroprevalence studies and a book chapter titled "Does Health Insurance Make You Fat?"
The rest of the research team includes two more health economists from Stanford. (I don’t know why one health economist wasn’t enough, but maybe I should feel qualified to investigate the macroeconomic impact of fiscal stimulus in the context of the pandemic.) Again, not much epidemiology from these two (one was co-author of the Insurance/obesity book chapter), but on March 24, the two of them published an op-ed in the Wall Street Journal entitled, “Is the Coronavirus as Deadly as They Say?”. A week later, they conducted a seroprevalence survey in Santa Clara County. Just ten days later, they released the results of that study as a preprint, which concluded, hold on to your seat, the coronavirus is not as deadly as they say. I do not mean to suggest that they fabricated data, but it should be noted that they also used a proprietary marketing tool to assemble a sample, this time from Facebook. The same blind faith in market research methods suggests a lack of experience in epidemiology. Somewhere in this is a joke about how many economists it takes to screw up an epidemiological study.
They do have the director of the Sports Medicine Research and Testing Laboratory, who ran a study looking for SARS-CoV-2 infections in Major League Baseball players, so the team has the laboratory chops for this. But MLB players are people who risk their jobs if they don't participate, so that experience offers no help with the epidemiology of sampling a general population.
In fact, there is only one real epidemiologist on the LA team, a physician/epidemiologist from the County Department of Public Health. His presence on the team is the one thing giving it credibility, but the intense spotlight that the lead author has focused on himself makes it clear that this lone epidemiologist was not responsible for the decision to publish the results without publishing the study. I hear through the grapevine that there is a paper, but that the journal won't allow any details about the study to be released. Given that the top medical journals are allowing preprints on this issue, and that two of the authors already released a preprint on the Santa Clara survey, it will be interesting to learn which journal believes it is acceptable to pre-release study results but not methods.
There is no question that seroprevalence studies such as this will play a key role in informing policy decisions as we re-open our economy, and it may well be that the rates of infection are as high as those presented in the LA study. But the devil is in the details, and, for the moment, no one is allowed to see the devil behind the curtain. As a result, we have no way of knowing what the data splattering the internet actually mean.
There is a reason that science moves forward at a slow, methodical pace. The review and vetting of information are critical to the scientific process. I understand the unprecedented sense of urgency surrounding the pandemic, but releasing numbers that cannot be evaluated will never advance understanding. Once numbers are out in the corona-sphere, they are impossible to reel back in. To be clear, I am sure the authors are all intelligent and thoughtful and their intentions are good, but the decision to circumvent the vetting process is deeply disturbing and should be harshly condemned. The only thing promoted by this approach is confusion, and confusion in a time of crisis is a recipe for disaster.