Early COVID-19 research is riddled with poor methods and low-quality results: a problem for science that the pandemic worsened but did not create

By Dennis M. Gorman, Texas A&M University | February 23, 2024

Early in the COVID-19 pandemic, researchers filled journals with studies on the then-novel coronavirus. Many journals streamlined their peer review process for COVID-19 articles while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to distinguish valid and useful research within the vast amount of rapidly disseminated information.

But when I reviewed 74 COVID-19 articles published in 2020 in the top 15 general public health journals listed on Google Scholar, I found that many of these studies used low-quality methods. Several other reviews of studies published in medical journals also showed that much of the early COVID-19 research used poor research methods.

Some of these articles have been cited many times. For example, the most cited public health publication listed on Google Scholar used data from a sample of 1,120 people, mostly well-educated young women, collected from social media over three days. Findings based on such a small, self-selected convenience sample may not generalize to a broader population. And because the researchers conducted more than 500 analyses of the data, most of the statistically significant results are likely chance findings. Nevertheless, this work has been cited more than 11,000 times.
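To see why, consider a rough illustrative calculation. Assume the conventional significance threshold α = 0.05 and independent tests; both are assumptions made for this sketch, not details reported in the study. With m = 500 tests:

E[false positives] = m × α = 500 × 0.05 = 25

P(at least one false positive) = 1 − (1 − α)^m = 1 − 0.95^500 ≈ 1

In other words, roughly two dozen “statistically significant” findings would be expected even if no real effects existed at all.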

A highly cited article is one that many people have mentioned in their own work. But citation counts bear little relation to research quality, not least because researchers and journals can manipulate these metrics. Heavy citation of low-quality research further undermines public trust in science by increasing the likelihood that weak evidence will be used to inform policy.

Methodology is important

I am a public health researcher with a long-standing interest in research quality and integrity. That interest stems from my belief that science can help solve important social and public health problems. Unlike the anti-science movement, which spreads misinformation about successful public health measures such as vaccines, I believe that rational critique is fundamental to science.

The quality and integrity of research depends largely on its methods. Any study design must have certain characteristics in order to provide valid and useful information.

For example, researchers have known for decades that in studies evaluating the effectiveness of an intervention, a control group is necessary to know whether any observed effects can be attributed to the intervention.

Systematic reviews, which combine data from existing studies, should explain how the researchers determined which studies to include, how they assessed the quality of those studies, how they extracted the data, and whether they preregistered their protocols. These features are necessary to ensure that the review covers all available evidence and tells the reader which of it is worth considering and which is not.

Certain types of studies, such as one-time surveys of convenience samples that are not representative of the target population, collect and analyze data in ways that do not allow researchers to determine whether a variable causes a particular outcome.

All study designs have standards that researchers can consult. But adhering to those standards slows research down. Having a control group doubles the amount of data that must be collected, and identifying and thoroughly examining every study on a topic takes longer than superficially examining a few of them. Representative samples are harder to assemble than convenience samples, and collecting data at two points in time requires more work than collecting it all at once.

Studies comparing COVID-19 articles to non-COVID-19 articles published in the same journals found that COVID-19 articles tended to have lower quality methods and were less likely to meet reporting standards than non-COVID-19 articles. COVID-19 articles rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant that there were no safeguards against sifting through the data to find “statistically significant” results that could be selectively reported.

Such methodological issues were likely overlooked in the significantly shortened peer review process for COVID-19 articles. One study estimated the average time from submission to acceptance at 13 days for 686 COVID-19 articles, compared with 110 days for 539 articles published in the same journals before the pandemic. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 articles had peer review processes of approximately three weeks.

Publish or perish culture

These quality control issues existed before the COVID-19 pandemic. The pandemic has pushed them into overdrive.

Journals tend to favor positive, “new” findings: results that show a statistical relationship between variables and supposedly identify something previously unknown. Because the pandemic was new in so many ways, it gave some researchers an opportunity to make bold claims about how COVID-19 would spread, what its effects on mental health would be, and how it could be prevented and treated.

Many researchers feel pressure to publish articles in order to advance their careers. South_agency/E+ via Getty Images

Academics have operated under a publish-or-perish incentive system for decades; the number of articles they publish is part of the criteria used to evaluate hiring, promotion, and tenure. The flood of COVID-19 information of mixed quality provided an opportunity to boost publication counts and citation metrics, as journals sought out and quickly reviewed COVID-19 articles, which were more likely to be cited than non-COVID-19 articles.

Online publishing has also contributed to the deterioration of research quality. Traditional academic publishing was limited in the number of articles it could produce because journals were typically packaged into a printed, physical issue produced once a month. In contrast, some of today’s online megajournals publish thousands of articles per month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is fraught with risk. It could be misinterpreted as adding fuel to the raging fire of anti-science. My answer is that a critical, rational approach to knowledge production is in fact fundamental to the practice of science and to the functioning of an open society capable of solving complex problems such as a worldwide pandemic.

Publishing large amounts of misinformation under the guise of science during a pandemic drowns out true and useful information. At worst, it can lead to poor public health practices and policies.

Science done right produces knowledge that allows researchers and policymakers to better understand the world and test ideas about how to improve it. That requires critically examining the quality of a study’s design, statistical methods, reproducibility, and transparency, not counting how many times the work has been cited or tweeted about.

Science relies on a slow, thoughtful, and meticulous approach to collecting, analyzing, and presenting data, especially if it is to inform effective public health policy. Thoughtful, rigorous peer review is unlikely to happen for articles that appear in print only three weeks after they were first submitted for review. And disciplines that reward quantity of research over quality are less likely to maintain scientific integrity during a crisis.

Rigorous science requires careful thought and attention, not haste. Assembly/Stone via Getty Images

Public health draws heavily from disciplines experiencing replication crises, such as psychology, biomedical science, and biology. Its incentive structures, study designs, and analytical methods are similar to those of these disciplines, with little emphasis on transparent methods and replication. Much public health research on COVID-19 shows that the field suffers from similarly low-quality methods.

Reexamining how the discipline rewards its scholars and evaluates its scholarship could help it better prepare for the next public health crisis.

This article is republished from The Conversation, an independent, nonprofit news organization providing facts and analysis to help you understand our complex world.

Written by: Dennis M. Gorman, Texas A&M University.


Dennis M. Gorman does not work for, consult for, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic duties.
