Thursday, May 5, 2022

Social desirability bias leads some unvaccinated individuals to claim they are vaccinated; conventional survey studies thus likely overestimate vaccination coverage because of misreporting

Overestimation of COVID-19 Vaccination Coverage in Population Surveys Due to Social Desirability Bias: Results of an Experimental Methods Study in Germany. Felix Wolter et al. Socius: Sociological Research for a Dynamic World, May 4, 2022. https://doi.org/10.1177/23780231221094749

Abstract: In Germany, studies have shown that official coronavirus disease 2019 (COVID-19) vaccination coverage estimated using data collected directly from vaccination centers, hospitals, and physicians is lower than that calculated using surveys of the general population. Public debate has since centered on whether the official statistics are failing to capture the actual vaccination coverage. The authors argue that the topic of one’s COVID-19 vaccination status is sensitive in times of a pandemic and that estimates based on surveys are biased by social desirability. The authors investigate this conjecture using an experimental method called the item count technique, which provides respondents with the opportunity to answer in an anonymous setting. Estimates obtained using the item count technique are compared with those obtained using the conventional method of asking directly. Results show that social desirability bias leads some unvaccinated individuals to claim they are vaccinated. Conventional survey studies thus likely overestimate vaccination coverage because of misreporting by survey respondents.

Keywords: COVID-19, vaccine coverage, sensitive topics, social desirability, item count technique

Different methods tend to produce different results. The fact that COVID-19 vaccination coverage estimates differ depending on the method of data collection should come as no surprise to those in the scientific community. But such discrepancies are troublesome not only because they make it more difficult to develop prognoses and plan policy but also because they can undercut trust in governments and institutions, which is already at a premium in many regions in the ongoing pandemic.

Our analysis of the use of survey data in estimating vaccine coverage underlines those difficulties: although surveys may be useful in the context of many other types of vaccines, we argue that the topic of COVID-19 vaccination in late 2021 is too sensitive to rely solely on survey data for coverage rates. Although it cannot be ruled out that official statistics based on reporting by hospitals and physicians are also biased (perhaps because of incomplete or erroneous reporting), we showed in this investigation that COVID-19 vaccination coverage estimated on the basis of survey data is likely biased upward by social desirability. Providing individuals with an anonymous way to report their unvaccinated status resulted in an estimated vaccination coverage that was significantly lower than the one based on the conventional method of direct questioning (DQ).
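The logic of the item count technique can be sketched in a few lines. This is a minimal simulation with invented numbers, not the study's data or design: a control group counts how many of K innocuous items apply to them; a treatment group gets the same list plus the sensitive item ("I am not vaccinated"). Because only a total count is reported, no individual reveals their status, yet the difference in group means estimates the prevalence of the sensitive item.

```python
import random

random.seed(1)

K = 4                      # number of innocuous items (assumption)
TRUE_UNVACCINATED = 0.20   # assumed true prevalence, for the simulation only

def respond(treated: bool) -> int:
    # Each innocuous item applies with probability 0.5 (assumption).
    count = sum(random.random() < 0.5 for _ in range(K))
    # Treatment group additionally counts the sensitive item if it applies.
    if treated and random.random() < TRUE_UNVACCINATED:
        count += 1
    return count

n = 20_000
control = [respond(False) for _ in range(n)]
treatment = [respond(True) for _ in range(n)]

# Difference-in-means estimator: mean count in the treatment group
# minus mean count in the control group recovers the prevalence
# of the sensitive item without any individual disclosure.
est = sum(treatment) / n - sum(control) / n
print(f"estimated share unvaccinated: {est:.3f}")
```

The price of this anonymity is variance: the innocuous-item counts add noise to the estimate, which is why the article's authors stress statistical power below.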

However, there are some important limitations to note. First, this article is written within the context of Germany in the fall of 2021. The local situation may change in the future, and the topic of COVID-19 vaccination may not be as sensitive in other parts of the world. In some countries, vaccination coverage is extremely high: in Portugal and Singapore, for example, nearly 90 percent of adults were fully vaccinated as of November 2021. In those countries, the question of vaccine status is likely much less sensitive. After all, as Tourangeau and Yan (2007:860) noted, “a question about voting is not sensitive for a respondent who voted,” so a question about one’s vaccination status is likely not sensitive for someone who is vaccinated.

Another limitation of the study is that we cannot compare our survey estimates with the survey estimates of the RKI or any other institute, because our survey was a nonprobability sample of a particular group of Internet users. However, the randomized experimental design means that we can indeed compare estimates between groups (DQ and ICT) within the study. It also means that although we have an unbiased estimate of the sample treatment effect (with high internal validity resulting from a strict experimental setup), we have no guarantee that the population treatment effect is the same. This applies both to the German population and to populations in other countries. As we noted earlier, question sensitivity probably varies across different populations, and hence the treatment effect of applying ICT to survey questions on COVID-19 vaccination is likely to vary as well. Moreover, we cannot make any statements about the validity of estimates based on data collected directly from health officials and workers, such as those collected in the RKI’s Digital Vaccine Coverage Monitor. Incomplete data transferred from vaccination centers, hospitals, and physicians may lead to other forms of bias (especially if the quality of the reporting depends on other unobserved factors).

Another issue is statistical power. If we follow the advice of Blair, Coppock, and Moor (2020), our sample size implies a power of less than 80 percent for a true DQ-ICT difference of 10 percentage points. This is a notorious problem of ICT procedures, which always come with highly inflated standard errors compared with conventional estimates. For future studies, we strongly recommend ensuring sufficiently large sample sizes on the one hand and using more advanced ICT setups that boost the statistical efficiency of the estimates on the other (for some propositions, see Aronow et al. 2015).
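A back-of-the-envelope calculation makes the power problem concrete. This sketch uses a normal approximation and invented inputs (sample size, vaccination share, number of innocuous items), not the study's actual figures: the ICT difference-in-means estimator carries the variance of the innocuous-item counts on top of the variance of the sensitive item itself, so its standard error is several times larger than that of a direct question, and a 10-percentage-point gap is hard to detect.

```python
import math

def power_two_sided(diff: float, se: float, alpha: float = 0.05) -> float:
    """Power of a two-sided z-test to detect a true difference `diff`
    when the estimated difference has standard error `se`."""
    z = 1.959963984540054  # Phi^-1(1 - alpha/2) for alpha = 0.05
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return phi(diff / se - z) + phi(-diff / se - z)

n = 1000   # respondents per arm (assumption)
p = 0.85   # vaccinated share under direct questioning (assumption)
K = 4      # innocuous items, each applying w.p. 0.5 (assumption)

# DQ: standard error of a simple proportion estimate.
se_dq = math.sqrt(p * (1 - p) / n)

# ICT: the difference-in-means estimator adds the variance of the
# K innocuous item counts (K * 0.25 per arm) to the sensitive-item
# variance, inflating the standard error.
var_treatment = K * 0.25 + p * (1 - p)
var_control = K * 0.25
se_ict = math.sqrt((var_treatment + var_control) / n)

# Standard error of the DQ-ICT gap, and the power to detect a 10pp gap.
se_gap = math.sqrt(se_dq**2 + se_ict**2)
print(f"SE(DQ)  = {se_dq:.4f}")
print(f"SE(ICT) = {se_ict:.4f}")
print(f"power for a 10pp DQ-ICT gap: {power_two_sided(0.10, se_gap):.2f}")
```

Under these illustrative numbers the ICT standard error is roughly four times the DQ standard error, and the power for a 10-point gap falls well short of 80 percent, consistent with the authors' call for larger samples or more efficient ICT designs.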

More work is needed to discover further sources of bias (e.g., reaching nonnative speakers in surveys) to get a better idea of the true vaccination coverage. Until then, discrepancies in statistics will continue to exist for well-known reasons such as sampling error, survey biases, systematic under- and overreporting by health organizations, and others. The ultimate goal should be to work toward understanding as many sources of bias and inaccuracy as possible in order to provide the general public with honest and transparent information and avoid confusion and the potential for misrepresentation of statistics.
