Saturday, May 2, 2020

Politicians change the topics they address when informed of polls; exposure to public opinion research also shifts their substantive positions toward majority opinion

Does Public Opinion Affect Political Speech? Anselm Hager, Hanno Hilbig. American Journal of Political Science, May 1, 2020. https://doi.org/10.1111/ajps.12516

Abstract: Does public opinion affect political speech? Of particular interest is whether public opinion affects (i) what topics politicians address and (ii) what positions they endorse. We present evidence from Germany where the government was recently forced to declassify its public opinion research, allowing us to link the content of the research to subsequent speeches. Our causal identification strategy exploits the exogenous timing of the research's dissemination to cabinet members within a window of a few days. We find that exposure to public opinion research leads politicians to markedly change their speech. First, we show that linguistic similarity between political speech and public opinion research increases significantly after reports are passed on to the cabinet, suggesting that politicians change the topics they address. Second, we demonstrate that exposure to public opinion research alters politicians' substantive positions in the direction of majority opinion.

Discussion

This article has provided novel text‐analytic evidence to assess whether public opinion affects political speech. Drawing on evidence from Germany, we found that politicians change their speech markedly when exposed to public opinion research. Not only does their speech become more similar to the language used in the public opinion reports—a finding that points toward agenda setting—but they also adjust their substantive positions to the public's preferences expressed in the reports. The evidence thus brings clarity to one mechanism through which politicians connect with voters: speech.
Before reflecting on the substantive implications of the findings, two words of caution are in order. First, this article assessed agenda setting using cosine similarity, whereas substantive responsiveness was assessed using human coding. The latter measure, being hand-coded, is less controversial. Still, reducing nuanced public opinion reports and speeches, both of which seldom touch on just one topic, to a one-dimensional agreement scale is not without problems. Politicians may, for example, agree with some points made in an opinion report while disagreeing with others, which likely creates measurement error. Regarding the former, a critic might object that cosine similarity does not reflect a true change in agenda but merely small rhetorical adjustments. We have tried to address this concern by showing that (i) similarity increases are driven by substantively meaningful words and (ii) speechwriters do not plagiarize from the reports. Still, we must again caution that we detect agenda setting only along the intensive margin, a consequence of the local RD design.
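To make the agenda-setting measure concrete, the cosine-similarity idea can be sketched as follows. This is a minimal bag-of-words illustration using raw term counts; the paper's actual text-processing pipeline (tokenization, stop-word handling, term weighting) is not specified here, and the example report and speech strings are invented for illustration.

```python
# Minimal sketch: cosine similarity between two texts as bag-of-words
# term-count vectors. Illustrative only; not the authors' actual pipeline.
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine of the angle between the two term-count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical report and speech snippets:
report = "public opinion on pension reform and taxation"
speech = "the government will address pension reform and fair taxation"
print(cosine_similarity(report, speech))
```

A rising score after report dissemination would indicate that speeches increasingly share vocabulary with the reports, which is the intuition behind the agenda-setting test.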
Second, our attempt to make a causal argument deserves critical scrutiny. The consistent finding that cosine similarity and substantive agreement increase right after reports are given to cabinet members makes a causal interpretation intuitive. Yet, a skeptic might say that the observed changes are the product of a general shift in rhetoric. Although we do not believe that a few days should bring about such changes (indeed, the placebo and permutation tests paint a different picture), the criticism showcases the need to examine rhetoric in a more dynamic setting. Future research could model such changes with greater clarity, perhaps by benchmarking political speech against speech in the media. Relatedly, a skeptic might also object that the finding is tautological (i.e., that politicians both write the survey questions and time the dissemination of the research). We believe that the qualitative and quantitative evidence rule out this possibility. The pronounced coefficients underline that elected officials react to public opinion research. At a minimum, our study thus provides descriptive evidence that cabinet members systematically conduct public opinion research and subsequently change their speech.
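The logic of the permutation test mentioned above can be sketched in a few lines: compare the jump in similarity at the true dissemination date with the jumps obtained at randomly placed fake cutoffs. The data, window, and number of permutations below are all invented for illustration and do not reproduce the authors' analysis.

```python
# Hedged sketch of a permutation test for a jump at a known cutoff.
# Scores and cutoff placement are illustrative, not the paper's data.
import random

def observed_jump(scores, cutoff):
    """Mean post-cutoff score minus mean pre-cutoff score."""
    pre, post = scores[:cutoff], scores[cutoff:]
    return sum(post) / len(post) - sum(pre) / len(pre)

def permutation_p_value(scores, cutoff, n_perm=2000, seed=0):
    """Share of random cutoff placements whose jump is at least as large
    as the jump at the true cutoff (one-sided p-value)."""
    rng = random.Random(seed)
    obs = observed_jump(scores, cutoff)
    hits = sum(
        observed_jump(scores, rng.randrange(1, len(scores))) >= obs
        for _ in range(n_perm)
    )
    return hits / n_perm

# Toy series: similarity is flat, then jumps after the true cutoff at t=10.
toy_scores = [0.2] * 10 + [0.5] * 10
p = permutation_p_value(toy_scores, cutoff=10)
print(p)
```

A small p-value indicates that the jump at the true dissemination date is unusually large relative to jumps at arbitrary dates, which is the intuition behind ruling out a general drift in rhetoric.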
Having discussed these caveats, we want to briefly reflect on how our research may be expanded. If German politicians indeed adjust their speeches to public opinion, this holds important implications for the study of representative democracy. In times of increasing polarization, a potential follow-up question is whether citizens perceive such adjustments as deceptive or manipulative. The beginning of the 2000s saw U.S. pundits lament that American politicians were more interested in responding to public opinion than in crafting their own agenda. Such “finger-in-the-wind” responsiveness was portrayed as a symbol of elected officials' lack of courage (Medvic and Dulio 2004). In the German context, Angela Merkel has been described in The New Yorker as “the quiet German,” a politician who silently panders to public opinion (Packer 2014). Similarly, leaked cables show that U.S. diplomats labeled the chancellor “Teflon-Merkel.” Merkel's rhetoric, so the story goes, allows her to sidestep political controversy (Waterfield 2010). Sophisticated public opinion data are a double-edged sword. On the one hand, relying too heavily on public opinion research can turn political speech into a science that sidesteps truthful dialogue. On the other hand, knowledge about public preferences spurs responsiveness. These considerations showcase the need to further explore the relationship between elite speech and public opinion, mapping more fully the ways in which government officials use and interpret public preferences.
