Monday, March 1, 2021

Communicating extreme forecasts... Scientists mention uncertainty far more frequently than non-scientists; thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'

Apocalypse now? Communicating extreme forecasts. David C. Rode; Paul S. Fischbeck. International Journal of Global Warming, 2021 Vol.23 No.2, pp.191 - 211. DOI: 10.1504/IJGW.2021.112896

Abstract: Apocalyptic forecasts are unique. They have, by definition, no prior history and are observed only in their failure. As a result, they fit poorly with our mental models for evaluating and using them. However, they are made with some frequency in the context of climate change. We review a set of forecasts involving catastrophic climate change-related scenarios and make several observations about the characteristics of those forecasts. We find that mentioning uncertainty results in a smaller online presence for apocalyptic forecasts. However, scientists mention uncertainty far more frequently than non-scientists. Thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'. Also, the desire among many climate change scientists to portray unanimity may enhance the perceived seriousness of the potential consequences of climate catastrophes, but paradoxically undermine their credibility in doing so. We explore strategies for communicating extreme forecasts that are mindful of these results.

Keywords: apocalypse; climate change; communication; extreme event; forecast; forecasting; global warming; media; policy; prediction; risk; risk communication; uncertainty.


5 Implications for policy and risk communication

Uncertainty is a core challenge for climate change science. It can undermine public engagement (Budescu et al., 2012) and form a barrier to public mobilisation (Pidgeon and Fischhoff, 2011). Our findings in this paper support these results and suggest that the exclusion of uncertainty from communication of apocalyptic climate-related forecasts can increase the visibility of the forecasts. However, the increased visibility comes at the cost of emphasising the voices of speakers without a scientific background. But focusing only on the quantity of communications, and not the ‘weight’ attached to them, neglects the important role that their credibility plays in establishing trust.

Trust (in subject-matter authorities and in climate research) influences perceived risk (Visschers, 2018; Siegrist and Cvetkovich, 2000). The impact of that trust is significant. Although belief in the existence of climate change remains strong, belief that its risks have been exaggerated has grown (Wang and Kim, 2018; Poortinga et al., 2011; Whitmarsh, 2011). Gaps have also emerged between belief in climate change and estimates of the seriousness of its impact (Markowitz and Guckian, 2018). To the extent that failed predictions damage that trust, the public’s perception of climate-related risk is altered. If the underlying purpose of making apocalyptic predictions is to recommend action, and if the predictions fail to materialise, the wisdom of the recommendations based on those predictions may be called into question. Climate science’s perceived value is thereby diminished. If the perceived value (or the certainty of that value) is diminished, policy action is harder to achieve.

It is not simply the presence of uncertainty that is an impediment; communications characterised by ‘hype and alarmism’ also undermine trust (Howe et al., 2019; O’Neill and Nicholson-Cole, 2009). The continual failure of the predictions to materialise may be seen to validate the public’s belief that such claims are in fact exaggerated. Although such beliefs may be the result of outcome bias (Baron and Hershey, 1988), recent evidence has also suggested that certain commonly accepted scientific predictions may indeed be exaggerated (Lewis and Curry, 2018). The model of belief we presented in Subsection 2.4 demonstrates that observing only failures will inevitably reduce subjective beliefs about apocalyptic risks. To build trust, any forecasts made must be ‘scientific’ – that is, capable of being observed to be either correct or incorrect (Green and Armstrong, 2007). Under such circumstances, they should also incorporate clear statements acknowledging uncertainty, as doing so may work to increase trust (Joslyn and LeClerc, 2016). It is important to provide settings where the audience can ‘calibrate’ its beliefs. “A climate forecast can only be evaluated and potentially falsified if it provides a quantitative range of uncertainty” [Allen et al., (2013), p.243]. The acknowledgement of uncertainty should include both worst-case and best-case outcomes (Howe et al., 2019).
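The belief-updating dynamic described above can be made concrete with a small Bayesian sketch. The prior and the conditional probabilities below are illustrative assumptions, not values from the paper; the point is structural: when a predicted catastrophe can only be observed to fail (it either fails to occur by its deadline, or no one is left to observe it), each passed deadline can only lower the observer's probability that the apocalyptic model is correct.

```python
def update_after_failure(prior, p_event_if_true, p_event_if_false=0.0):
    """Posterior P(model correct) after a predicted event fails to occur.

    p_event_if_true  -- assumed P(event by deadline | apocalyptic model correct)
    p_event_if_false -- assumed P(event by deadline | model wrong)
    """
    p_fail_if_true = 1.0 - p_event_if_true
    p_fail_if_false = 1.0 - p_event_if_false
    numerator = prior * p_fail_if_true
    return numerator / (numerator + (1.0 - prior) * p_fail_if_false)

belief = 0.5                      # illustrative prior belief in the model
trajectory = [belief]
for _ in range(5):                # five successive failed deadlines
    belief = update_after_failure(belief, p_event_if_true=0.3)
    trajectory.append(belief)
```

With these assumed numbers, belief falls monotonically from 0.50 to about 0.14 after five failed deadlines; because failure is the only observable outcome, no sequence of observations can raise it.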

One key to increasing credibility is to build up a series of shorter, simpler (non-apocalyptic) predictions (Nemet, 2009). Instead of predicting solely an apocalyptic event 50 years out, offer a series of contingent forecasts of shorter characteristic time (Byerly, 2000) that lead toward the ultimate event. Communications about climate change – and especially climate change-related predictions – should emphasise areas of the science that are less extreme in outcome but more tangibly certain in likelihood (Howe et al., 2019). This implies, inter alia, that compound forecasts of events and of the consequences of those events should be separated. The goal may even be to exploit an outcome bias in decision making by moving from small- to large-scale predictions. By establishing a successful track record of smaller-scale predictions, validated with ex post evaluations of forecast accuracy, the public may be more inclined to trust the larger-scale predictions – even when such predictions are inherently less certain. This approach has been advocated directly by Pielke (2008) and Fildes and Kourentzes (2011) and supports the climate prediction efforts of Meehl et al. (2014), Smith et al. (2019), and others.

To that end, we propose four concrete steps that can be taken to improve the usefulness of extreme climate forecasts. First, the authors of the forthcoming Sixth Assessment Report of the IPCC should be encouraged to tone down ‘deadline-ism’ (Asayama et al., 2019). Forecasters should make an effort to influence the interpretation of their forecasts, for example by correcting media reporting of them. Sequential releases of the IPCC’s Assessment Reports should consider calling out particularly erroneous or incomplete interpretations of statements from previous Assessment Reports.

Second, given the extensive evidence about the limited forecasting abilities of individual experts (Tetlock, 2005), forecasters should give more weight to the unique ability of markets to serve as efficient aggregators of belief in lieu of negotiated univocality. So-called prediction markets have a strong track record (Wolfers and Zitzewitz, 2004). Although they have been suggested multiple times for climate change-related subjects (Lucas and Mormann, 2019; Vandenbergh et al., 2014), they have almost never been used. An exception is the finding that pricing in weather financial derivatives is consistent with the output of climate models of temperature (Schlenker and Taylor, 2019).

Third, efforts to provide reliable mid-term predictions should be encouraged. The multi-year and decadal prediction work of Smith et al. (2019) and Meehl et al. (2014) is in this direction. But what should (also) be developed are repeated and sequential forecasts, in order to facilitate learning about the forecasting process itself: not just how current climate forecasting models perform in hindcasts, but how previous climate forecasts have performed (and hopefully improved) over time. Efforts to determine the limits of predictability are also important (Meehl et al., 2014) and should be studied in conjunction with the evaluation of forecast performance over time.
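Evaluating forecast performance over time can be as simple as scoring successive vintages of probabilistic forecasts against observed outcomes. One standard tool is the Brier score (mean squared error of a probability against a 0/1 outcome; lower is better). The vintages and outcomes below are invented purely for illustration of the bookkeeping:

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared error of probabilistic forecasts against 0/1 outcomes."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical successive forecast vintages for the same class of events
vintages = {
    "2000-2004": ([0.9, 0.8, 0.7, 0.9], [1, 0, 0, 1]),
    "2005-2009": ([0.8, 0.6, 0.7, 0.5], [1, 1, 0, 0]),
    "2010-2014": ([0.7, 0.3, 0.8, 0.2], [1, 0, 1, 0]),
}
scores = {v: brier_score(p, o) for v, (p, o) in vintages.items()}
```

In this made-up example the score improves (falls) across vintages, which is precisely the kind of documented track record the text argues would support trust in larger-scale predictions.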

Fourth, extreme caution should be used in extrapolating from forecasts of climate events (e.g., temperature or CO2 levels) to their social and physical consequences (famine, flooding, etc.) without careful modelling of mitigation and adaptation efforts and other feedback mechanisms. While there have been notable successes in predicting certain climate characteristics, such as surface temperature (Smith et al., 2019), the ability to tie such predictions to quantitative forecasts of consequences is more limited. The efforts to model damages as part of determining the social cost of carbon (such as with the DICE, PAGE, and FUND integrated assessment models) are a start, but they are subject to extreme levels of parameter sensitivity (Wang et al., 2019); that uncertainty should be reflected in any apocalyptic forecasts of climate change consequences.

Scientists are often encouraged to ‘think big’, especially in policy applications. What we are suggesting here is that climate policy analysis could benefit from thinking ‘small’ – that is, from focusing on the lower-level building blocks that go into making larger-scale predictions. One means by which to build public support for a complex idea like climate change is to demonstrate to the public that our understanding of the building blocks of that science is solid, that we are calibrated as to the accuracy of the building-block forecasts, and that we understand how lower-level uncertainty propagates through to probabilistic uncertainty in the higher-level forecasts of events and consequences.
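Propagating lower-level uncertainty into a higher-level consequence forecast can be sketched with a minimal Monte Carlo exercise. The warming distribution and the quadratic damage coefficient below are illustrative assumptions (the functional form echoes DICE-style damage functions, but the numbers are not calibrated values from any model):

```python
import random
import statistics

random.seed(42)

def damage_fraction(warming_c, coeff=0.0024):
    """Quadratic damage function (DICE-style form; coeff is illustrative)."""
    return coeff * warming_c ** 2

# Assumed distribution of end-of-century warming, in degrees C
temps = [random.gauss(3.0, 0.8) for _ in range(20_000)]
damages = [damage_fraction(t) for t in temps]

mean_damage = statistics.mean(damages)                 # central consequence estimate
p95_damage = sorted(damages)[int(0.95 * len(damages))] # upper tail of consequences
```

Because the damage function is nonlinear, the upper tail of the consequence distribution sits well above the damage implied by the mean warming; reporting an upper quantile alongside the mean is one simple way to communicate that propagated uncertainty rather than suppress it.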
