Wednesday, April 8, 2020

Quality of service provision (effectiveness, rule of law, regulatory quality, and absence of corruption) is correlated with happiness, whereas the form of democracy and government spending are not

Happiness and the Quality of Government John F. Helliwell, Haifang Huang, Shun Wang. NBER Working Paper No. 26840, March 2020. https://www.nber.org/papers/w26840

Abstract: This chapter uses happiness data to assess the quality of government. Our happiness data are drawn from the Gallup World Poll, starting in 2005 and extending to 2017 or 2018. In our analysis of the panel of more than 150 countries and generally over 1,500 national-level observations, we show that government delivery quality is significantly correlated with national happiness, but democratic quality is not. We also analyze other quality of government indicators. Confidence in government is correlated with happiness; however, forms of democracy and government spending seem not to be. We further discuss three channels (including peace and conflict, trust, and inequality) whereby quality of government and happiness are linked. Finally, we summarize what has been learned about how government policies could be designed to improve citizens’ happiness.
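To make the empirical design concrete, here is a minimal sketch of the kind of country-year panel regression the abstract describes, on synthetic data. The variable names (ladder, delivery, democracy) are placeholders, not the paper's actual variables or estimates; year fixed effects with country-clustered standard errors are one standard specification for such a panel, not necessarily the authors' exact one.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic country-year panel loosely mimicking the setup:
# ~150 countries x Gallup waves 2005-2017, 0-10 Cantril ladder.
rng = np.random.default_rng(0)
rows = []
for c in range(150):
    delivery = rng.normal()    # hypothetical delivery-quality index
    democracy = rng.normal()   # hypothetical democratic-quality index
    for year in range(2005, 2018):
        ladder = 5.5 + 0.8 * delivery + 0.0 * democracy + rng.normal(0, 0.5)
        rows.append(dict(country=c, year=year, ladder=ladder,
                         delivery=delivery, democracy=democracy))
panel = pd.DataFrame(rows)

# Pooled OLS with year fixed effects; SEs clustered by country because
# the governance indices vary mostly across countries, not within them.
fit = smf.ols("ladder ~ delivery + democracy + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(fit.summary().tables[1])
```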


Social network size is a negative predictor of incivility; Twitter users who have built larger networks and gained positive responses from others are less likely to use uncivil language

Effects of Social Grooming on Incivility in COVID-19. Bumsoo Kim. Cyberpsychology, Behavior, and Social Networking, Apr 8 2020. https://doi.org/10.1089/cyber.2020.0201

Abstract: This study implements a computer-assisted content analysis to identify which social grooming factors reduce social media users' incivility when commenting or posting about the COVID-19 situation in South Korea. In addition, this study conducts semantic network analysis to interpret qualitatively how people express their thoughts. The findings suggest that social network size is a negative predictor of incivility. Moreover, Twitter users who have built larger networks and gained positive responses from others are less likely to use uncivil language. Lastly, users' linguistic choices differ depending on the size of their social network.
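As a rough illustration of a computer-assisted content analysis of this kind, the sketch below scores tweets against a tiny incivility lexicon and regresses the score on (log) network size and positive feedback. The lexicon, field names, and toy data are invented for illustration and do not reproduce the study's Korean-language dictionary or dataset.

```python
import re
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical incivility lexicon; the study's actual (Korean) bag of
# words is not reproduced here.
UNCIVIL = {"idiot", "moron", "liar", "stupid", "traitor"}

def incivility_score(text: str) -> int:
    """Count lexicon hits in a tweet (simple bag-of-words matching)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in UNCIVIL for t in tokens)

# Toy tweet table with assumed fields: text, followers, likes.
tweets = pd.DataFrame({
    "text": ["What an idiot take on the outbreak",
             "Stay safe everyone, wash your hands",
             "Such a liar, downplaying everything again",
             "Grateful to the nurses and doctors today"],
    "followers": [120, 5400, 80, 9800],
    "likes": [2, 150, 1, 420],
})
tweets["uncivil"] = tweets["text"].map(incivility_score)

# Does network size (log followers) predict fewer uncivil tokens,
# controlling for positive feedback (log likes)?
fit = smf.ols("uncivil ~ np.log1p(followers) + np.log1p(likes)",
              data=tweets).fit()
print(fit.params)
```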

Discussion

The findings of this study imply that social network size is a negative predictor of incivility. Twitter users with a larger network size tend to make fewer uncivil remarks when they receive more positive responses from others. Distinctive linguistic differences were identified in the two network maps conditional on network size: users with a smaller network size (below the mean score) tend to use uncivil words. These findings point to one specific conclusion: incivility is still observed on social media, but social grooming might be a good way to reduce it. Given that prior studies reveal internal and contextual factors that generate incivility,5,11 the findings of this study are distinctive in highlighting that social network size can be an important predictor of social media users' language selection.
With respect to the role of social grooming, we need to consider carefully the implicit meanings behind a larger social network, specifically the number of friends and the number of followers. Compared to users who have larger networks, those with smaller social media networks have fewer opportunities to encounter diverse viewpoints.11 Given that larger networks are positively related to rationality, tolerance, and knowledge,30 users with a smaller network could react aggressively in response to contextual news topics and divergent opinions.6 Consistent with this, emotional, emphatic, and uncivil remarks were identified in the smaller-network map.
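The "network maps" discussed here come from semantic network analysis. Below is a generic sketch of how such a word co-occurrence network can be built and summarized; the toy English tweets and the networkx-based procedure are illustrative assumptions, not the study's actual pipeline.

```python
import itertools
import re
from collections import Counter

import networkx as nx

# Toy tweets standing in for the study's Korean-language corpus.
tweets = [
    "stay safe wash hands often",
    "wuhan pneumonia spreading fast they say",
    "government response too slow again",
    "wash hands stay home stay safe",
]

# Word co-occurrence network: nodes are words, edges link words that
# appear in the same tweet, weighted by how often they co-occur.
edges = Counter()
for t in tweets:
    words = sorted(set(re.findall(r"[a-z]+", t.lower())))
    edges.update(itertools.combinations(words, 2))

G = nx.Graph()
for (u, v), w in edges.items():
    G.add_edge(u, v, weight=w)

# Degree centrality flags the terms that organize the discourse;
# building separate maps for small- vs. large-network users would
# mirror the study's conditional comparison.
top = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print(top)
```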
Another important point in the findings is that contextual factors such as authors, news topics, and news sources can be implicit drivers of uncivil remarks among users.5 When Twitter users talk about COVID-19 issues, some use “Wuhan pneumonia,” even though the Korean government discouraged the term because of its discriminatory connotations. Contextual factors, then, cannot be ignored when considering users' language choices. According to the literature on incivility, partisan media also use uncivil remarks about political issues/characters, especially in the context of political elections, which strongly encourages audiences to use uncivil language.11 Like contentious political contexts, COVID-19 is not just a health-related issue; it also generates numerous political conflicts. In particular, many non-legacy media outlets politicize the COVID-19 issue by pairing a specific location name with the disease, which can generate antipathy toward that location.
In spite of the noteworthy findings, this study's focus on Twitter limits the generalizability of the findings to other social media platforms. In addition, even though valid bags of words were used in this study, nuanced uncivil sentences and connotative/metaphorical remarks are hard to capture. Lastly, given that many tweets also contain visual images, it would be worthwhile for future researchers to investigate visual sentiment analysis, which often combines textual and visual information to predict which content most affects people's opinions on social media.
Based on the findings of this study, it is suggested that social grooming within wider and more diverse social networks is an important predictor of incivility reduction. Methodologically, a combined analysis of textual and visual information or other advanced analytics (e.g., visual sentiment analysis, popularity prediction, virality prediction) might be a possible avenue for future studies. As communication spheres become more complex, we must continue tracking linguistic patterns and influential factors in terms of morality and civility.

We find that cognitive effort as measured by response times increases by 40% when payments are very high; performance, on the other hand, improves very mildly or not at all as incentives increase

Cognitive Biases: Mistakes or Missing Stakes? Benjamin Enke, Uri Gneezy, Brian Hall, David Martin, Vadim Nelidov, Theo Offerman, Jeroen van de Ven. CESifo working paper 8168/2020. March 2020. https://www.ifo.de/DocDL/cesifo1_wp8168.pdf

Abstract: Despite decades of research on heuristics and biases, empirical evidence on the effect of large incentives – as present in relevant economic decisions – on cognitive biases is scant. This paper tests the effect of incentives on four widely documented biases: base rate neglect, anchoring, failure of contingent thinking, and intuitive reasoning in the Cognitive Reflection Test. In preregistered laboratory experiments with 1,236 college students in Nairobi, we implement three incentive levels: no incentives, standard lab payments, and very high incentives that increase the stakes by a factor of 100 to more than a monthly income. We find that cognitive effort as measured by response times increases by 40% with very high stakes. Performance, on the other hand, improves very mildly or not at all as incentives increase, with the largest improvements due to a reduced reliance on intuitions. In none of the tasks are very high stakes sufficient to debias participants, or even to come close to doing so. These results contrast with expert predictions that forecast larger performance improvements.
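The headline pattern (effort up roughly 40%, accuracy nearly flat) can be illustrated with a toy simulation of the three-arm design. All numbers below are invented to mimic the reported direction of effects; they are not the paper's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400

# Three incentive arms as in the design: none, standard lab pay,
# and very high (stakes scaled by a factor of 100).
arms = rng.choice(["none", "standard", "very_high"], size=n)

# Illustrative arm means: response time rises ~40% from "none" to
# "very_high", while accuracy barely moves (directions taken from the
# abstract, magnitudes invented for the sketch).
rt_mean = {"none": 30.0, "standard": 33.0, "very_high": 42.0}  # seconds
acc_mean = {"none": 0.40, "standard": 0.42, "very_high": 0.44}

df = pd.DataFrame({
    "arm": arms,
    "response_time": [rng.normal(rt_mean[a], 6.0) for a in arms],
    "correct": [rng.random() < acc_mean[a] for a in arms],
})

# Arm-level means: effort responds strongly to stakes, performance barely.
print(df.groupby("arm")[["response_time", "correct"]].mean())
```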

Keywords: cognitive biases, incentives
JEL-Codes: D010