Saturday, February 15, 2020

My English sounds better than yours: Second-language learners perceive their own accent as better than that of their peers

My English sounds better than yours: Second-language learners perceive their own accent as better than that of their peers. Holger Mitterer, Nikola Anna Eger, Eva Reinisch. PLOS ONE, February 7, 2020. https://doi.org/10.1371/journal.pone.0227643

Abstract: Second language (L2) learners are often aware of the typical pronunciation errors that speakers of their native language make, yet often persist in making these errors themselves. We hypothesised that L2 learners may perceive their own accent as closer to the target language than the accent of other learners, due to frequent exposure to their own productions. This was tested by recording 24 female native speakers of German producing 60 sentences. The same participants later rated these recordings for accentedness. Importantly, the recordings had been altered to sound male so that participants were unaware of their own productions in the to-be-rated samples. We found evidence supporting our hypothesis: participants rated their own altered voice, which they did not recognize as their own, as being closer to a native speaker than that of other learners. This finding suggests that objective feedback may be crucial in fostering L2 acquisition and reducing fossilization of erroneous patterns.

Environmental influences from idiosyncratic experiences dominate variation in the intensity of aesthetic appraisal; genetic factors play a moderate role (heritability 26-41%)

Bignardi, Giacomo, Luca F. Ticini, Dirk Smit, and Tinca J. Polderman. 2020. “Domain-specific and Domain-general Genetic and Environmental Effects on the Intensity of Visual Aesthetic Appraisal.” PsyArXiv. February 7. doi:10.31234/osf.io/79nbq

Abstract: Visual aesthetic experiences are universally shared and uniquely diversified components of every human culture. The contribution of genetic and environmental factors to variation in aesthetic appraisal has rarely been examined. Here, we analysed variation in the intensity of aesthetic appraisal in 558 monozygotic and 216 dizygotic same sex adult twin pairs when they were presented with three kinds of visual stimuli: abstract objects, sceneries, and faces. We estimated twin resemblance and heritability for the three stimuli types, as well as a shared genetic factor between the three stimuli types. Genetic factors played a moderate role in the variation of intensity of aesthetic appraisal (heritability 26 to 41%). Both shared and unique underlying genetic factors significantly accounted for domain-general and domain-specific differences. Our findings are the first to show the extent to which variation in the intensity of aesthetic experiences results from the contribution of genetic and environmental factors.
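For readers unfamiliar with how twin designs yield such estimates, the classical Falconer approximation below illustrates the logic: heritability is roughly twice the difference between monozygotic and dizygotic twin correlations, and whatever monozygotic twins do not share is attributed to unique (idiosyncratic) environment. This is a simplified sketch only; the correlations used here are hypothetical, and the paper's own estimates come from formal twin models rather than this shortcut.

    # Falconer-style decomposition of twin resemblance (illustration only;
    # the twin correlations below are hypothetical, not taken from the paper).
    def falconer_ace(r_mz, r_dz):
        a2 = 2 * (r_mz - r_dz)   # additive genetic share ("heritability")
        c2 = 2 * r_dz - r_mz     # shared-environment share
        e2 = 1 - r_mz            # unique environment (plus measurement error)
        return a2, c2, e2

    for r_mz, r_dz in [(0.30, 0.15), (0.40, 0.20)]:
        a2, c2, e2 = falconer_ace(r_mz, r_dz)
        print(f"rMZ={r_mz}, rDZ={r_dz} -> A={a2:.2f}, C={c2:.2f}, E={e2:.2f}")

With hypothetical correlations in this range, the genetic share lands near the reported 26-41% while the unique-environment share stays above half, which is the pattern behind the headline.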



Neuroticism robustly increases general dissatisfaction with welfare state programmes, yet neurotic people also appear to need these programmes more

Taking social policy personally: How does neuroticism affect welfare state attitudes? Markus Tepe, Pieter Vanhuysse. Social Policy & Administration, February 5, 2020. https://doi.org/10.1111/spol.12568

Abstract: The role of the “Big Five” personality traits in driving welfare state attitudes has received scant attention in social policy research. Yet neuroticism in particular—a disposition to stress, worry, and get nervous easily—is theoretically likely to be an important driver of welfare attitudes precisely because welfare states deliver social “security” and “safety” nets. Using cross‐sectional data from the German Socio‐Economic Panel, we study three distinct attitude types (dissatisfaction with the social security system, feelings of personal financial insecurity, and preferences for state provision) and multiple social need contexts (including unemployment, ill health, old age, and nursing care). Controlling for established explanations such as self‐interest, partisanship, and socialization, neuroticism does not systematically affect support for state provision. But it robustly increases general dissatisfaction with social security, as well as financial insecurity across various need contexts. Neurotic people are thus less happy with welfare state programmes across the board, yet they also appear to need these programmes more. This trait may be an important deeper layer driving other social attitudes.




Attenuation of Deviant Sexual Fantasy across the Lifespan in U.S. Adult Nonoffending Males

Attenuation of Deviant Sexual Fantasy across the Lifespan in U.S. Adult Males. Tiffany A. Harvey, Elizabeth L. Jeglic. Psychiatry, Psychology and Law, Feb 13 2020, https://doi.org/10.1080/13218719.2020.1719376

Abstract: Deviant sexual fantasy is identified as a risk factor for sexual offending, yet no study has examined deviant sexual fantasy across the lifespan in nonoffending adult males. To bridge this gap, this study examined the frequencies of normative and deviant sexual fantasies among 318 nonoffending adult males in the United States. Participants were recruited via Amazon Mechanical Turk™. Participants took two inventories that assessed demographics and types of sexual fantasies. Normality tests, means tests, Kruskal–Wallis 1-way analyses of variance (ANOVAs), binary logistic regressions, and odds ratio post hoc analyses were conducted. Deviant sexual fantasies progressively declined across all three age groups, while normative sexual fantasy did not. Results suggest that deviant sexual fantasy changes across the lifespan. Applicability of the findings to applied settings, such as sexually violent predator evaluations, is discussed. Limitations and future considerations are addressed.

Key words: Deviant, fantasy, interest, lifespan, males, nonoffending, normative, sexual, sexual offending, United States
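As a rough illustration of the kind of analysis listed in the abstract (a Kruskal–Wallis test across age groups followed by a binary logistic regression), here is a minimal sketch on simulated data; the variable names, group sizes, and score distributions are assumptions, not the authors' data or coding.

    # Hedged sketch of the analysis pipeline on simulated data (not the study's data).
    import numpy as np
    from scipy.stats import kruskal
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    young, middle, older = (rng.poisson(lam, 100) for lam in (4, 3, 2))  # hypothetical fantasy counts

    # Kruskal-Wallis one-way ANOVA on ranks across three age groups
    H, p = kruskal(young, middle, older)
    print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

    # Binary logistic regression: does age group predict endorsing any deviant fantasy?
    scores = np.concatenate([young, middle, older])
    age_group = np.repeat([0, 1, 2], 100)
    endorsed = (scores > 0).astype(int)
    fit = sm.Logit(endorsed, sm.add_constant(age_group)).fit(disp=0)
    print("Odds ratio per age-group step:", np.exp(fit.params[1]))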

The Economic Consequences of Increasing Sleep Among the Urban Poor: Offering high-quality naps at the workplace increased productivity, cognition, psychological well-being, and patience

The Economic Consequences of Increasing Sleep Among the Urban Poor. Pedro Bessone, Gautam Rao, Frank Schilbach, Heather Schofield, Mattie Toma. NBER Working Paper No. 26746, February 2020. https://www.nber.org/papers/w26746

Abstract: This paper measures sleep among the urban poor in India and estimates the economic returns to increased sleep. Adults in Chennai have strikingly low quantity and quality of sleep relative to typical guidelines: despite spending 8 hours in bed, they achieve only 5.6 hours per night of sleep, with 32 awakenings per night. A three-week treatment providing information, encouragement, and sleep-related items increased sleep quantity by 27 minutes per night without improving sleep quality. Increased night sleep had no detectable effects on cognition, productivity, decision-making, or psychological and physical well-being, and led to small decreases in labor supply and thus earnings. In contrast, offering high-quality naps at the workplace increased productivity, cognition, psychological well-being, and patience. Taken together, the returns to increased night sleep are low, at least at the low-quality levels typically available in home environments in Chennai. We find suggestive evidence that higher-quality sleep improves important economic and psychological outcomes.

Supplementary materials for this paper: randomized controlled trials registry entry https://www.socialscienceregistry.org/trials/2494


Friday, February 14, 2020

Financial Robo-Analysts collectively produce a more balanced distribution of buy, hold, & sell recommendations than do human analysts; they seem less subject to behavioral biases & conflicts of interest

Coleman, Braiden and Merkley, Kenneth J. and Pacelli, Joseph, Man versus Machine: A Comparison of Robo-Analyst and Traditional Research Analyst Investment Recommendations (January 6, 2020). Available at SSRN: http://dx.doi.org/10.2139/ssrn.3514879

Abstract: Advances in financial technology (FinTech) have revolutionized various product offerings in the financial services industry. One area of particular interest for this technology is the production of investment recommendations. Our study provides the first comprehensive analysis of the properties of investment recommendations generated by “Robo-Analysts,” which are human-analyst-assisted computer programs conducting automated research analysis. Our results indicate that Robo-Analysts differ from traditional “human” research analysts across several dimensions. First, Robo-Analysts collectively produce a more balanced distribution of buy, hold, and sell recommendations than do human analysts, which suggests that they are less subject to behavioral biases and conflicts of interest. Second, consistent with automation facilitating a greater scale of research production, Robo-Analysts revise their reports more frequently than human analysts and also adopt different production processes. Their revisions rely less on earnings announcements, and more on the large volumes of data released in firms’ annual reports. Third, Robo-Analysts’ reports exhibit weaker short-window return reactions, suggesting that investors do not trade on their signals. Importantly, portfolios formed based on the buy recommendations of Robo-Analysts appear to outperform those of human analysts, suggesting that their buy calls are more profitable. Overall, our results suggest that Robo-Analysts are a valuable alternative information intermediary to traditional sell-side analysts.

Keywords: FinTech, Analysts, Robo-Analyst, Investment Recommendations
JEL Classification: G14, G24


Household Electrification & Economic Development--Impacts can vary even across individuals in neighboring villages: Households that were willing to pay more for grid electrification may gain more from electrification compared to households that would only connect for free

Does Household Electrification Supercharge Economic Development? Kenneth Lee, Edward Miguel, and Catherine Wolfram. Journal of Economic Perspectives—Volume 34, Number 1—Winter 2020—Pages 122–144. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.34.1.122

Abstract: In recent years, electrification has re-emerged as a key priority in low-income countries, with a particular focus on electrifying households. Yet the microeconomic literature examining the impacts of electrifying households on economic development has produced a set of conflicting results. Does household electrification lead to measurable gains in living standards or not? Focusing on grid electrification, we discuss how the divergent conclusions across the literature can be explained by differences in methods, interventions, potential for spillovers, and populations. We then use experimental data from Lee, Miguel, and Wolfram (2019) — a field experiment that connected randomly-selected households to the grid in rural Kenya — to show that impacts can vary even across individuals in neighboring villages. Specifically, we show that households that were willing to pay more for grid electrification may gain more from electrification compared to households that would only connect for free. We conclude that access to household electrification alone is not enough to drive meaningful gains in development outcomes. Instead, future initiatives may work better if paired with complementary inputs that allow people to do more with power.

For supplementary materials such as appendices, datasets, and author disclosure statements, see the article page at https://doi.org/10.1257/jep.34.1.122

From 1992... Paul Romer's "Two Strategies for Economic Development: Using Ideas and Producing Ideas"

From 1992... Two Strategies for Economic Development: Using Ideas and Producing Ideas. Paul M. Romer. The World Bank Economic Review, Volume 6, Issue suppl_1, 1 December 1992, Pages 63–91, https://doi.org/10.1093/wber/6.suppl_1.63

Abstract: The key step in understanding economic growth is to think carefully about ideas. This requires careful attention to the meaning of the words that we use and to the metaphors that we invoke when we construct mathematical models of growth. After addressing these issues, this paper describes two different ways in which ideas can contribute to economic development. The history of Mauritius shows how a poor economy can benefit by using ideas from industrial countries within its borders. The history of Taiwan (China) shows how a developing economy can be pushed forward into the ranks of those that produce ideas for sale on world markets.

New neurocognitive-psychometrics account of mental speed decomposes the relationship between mental speed and intelligence: the speed of higher-order processing is greater in smarter individuals

Neurocognitive Psychometrics of Intelligence: How Measurement Advancements Unveiled the Role of Mental Speed in Intelligence Differences. Anna-Lena Schubert, Gidon T. Frischkorn. Current Directions in Psychological Science, February 13, 2020. https://doi.org/10.1177/0963721419896365

Abstract: More intelligent individuals typically show faster reaction times. However, individual differences in reaction times do not represent individual differences in a single cognitive process but in multiple cognitive processes. Thus, it is unclear whether the association between mental speed and intelligence reflects advantages in a specific cognitive process or in general processing speed. In this article, we present a neurocognitive-psychometrics account of mental speed that decomposes the relationship between mental speed and intelligence. We summarize research employing mathematical models of cognition and chronometric analyses of neural processing to identify distinct stages of information processing strongly related to intelligence differences. Evidence from both approaches suggests that the speed of higher-order processing is greater in smarter individuals, which may reflect advantages in the structural and functional organization of brain networks. Adopting a similar neurocognitive-psychometrics approach for other cognitive processes associated with intelligence (e.g., working memory or executive control) may refine our understanding of the basic cognitive processes of intelligence.

Keywords: intelligence, mental speed, psychometrics, cognitive modeling
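The "mathematical models of cognition" referenced here are typically sequential-sampling models such as the diffusion model, in which a drift rate captures how quickly evidence accumulates toward a decision boundary. The toy simulation below is a sketch under assumed parameter values, not the authors' analysis: it only shows that, all else equal, a higher drift rate produces faster mean reaction times, which is the sense in which "speed of higher-order processing" can be read off such models.

    # Toy diffusion-style random walk: higher drift rate -> faster decisions
    # at the same boundary. Parameter values are arbitrary illustrations.
    import numpy as np

    def mean_rt(drift, boundary=1.0, noise=1.0, dt=0.001, n_trials=500, seed=0):
        rng = np.random.default_rng(seed)
        rts = []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < boundary:            # accumulate noisy evidence until a boundary is hit
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts.append(t)
        return np.mean(rts)

    for drift in (0.5, 1.0, 2.0):               # stand-in for individual differences in processing speed
        print(f"drift={drift}: mean RT ~ {mean_rt(drift):.3f} s")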

Crimes, deterrence, & paying for more security guards: Restricting guards in sparse, rural markets and requiring guards in dense, urban markets could be socially beneficial

The Race Between Deterrence and Displacement: Theory and Evidence from Bank Robberies. Vikram Maheshri and Giovanni Mastrobuoni. The Review of Economics and Statistics, January 23, 2020. https://doi.org/10.1162/rest_a_00900

Abstract: Security measures that deter crime may unwittingly displace it to neighboring areas, but evidence of displacement is scarce. We exploit precise information on the timing and locations of all Italian bank robberies and security guard hirings/firings over a decade to estimate deterrence and displacement effects of guards. A guard lowers the likelihood a bank is robbed by 35-40%. Over half of this reduction is displaced to nearby unguarded banks. Theory suggests optimal policy to mitigate this spillover is ambiguous. Our findings indicate restricting guards in sparse, rural markets and requiring guards in dense, urban markets could be socially beneficial.

JEL classification: K42
Keywords: deterrence, displacement, spillover, policing, bank security guards
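A back-of-envelope reading of the headline numbers, sketched below, makes the deterrence-versus-displacement trade-off concrete; only the 35-40% deterrence effect and the "over half displaced" share come from the abstract, while the baseline count and the exact displaced share are assumptions for illustration.

    # Back-of-envelope deterrence vs. displacement accounting (illustrative numbers).
    baseline_robberies = 100.0      # assumed expected robberies at an unguarded bank
    deterrence = 0.38               # abstract: a guard lowers robbery likelihood by 35-40%
    displaced_share = 0.55          # abstract: "over half" of the reduction is displaced (assumed value)

    prevented_locally = baseline_robberies * deterrence              # 38 robberies deterred at the guarded bank
    displaced_to_neighbors = prevented_locally * displaced_share     # ~21 reappear at nearby unguarded banks
    net_social_reduction = prevented_locally - displaced_to_neighbors
    print(net_social_reduction)     # ~17: the social gain is roughly half the private gain

The gap between the private and the social benefit of hiring a guard is what makes the optimal policy ambiguous in the paper's theory.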


Thursday, February 13, 2020

Thought experiment in which a person was split into two continuers: most participants decided based on the continuity of memory, personality, and psychology, with some consideration given to the body and social relations


Putting your money where your self is: Connecting dimensions of closeness and theories of personal identity. Jan K. Woike, Philip Collard, Bruce Hood. PLOS ONE, February 12, 2020. https://doi.org/10.1371/journal.pone.0228271

Abstract: Studying personal identity, the continuity and sameness of persons across lifetimes, is notoriously difficult and competing conceptualizations exist within philosophy and psychology. Personal reidentification, linking persons between points in time is a fundamental step in allocating merit and blame and assigning rights and privileges. Based on Nozick’s (1981) closest continuer theory we develop a theoretical framework that explicitly invites a meaningful empirical approach and offers a constructive, integrative solution to current disputes about appropriate experiments. Following Nozick, reidentification involves judging continuers on a metric of continuity and choosing the continuer with the highest acceptable value on this metric. We explore both the metric and its implications for personal identity. Since James (1890), academic theories have variously attributed personal identity to the continuity of memories, psychology, bodies, social networks, and possessions. In our experiments, we measure how participants (N = 1,525) weighted the relative contributions of these five dimensions in hypothetical fission accidents, in which a person was split into two continuers. Participants allocated compensation money (Study 1) or adjudicated inheritance claims (Study 2) and reidentified the original person. Most decided based on the continuity of memory, personality, and psychology, with some consideration given to the body and social relations. Importantly, many participants identified the original with both continuers simultaneously, violating the transitivity of identity relations. We discuss the findings and their relevance for philosophy and psychology and place our approach within the current theoretical and empirical landscape.

Study materials, supporting analyses, and qualitative coding scheme: https://doi.org/10.1371/journal.pone.0228271.s001

Excerpts of discussion...

The fission scenario as scientific fiction: Benefits and validity concerns

Not all philosophers embrace the use of hypothetical scenarios in studying identity [129–131]. Some are critical of fission thought experiments, arguing that the intuitions derived from such fantasy accounts—in which anything goes—violate natural worlds and are therefore not valid measures of how people conceptualize identity in natural settings (e.g., [132]). Like contaminated test tubes in biochemical experiments, results obtained with faulty scenarios would have to be considered dubious [131]. Scholl [130] further argued that reactions to “bizarre scenarios” with forced responses might “tell us more about heuristic getting-through-the-experiment strategies than about actual metaphysical intuitions” (p. 580). One could compare these scenarios to visual illusions that generate experiences that are at odds with reality—but these experiences nevertheless often provide insight into the mechanisms that generate the illusion. For example, size and distance illusions reveal the computation the brain uses to calculate physical dimensions even in normal cases. Likewise, intuitions derived from cognitive processes during these unreal and outlandish examples may not be necessarily meaningful when measured against the constraints of reality [130] but rather cast light on how we use and reason with the concept of identity. While the participants’ open answers showed a degree of confusion in a minority of subjects, most answers reflected a well-considered and principled approach to the questions. Studies in experimental philosophy often feature unusual scenarios, and replicability in this sub-field compares favorably with replication rates in other domains of psychology [133].

Our science fiction scenario might be criticized for its lack of realism. It makes crucial assumptions that contradict the scientific understanding of how the components that we neatly separated in the story interact and co-depend on each other. According to Harle [68], a brain cannot be considered to be independent of the body; it would immediately adapt to a new body, rework the inner representation, and react to changes in social responses. Brain and body are interdependent in complex ways [39, 51, 63]. Motor learning [134] also involves two components, body and memory; muscles cannot work without neuronal input. The body and appearance component used in our studies could be understood to either include or not include brain matter. If included, all other components would depend on this component, as there can be no psychology without consciousness. If, alternatively, higher mental functions and consciousness are regarded as part of psychology or memory (which seems to be the approach taken by at least some participants), the brain, as a bodily organ that makes psychological functioning possible, can still be seen as source of tension in the scenario framework.

Participants responded to our scenario only from a first-person perspective. First-person evaluations of identity and survival might differ from third-person evaluations [128, 135]. For one thing, many legal, practical, and social concerns can be fulfilled by a person who is a spontaneous true copy of the original person. How much participants care about the disruption of continuity might therefore differ depending on whether the replaced person is them or a neutral other person [30]. Rorty [18] distinguished between an external observer’s perspective on individual identification and an individual’s internal perspective; features essential to an individual’s self-perspective might be irrelevant for an observer, and not all philosophers assume that first-person judgments have final authority [136, 137]. On the other hand, at least one study explicitly testing for the effect of perspective on intuitions about identity based on [28] found no substantial differences between a first-person and a third-person perspective [31]. At the same time, continuers were introduced from a third-person perspective. This shift was necessary to avoid a pre-judgment of the question how the original person relates to them, but might create a perceived distance. This could make it easier to give a negative answer to the survival question, but would have a symmetric influence on identity responses for both continuers.

It is less clear which perspective is better suited to evaluate claims about identity and survival. We assumed that involving the participant would be the best way to increase attention to the situation and diligence in responding. We induced a connection between our categories and their real-world instances, as perceived by the participants. A participant evaluating the importance of “body” in the money allocation problem will thus take the perception of her own body into consideration, which may lead to different results than when considering the importance of a body. This fact might play into our finding that a subgroup of participants placed a negative value on the continuity of the body.

On the other hand, our scenario avoids a complication common to many other fission scenarios: By splitting possessions and friends between survivors, it sidesteps the problem of nonsharable singular goods [34] with symmetric claims from two sides. These claims include access to special objects and, more significantly, the chance to engage in special relationships with others [51]. On the flipside, as Schechtman [138] argued, the scope and time frame of fission scenarios do not allow for societal reactions to the products of fission, which might include changes to the concept of persons and identity. Our approach avoids the confound of money being allocated for reasons of loss or pity, as both survivors emerge with their lives, bodies, and environments fully intact (if not unchanged). Williams [28] predicted a framing effect in the understanding of the scenario, a type of “leading the witness,” with a reversal of intuitions regarding identity depending on whether the story is told as involving body-swapping (transferring a person’s memory to a new body) or mind-manipulation (creating new memories in a person’s body). Empirical work has confirmed these predictions [31]. To counteract such potential framing effects, we used parallel language for all components varied in continuers; none of the components was privileged in the description of the scenario. Further, similar tensions involved in our scenario are discussed in section 1.3. in S1 Supporting Information.

Invoking hyperspace travel buys some degree of freedom, but at the cost of physical implausibility. However, factual impossibility does not prevent imaginability, and as Johnston [139] argued, “such per impossible thought experiments might nonetheless teach us about the relative importance of things that invariably go together” (p. 601). It is precisely because some aspects are not easily separated in realistic scenarios that we chose a fantasy scenario, allowing us to explore intuitions whose tests would otherwise be confounded.

Our scenario remains within the conventions of popular films (e.g., Total Recall or Blade Runner—both of which fittingly now exist in two versions) that deal with cases of copied or artificial memories and identities [140]. Body and mind transfer were considered intelligible by Locke [9], and the nature of personal identity is a recurrent theme in literature. Many readers have appreciated Franz Kafka’s tale of Gregor Samsa’s sudden metamorphosis into an insect [12] or Ovid’s metamorphosed subjects who survive the transformation [39]. Children become acquainted with bodily transfer in fairy tales like The Frog Prince [23, 120] or Hans Christian Andersen’s The Little Mermaid [39]. Johnson [141] takes this mere imaginability as an argument against declaring bodily continuity as a logical precondition for personal identity. Our scenario is no more fantastic than other thought experiments that have been employed to disentangle identity from its natural correlates—through neurosurgeons [6, 38, 51, 128, 142], amoeba-like duplication [127, 143], cloning [144], parallel universes [30], or even swamp-beings [66]. To come full circle, some of the philosophical conceptions and puzzle cases are reproduced in cultural creations and thereby further embedded into cultural consciousness [39]. It is possible that there is no theory of personal identity that would be able to “satisfy all intuitions about all devisable scenarios” (in [31], p. 297), but the advantage of imaginary scenarios lies in their power to isolate phenomena, which makes it possible to attend to specific aspects of our concepts [145]. In our scenarios we can separate changes from their ordinary causes and study decisions that may not occur in the world, but that probe concepts we apply to the world. In addition, our use of a novel paradigm limits the danger of previous exposure to similar questions and potential confusion, which is a concern with crowdsourced participants [146]. We therefore maintain that there is value in our approach of pitting dimensions that are generally accepted as dimensions of closeness against one another.


Dimensions of closeness

Our scenario bundles features into dimensions that might be further differentiated. For example, in contrast to [65], we did not differentiate between personality and psychology, on the one hand, and moral values, on the other. Evidence from several studies considering real-world personal transformations has indicated that identity judgments are most heavily influenced by changes or non-changes in moral values [65]. Changes in morality were judged to be more relevant than changes in (non-moral) personality attributes or memory. In a similar vein, Strohminger and Nichols [116] found that changes in morality in patients with neurodegenerative diseases strongly determined changes in perceived identity. Nunner-Winkler [21] reported on a study asking participants which changes would lead them to see themselves as a different person. Ideas about right and wrong and sex membership were considered to be quite important; appearance and money were considered less relevant (although some participants rated looks to be important, consistent with our distributional results).

The distinction between moral and nonmoral traits is somewhat ambiguous (e.g., conscientiousness was considered as a moral trait rather than a personality factor in [65]). One person’s morals do not and cannot exist in a social vacuum; moral consensus is central for co-ordination, affiliation, and conflict resolution. Morality stands in complex relations to beliefs, values, behaviors and communities. It also depends on memory in nontrivial ways. Some of the induced changes in the scenarios even involved the loss of the moral faculty with a likely ripple effect reaching other dimensions of the self. If this perspective is true, the relevance of morality for personal identity might lie in these possibly disruptive consequences of changing one’s morals in relation to one’s environment and not because of its self-defining importance. Evidence for this interpretation is found in two studies demonstrating that changes in widely shared (and therefore less unique to the individual) moral values are considered to lead to more changes to the person than changes in controversial moral beliefs [106, 147]. For controversial moral beliefs, which might be considered most defining and informative for describing a person’s self, the effect was weaker than for memory. Also, the changes in memory induced by our scenarios would induce both errors of omission and errors of commission, which can have differential impacts on moral behavior [148]. In contrast, some studies focus mostly on omission errors due to memory changes (e.g., [118]), which are described as having more limited effects on behavior towards others than changes in morality. In our scenarios, dimensions are replaced by random sampling from the participant’s reference population, which is a different operationalization of change. Heiphetz and colleagues [147] showed, for example, how the perceived change was mediated by perceived disruptions of friendships.

Some argue that morality is not even conceivable without personal identity [6, 10, 58]. Most people also seem to have inflated beliefs of their own morality [149]. In separate evaluations, our participants ranked memories and psychology to be more important for identity than moral values. Nonetheless, a further decomposition of the broad headings we used in our study would be feasible and interesting in future research. In particular, the role of moral traits and behavioral tendencies could be considered separately, even within a similar factorial setup as the one we employed.

The social dimension could be further differentiated, as well. Parents were considered to be more important than friends in Study 2, and Nunner-Winkler [21] reported similar findings. Of course, parents influence a person directly through the transmission of genes as well as indirectly through instruction and parenting behavior; changing one’s parents cannot be considered a merely social manipulation and could well have an impact on every other dimension.

In our scenario, changes in memory are considered universal and all-or-nothing. In real life, however, memories of self or self-knowledge seem to be better preserved than other knowledge, even in semantic dementia [20], and a subjective belief of self-persistence is demonstrated by patients with Alzheimer’s disease [150]. Alternatively, the sense of self may be impaired while episodic memories stay intact—as in the case of R.B. [67]. Further, separating specific psychological aspects or memory from a person’s social context and network of activities might prove impossible in practice [18]. There is also some overlap between criteria based on psychology and memory, but under the assumption that two organisms with the same memories might nonetheless differ in personality and psychology (e.g., based on differences in needs, intentions, values, or goals), it is not necessary that the criteria coincide. In fact, the psychological continuity criterion has been proposed as a critique of a narrow Lockean focus on memory [11].

Critics of our scenario might further object that our random collage of features in the two continuers destroys the causal connection between past and present states necessary for identity [6, 30]. Preschool children already individuate objects and persons spatio-temporally [23, 151] and, following Sagi and Rips [152], causal histories receive special attention in linguistic disambiguation in discourse. In all our scenarios (except the two extreme cases with exact duplicates), change in characteristics was induced by an accident, an unusual life event that disrupts spatio-temporal continuity. This fact might strengthen impressions that identity is not preserved. According to data reported in [21], for example, participants regarded changes in attitudes or beliefs that were due to normal life experiences as non-consequential for identity judgments—as opposed to changes induced by brainwashing, severe medical conditions, or accidents. Therefore, the nature of the transformation might play a role in our participants’ judgments. Note that both continuers underwent the same procedure, so this factor cannot explain differential assessments. Although the abruptness and symmetry of the original person’s transformation prevents the application of spatio-temporal continuation criteria, participants might still construct “fictive causal histories” [153] to assess which of the two continuers might have the better chance of being the result of changes within an ordinary life.

Finally, for a continuer to acquire a random set of possessions, these would have to materialize from somewhere. Our scenario also assumes that this change in possessions can leave memory, psychology, friends, and appearance untouched. This is incompatible with the reality that some of our memories are intertwined with objects in our possession, and that the difference between owning or not owning status symbols, for example, can impact self-value, build and burn bridges with others, and change perceptions of their owner.


Towards a process model of re-identification

Our studies allow us to make some progress in the analysis of decisions involved in determining personal identity. Like Rips and colleagues [17], who develop the causal continuer model based on Nozick’s theory, we are interested in the decision process. Decision processes, as implemented by human beings, are often insufficiently described by functions merely predicting decision outcomes. A further analysis of the decision processes needs to address questions of information search: Which persons are considered as continuers? When and why is the search for possible continuers stopped? Which dimensions are considered in the subjective closeness metric, and how are these dimensions integrated? We showed some results compatible with decision-making following the closest continuer logic. Is there further evidence for the three steps being followed in a specific sequence—the fast-and-frugal tree in Fig 1 would not yield different outcomes if the first three levels changed their relative position—and how stable is this process across individuals? A structurally similar model of decision making has been proposed for explaining the phenomenon of choice deferral [154, 155]. When faced with a selection of possible alternatives, choice in the 2S2T-model [155] is deferred for one of two reasons: either none of the options is good enough to surpass a decision threshold, or too many options surpass the threshold and it becomes difficult to choose the best one. Of course, personal re-identification is not simply preferential choice, but the analysis of the decision process might still be informed by the analysis of related or parallel processes in other domains.

While the mathematical form of weighted-additive linear models implies weighting and adding, many other operations, such as lexicographic stepwise procedures that ignore (sometimes most of the) variables in the equation [35], would still be captured by this model [156]. Brook [12] argued for a model of personal re-identification that starts with psychological factors and only considers other dimensions if the information is missing (or inconclusive). Variance in choosing and applying criteria might again be related to other individual differences [41, 42]. Based on the variations in our chosen design for this study, it is not yet possible to build cognitive models of participants’ decisions. It is, for example, unclear whether an appropriate model should be stochastic, as in [17], or deterministic.
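To make the contrast between a weighted-additive closeness metric and a lexicographic procedure concrete, here is a minimal sketch of the two decision rules discussed above; the dimensions follow the paper's five, but the weights, threshold, priority order, and similarity scores are invented for illustration and are not parameters estimated in the studies.

    # Closest-continuer choice over five dimensions of closeness; all numbers
    # below are illustrative assumptions, not estimates from the studies.
    DIMENSIONS = ["memory", "psychology", "body", "social", "possessions"]

    def weighted_additive(similarities, weights, threshold=0.5):
        """Nozick-style rule: pick the continuer with the highest weighted
        closeness, provided it clears a minimum acceptability threshold."""
        scores = {name: sum(w * sims[d] for d, w in weights.items())
                  for name, sims in similarities.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None  # None = "original did not survive"

    def lexicographic(similarities, priority=("psychology", "memory", "body", "social", "possessions")):
        """Alternative rule discussed above: compare dimensions one at a time
        in order of priority, stopping at the first that discriminates."""
        names = list(similarities)
        for d in priority:
            vals = {n: similarities[n][d] for n in names}
            top = max(vals.values())
            names = [n for n in names if vals[n] == top]
            if len(names) == 1:
                return names[0]
        return None  # tie on every dimension

    continuers = {
        "continuer_A": {"memory": 1.0, "psychology": 1.0, "body": 0.0, "social": 0.0, "possessions": 0.0},
        "continuer_B": {"memory": 0.0, "psychology": 0.0, "body": 1.0, "social": 1.0, "possessions": 1.0},
    }
    weights = {"memory": 0.35, "psychology": 0.35, "body": 0.1, "social": 0.1, "possessions": 0.1}
    print(weighted_additive(continuers, weights))   # continuer_A (score 0.7 vs. 0.3)
    print(lexicographic(continuers))                # continuer_A (wins on the first dimension checked)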

Our scenarios varied factors that should mostly influence the assessment of closeness and only indirectly the decision-making based on these assessments. Future research could shift this focus to the subsequent stages of the procedure. Thus, specific exit nodes of the decision tree in Fig 1 could be investigated. For example, is there a minimum level of closeness required for participants to determine that any of the continuers is identical to the original person? Do participants share the intuition that a fission resulting in multiple exact copies does not preserve identity, and would this depend on the level of closeness? What type of difference is considered to be sufficient to single out a closest continuer?

Both studies in this manuscript confronted participants with two continuers. Future studies could increase the number of continuers. A different approach could focus on one continuer by either keeping a second continuer constant, or moving from paired comparisons to binary reidentification. Previous studies have explored variants of thought experiments compatible with these ideas. White [157] implemented one such scenario, focusing on the likelihood that a living person might be the reincarnation of a deceased person (see also [65]). For reincarnation judgments, distinctiveness was found to guide decisions. Similar to our sci-fi scenario, this setting might introduce specific assumptions about the process of reincarnation that could guide responses. For example, the importance of body similarity might be evaluated to be a lot lower than when responding to a scenario in which a shipwreck survivor is returned from an island and matched to missing persons, and the importance of moral attributes to be higher. In contrast to the second scenario, the reincarnation scenario prevents the use of causal histories that are useful for person tracking [1].

It might also be the case that different practical concerns demand different criteria of identity. We investigated the parameters of identity in the context of re-identification and compensation. Other practical concerns, such as attributing blame, responsibility, or guilt, or allocating punishments and rewards, might trigger different responses, as the criteria of identity might shift or current properties of persons might become more relevant than historical properties and re-identification questions [10, 158].

To sum up, while we have made progress in shedding light on the decision processes used by participants, we have not yet established a complete process model, which should be the goal for future research [159, 160].


Does it matter what matters for reidentification?

What are the implications of our studies for debates in cognitive sciences and other disciplines? Lay intuitions may be more prone to error than those of philosophers [100]—although experts’ authority may also be questionable, as their philosophical intuitions are partly a function of their personality [161]. Our participants’ endorsement of identity relations of an original with two non-identical persons violates the transitivity of identity. Yet this response pattern may simply show that our participants do not conceive of personal identity as strictly numerical, or that they have alternative conceptions of persons. Our results are in any case highly relevant for a descriptive analysis of people’s understanding of identity and their theories of survival. Further, Nozick [14] would not attribute the variance in people’s perspectives to errors or false everyday beliefs [162], but rather to the variance in closeness metrics legitimately deemed appropriate by different persons. Our findings are similarly relevant for cases in which scientists, philosophers, or marketers try to appeal directly to lay intuitions or common sense.

Philosophers and psychologists differ in their conceptualizations of intuitions [163], yet in our complex scenarios participants could not arrive at their answers without careful assessment. When appealing to the common sense of people both in theorizing and in legitimizing operationalizations, a researcher “should respect what ordinary people in fact say when asked—unless they are somehow led astray” (in [115], p. 216). Study 2 eliminated one way in which participants may have been led astray, by moving from a continuous to an all-or-nothing decision. Any appeal to general intuition should take into account both our main results and the interindividual variability demonstrated in both studies. Any attempt to measure the conceptualization of self or personal identity may be informed by both our positive and our negative findings.

Psychological research has analyzed psychopathological conditions that entail potential breaks in personal continuity and identity [164]. For example, patients with the Fregoli delusion are convinced that different people are in fact a single person appearing in a variety of disguises. Here, the recognition of the outward appearance is separated from the identification of the person. This disorder goes beyond prosopagnosia, or face blindness, where the perception of faces does not allow for identification of persons (see also [165]). Patients with Capgras delusion believe that a specific person, often a loved one, has been replaced by a duplicate who is indistinguishable from the original person. In cases of mirror misidentification, patients fail to recognize their own reflection and infer that the person in the mirror must be someone else [166]. These observations from clinical psychology indicate that the neural mechanisms of identification cannot be reduced to acts of perceptual recognition, and hint at the requisite capacities necessary for personal reidentification; furthermore, understanding ordinary reidentification processes might help to understand and locate their disruption.

As the introductory example shows, the importance and reach of identity questions are not limited to specialized academic discourse, even if not every instance is as dramatic as the hanging of Arnaud du Tilh. A survey of current debates outside philosophy referencing the personal identity literature creates the impression that many of Parfit’s [6] suggestions, examples and ideas are still in the process of being (re)discovered. Mott [167] explored Parfit’s suggestion that diminished personal connectedness might be a reason for statutes of limitations (p. 325), and provided evidence that desert of punishment and grounds for criticizing a person for past deeds are considered to diminish over time, which is partially explained by the reduction in closeness. A second debate is centered on the question of the validity of living wills after substantive changes to a person’s cognitive capabilities. For example, can a competent person impose values and interests on the future incompetent person or should the strength of advance directives grow weaker with the loss of closeness [21, 81, 168]? The incompetent person might, for example, derive unexpected pleasure and satisfaction under conditions the competent person did not foresee. Further, references to a future self might have a tremendous impact on ethical behavior [46], motivation, and goal-pursuit [169]; the sense of temporal persistence can motivate future-oriented self-regard and short-term sacrifices benefiting future outcomes [43, 44]. Without personal identity, it would be meaningless to make promises, grant ownership or the right to vote [79], or offer compensation [10, 12]; challenges to personal identity affect the institutions built upon it.

Disruptions of personal identity have been shown to severely impact people’s lives. Chandler and colleagues [61] presented impressive evidence of the connection between the inability to give an account of one’s identity and the risk of adolescent suicide, and how cultural continuity can moderate the elevation of suicide risk in vulnerable minority groups. This line of research raises the question of how one’s perspective on personal identity is shaped by the social environment and connects the concept to mental health. A diminished sense of self and the self’s stable existence is also deeply intertwined with borderline personality disorder [170]. Furthermore, challenges to personal identity can also emerge due to technological innovation. Notions of identity are fundamental in conceptualizing behavior in virtual environments [171] and have implications in law in connection with identity theft [172] or impersonation. Pascalev and colleagues [63] discussed how first suggestions for a medical head transplant procedure introduced questions of personal identity into neuroethics.

The gravity of these real-world examples goes well beyond that found in hypothetical thought experiments. The analysis of counterfactual scenarios has nevertheless paved the way for addressing real-world concerns and situations, whose connection to personal identity is discovered through analogy, created through technology, or bestowed by social institutions. Understanding how the concept is perceived and applied and how experimental puzzles of seemingly little direct relevance are tackled and solved can ultimately inform practitioners and theoreticians facing recurrent and novel situations with serious consequences.

The highest probability of reaching 90 years was found for those drinking 5–<15 g alcohol/day; although not significant, the risk estimates also suggest avoiding binge drinking


Alcohol consumption in later life and reaching longevity: the Netherlands Cohort Study. Piet A van den Brandt, Lloyd Brandts. Age and Ageing, afaa003, February 9 2020. https://academic.oup.com/ageing/advance-article/doi/10.1093/ageing/afaa003/5730334

Abstract
Background: whether light-to-moderate alcohol intake is related to reduced mortality remains a subject of intense research and controversy. There are very few studies available on alcohol and reaching longevity.

Methods: we investigated the relationship of alcohol drinking characteristics with the probability to reach 90 years of age. Analyses were conducted using data from the Netherlands Cohort Study. Participants born in 1916–1917 (n = 7,807) completed a questionnaire in 1986 (age 68–70 years) and were followed up for vital status until the age of 90 years (2006–07). Multivariable Cox regression analyses with fixed follow-up time were based on 5,479 participants with complete data to calculate risk ratios (RRs) of reaching longevity (age 90 years).

Results: we found statistically significant positive associations between baseline alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability of reaching 90 was found in those consuming 5–<15 g/d alcohol, with RR = 1.36 (95% CI, 1.20–1.55) when compared with abstainers. The exposure–response relationship was significantly non-linear in women, but not in men. Wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking pointed towards an inverse relationship with longevity. Alcohol intake was associated with longevity in those without and with a history of selected diseases.

Conclusions: the highest probability of reaching 90 years was found for those drinking 5–<15 g alcohol/day. Although not significant, the risk estimates also suggest avoiding binge drinking.

Keywords: alcohol, longevity, aging, dose–response relationship, mortality, cohort studies, older people

Discussion

In this large prospective study, we found statistically significant positive associations between alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability was found in those consuming 5–<15 g/d alcohol, which corresponds to 0.5–1.5 glasses of alcoholic beverage per day. The exposure–response relationship was significantly non-linear in women, but not in men. Whereas the probability of longevity decreased in women with alcohol intakes above 15 g/d, it remained elevated at higher alcohol consumption levels in men. In beverage-specific analyses, wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking was not significantly associated with longevity, but the risk estimates suggest avoiding binge drinking. In subgroup analyses, alcohol intake was associated with longevity in those with or without a history of selected diseases.

Previous prospective studies on longevity from the US and France that reported on alcohol were rather limited (no alcohol focus) and found no significant associations using longevity cut-offs of 75 [12] and 90 years [13, 25]. However, higher alcohol intakes were seen in survivors compared to non-survivors [25], and in subsequent analyses (85+ years) of the Framingham Heart Study [26]. The Physicians Health Study amongst US male physicians (survival cut-off 90) reported small and non-significantly increased chances of longevity for various drinking categories compared to rarely/never alcohol drinkers, with no dose–response relationship [13]. The association between alcohol drinking and longevity was studied twice in the Honolulu Heart Program (HHP) amongst Japanese-American men using 85 years as longevity cut-off [10, 11]. Heavy alcohol intake, measured at baseline age 45–68 years, was significantly inversely related to longevity (OR = 0.63, for 3+ drinks/day versus drinking less) [10]. In the second analysis, moderate-heavy alcohol intake around 75 years was also significantly inversely related to longevity (OR = 0.66, for drinking >14.5 g/day versus less) [11]. The fact that the HHP study was conducted amongst men of Japanese ancestry may (partly) explain the more negative association of alcohol with longevity, and suggests a potential mechanism. It is known that East Asians are less efficient alcohol metabolizers due to a common loss-of-function variant of the ALDH2-gene, which decreases breakdown of acetaldehyde, the first, toxic alcohol metabolite [27]. It could be that those who nevertheless drink experience a higher mortality risk.

Overall, the results of previous longevity studies seem quite limited. Our detailed analyses show significantly positive associations between alcohol and longevity in both men and women, which is in agreement with the PHS [13]. Overall in men and women combined in the NLCS, the highest probability of reaching 90 was found in those consuming 5–<15 g/d alcohol, with an HR of 1.36 compared to abstainers. Women experience higher blood alcohol concentrations than men of similar weight due to lower total body water [15]. Thus, adverse effects of higher alcohol intakes may appear earlier in women. This might explain the non-linear exposure–response relationship in women and not in men. We also found that wine intake was positively associated with longevity, whereas liquor was positively associated with longevity in men, and inversely in women. Before speculating on reasons for these beverage differences, these sex-specific findings, along with those on drinking pattern and binge drinking, need to be replicated in future longevity studies. In mortality studies, there was no clear indication for sex differences [2, 5], and although beneficial associations with wine have been described for mortality, e.g. [2], this topic remains controversial.

As in observational studies on alcohol and mortality [1, 2, 8], studies on alcohol and longevity may be hampered by possible biases (selection and residual confounding biases). Here, selection bias can refer to abstainer bias (when the reference category of non-drinkers also includes sick quitters) or to the healthy drinker/survivor bias (when cohorts of older participants are overrepresented by healthier drinkers who may have survived adverse effects of alcohol). Reverse causation may occur because health status may influence alcohol drinking [8], which could be addressed by restricting analyses to healthy people at baseline. Incomplete adjustment for confounding factors may lead to residual confounding. In our longevity analysis, we tried to address these possible biases by: (i) excluding ex-drinkers from the reference category; (ii) limiting analyses to stable drinkers and abstainers by taking alcohol consumption 5 years before baseline into account; (iii) restricting analyses to participants without prevalent diseases; and (iv) adjusting for a large range of possible confounders with detailed information. These analysis strategies do not necessarily provide a full remedy against all possible biases [8], but these were the possibilities with the available data from our cohort. For example, we had no information on lifetime alcohol consumption or consumption at various ages during lifetime, so our analysis of past consumption was limited. After excluding ex-drinkers from the reference category, the analyses in the stable subgroup were essentially similar to what was seen overall. We also found that alcohol intake was associated with longevity in the subgroup without a history of selected diseases. Still, other diseases might have affected alcohol use or longevity. Residual confounding by socioeconomic status is also possible, because we only controlled for educational level.

It should be noted that the percentages of never drinkers were relatively high in the NLCS: 15% in men and 35% in women, making this common behaviour a logical reference category. These percentages were substantially higher than in other cohorts, e.g. 8% in male and 16% in female PLCO-participants [2], and 6% in male and 16% in female EPIC-participants [28]. Strengths of the NLCS are the prospective design and high completeness of follow-up, making information bias and selection bias due to differential follow-up unlikely. The validation study of the food frequency questionnaire has shown that it performs relatively well with respect to alcohol [19], but measurement error may still have attenuated associations. The lack of possibilities to update alcohol intake or other lifestyle data during follow-up may have resulted in some attenuated associations too. Our study was aimed at measuring alcohol intake at 68–70 years. Therefore, our study results are limited to alcohol drinking in later life; future longevity studies preferably include lifetime consumption. The alcohol measures in our study were not aimed to get an all-encompassing indication of risky drinking, like in the Alcohol Use Disorders Identification Test/AUDIT [29]. Our cut-off for binge drinking (>6 drinks per occasion) as used in the 1980s/1990s [29, 30] is somewhat higher than current cut-offs [29]. Because we were interested in the association of late life drinking with longevity, our study likely examined a resilient population that survived already until 68 years despite possible earlier risky drinking.

According to a recent thematic synthesis of qualitative studies, older people perceive themselves as controlled, responsible drinkers; they often consider alcohol use an important part of social occasions and report that alcohol helps create feelings of relaxation [31]. A possible beneficial effect of light-to-moderate alcohol intake on longevity (with an inverted J-shaped dose–response) may also be related to hormesis [32, 33]. At higher consumption levels in older people, alcohol may adversely affect medication, and physiological tolerance is decreased [34].

In conclusion, in this prospective study of men and women aged 68–70 years at baseline, we found the highest probability of reaching 90 years of age for those drinking 5–<15 g alcohol/day. This does not necessarily mean that light-to-moderate drinking improves health. The estimated RR of 1.36 implies a modest absolute increase in this probability and should not be used as motivation to start drinking if one does not drink alcoholic beverages. Although no significant association was found, the risk estimates also suggest avoiding binge drinking.
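To illustrate the "modest absolute increase" point, a small back-of-envelope conversion from relative to absolute risk follows; the abstainer baseline probability is an assumed number for illustration, not a figure reported in the paper.

    # Converting the reported RR into an illustrative absolute difference.
    baseline_p_reach_90 = 0.25            # assumed probability of reaching 90 among abstainers
    rr_light_drinking = 1.36              # reported RR for 5-<15 g/day vs. abstainers
    p_light_drinking = baseline_p_reach_90 * rr_light_drinking
    print(p_light_drinking, p_light_drinking - baseline_p_reach_90)
    # ~0.34 vs. 0.25: roughly a 9-percentage-point difference under this assumed baseline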

Elected officials in eleven U.S. southern states: Framing the decision to remove Confederate symbols as good for business causes those officials to favor removing the Confederate flag from public spaces

Economic Interests Cause Elected Officials to Liberalize Their Racial Attitudes. Christian R. Grose, Jordan Carr Peterson. Political Research Quarterly, February 10, 2020. https://doi.org/10.1177/1065912919899725

Abstract: Do attitudes of elected officials toward racial issues change when the issues are portrayed as economic? Traditionally, scholars have presented Confederate symbols as primarily a racial issue: elites supporting their eradication from public life tend to emphasize the association of Confederate symbols with slavery and institutionalized racism, while those elected officials who oppose the removal of Confederate symbols often cite the heritage of white southerners. In addition to these racial explanations, we argue that there is an economic component underlying support for removal of Confederate symbols among political elites. Racial issues can also be economic issues, and framing a racial issue as an economic issue can change elite attitudes. In the case of removal of Confederate symbols, the presence of such imagery is considered harmful to business. Two survey experiments of elected officials in eleven U.S. southern states show that framing the decision to remove Confederate symbols as good for business causes those elected officials to favor removing the Confederate flag from public spaces. Elected officials can be susceptible to framing, just like regular citizens.

Keywords: American politics, race, ethnicity, and politics, experiments, political elites, framing, symbolic representation, policy



Strong results! Overall relative risk of mortality of 1.0018! And data "cannot be made publicly available"... Short term association between ozone and mortality: global two stage time series study in 406 locations in 20 countries

Strong results! Overall relative risk of mortality of 1.0018! And data "cannot be made publicly available"... "Short term association between ozone and mortality: global two stage time series study in 406 locations in 20 countries." Ana M Vicedo-Cabrera et al. BMJ 2020; 368, February 10. https://doi.org/10.1136/bmj.m108

Data sharing: Data have been collected within the MCC (Multi-City Multi-Country) Collaborative Research Network (http://mccstudy.lshtm.ac.uk/) under a data sharing agreement and cannot be made publicly available. [...]

Abstract
Objective To assess short term mortality risks and excess mortality associated with exposure to ozone in several cities worldwide.

Design Two stage time series analysis.

Setting 406 cities in 20 countries, with overlapping periods between 1985 and 2015, collected from the database of Multi-City Multi-Country Collaborative Research Network.

Population Deaths for all causes or for external causes only registered in each city within the study period.

Main outcome measures Daily total mortality (all or non-external causes only).

Results A total of 45 165 171 deaths were analysed in the 406 cities. On average, a 10 µg/m3 increase in ozone during the current and previous day was associated with an overall relative risk of mortality of 1.0018 (95% confidence interval 1.0012 to 1.0024). Some heterogeneity was found across countries, with estimates ranging from greater than 1.0020 in the United Kingdom, South Africa, Estonia, and Canada to less than 1.0008 in Mexico and Spain. Short term excess mortality in association with exposure to ozone higher than maximum background levels (70 µg/m3) was 0.26% (95% confidence interval 0.24% to 0.28%), corresponding to 8203 annual excess deaths (95% confidence interval 3525 to 12 840) across the 406 cities studied. The excess remained at 0.20% (0.18% to 0.22%) when restricting to days above the WHO guideline (100 µg/m3), corresponding to 6262 annual excess deaths (1413 to 11 065). Above more lenient thresholds for air quality standards in Europe, America, and China, excess mortality was 0.14%, 0.09%, and 0.05%, respectively.

Conclusions Results suggest that ozone related mortality could be potentially reduced under stricter air quality standards. These findings have relevance for the implementation of efficient clean air interventions and mitigation strategies designed within national and international climate policies.
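
As a rough illustration of how an excess-mortality percentage follows from a relative risk of this size, here is a minimal sketch of standard attributable-fraction arithmetic under an assumed log-linear exposure-response; the daily death count and ozone concentration are invented for illustration, and this is not the authors' two-stage modelling pipeline.

# Illustrative attributable-fraction arithmetic, not the paper's actual model.
import math

RR_PER_10 = 1.0018   # overall relative risk per 10 ug/m3 increase (from the abstract)
THRESHOLD = 70.0     # maximum background ozone level used in the paper, ug/m3

def excess_deaths(daily_deaths, ozone):
    """Deaths attributable to ozone above THRESHOLD on one hypothetical day."""
    if ozone <= THRESHOLD:
        return 0.0
    rr = math.exp(math.log(RR_PER_10) * (ozone - THRESHOLD) / 10.0)
    attributable_fraction = (rr - 1.0) / rr
    return daily_deaths * attributable_fraction

# Hypothetical day with 100 deaths and ozone at 110 ug/m3:
print(round(excess_deaths(100, 110), 2))   # ~0.72 deaths attributable that day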

Online sexual activities: Men engage in solitary-arousal activities, women in partnered-arousal activities; teens are the highest users of mobile digital devices and potentially of online sexual activities

Exploring new measures of online sexual activities, device use, and gender differences. Véronique O. Bélanger Lejars, Charles H. Bélanger, Jamil Razmak. Computers in Human Behavior, February 13 2020, 106300. https://doi.org/10.1016/j.chb.2020.106300

Highlights
•    Participants engage in OSA, particularly solitary-arousal-based-self-videos.
•    Men engage in solitary-arousal activities, women in partnered-arousal activities.
•    Computers are the preferred method for OSAs overall.
•    Smartphone apps were overwhelmingly preferred in partnered-arousal activities.
•    Teens are the highest users of mobile digital devices and potentially of OSA.

Abstract: Online sexual activities (OSA) are any sexual behaviours done using the Internet and are divided into non-arousal, partnered-arousal, and solitary-arousal activities. The means of accessing the Internet have extended past the traditional home computer and the rapid evolution of personal digital devices has led to a lag in the measurement of OSA. The current study’s aim is to explore a new measurement scale that considers the widespread use of personal digital devices and examines gender differences in OSA. Results show that the suggested scale is a reliable measurement of OSA. Women engaged in more partnered-arousal activities whereas men engaged in more solitary-arousal activities. Computer use was the preferred method for OSA overall but smartphone apps were the preferred method for partnered-arousal activities. Some implications for parents and educators, clinicians, and researchers as well as limitations inviting to further research are provided as OSA is an emerging but rapidly evolving field of investigation.

Wednesday, February 12, 2020

Congenital amusia (tone deafness) is a lifelong musical disorder that reportedly affects 4% of the population (a single estimate based on a single test from 1980); the prevalence is closer to 1.5% of the population, and the condition is highly heritable

Peretz, Isabelle, and Dominique T. Vuvan. 2020. “Prevalence of Congenital Amusia.” PsyArXiv. February 12. doi:10.1038/ejhg.2017.15

Abstract: Congenital amusia (commonly known as tone deafness) is a lifelong musical disorder that affects 4% of the population according to a single estimate based on a single test from 1980. Here we present the first large-scale measure of prevalence with a sample of 20 000 participants, which does not rely on self-referral. On the basis of three objective tests and a questionnaire, we show that (a) the prevalence of congenital amusia is only 1.5%, with slightly more females than males, unlike other developmental disorders where males often predominate; (b) self-disclosure is a reliable index of congenital amusia, which suggests that congenital amusia is hereditary, with 46% of first-degree relatives similarly affected; (c) the deficit is not attenuated by musical training; and (d) it emerges in relative isolation from other cognitive disorders, except for spatial orientation problems. Hence, we suggest that congenital amusia is likely to result from genetic variations that affect musical abilities specifically.

Domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis

Revisiting more or less: influence of numerosity and size on potential prey choice in the domestic cat. Jimena Chacha, Péter Szenczi, Daniel González, Sandra Martínez-Byer, Robyn Hudson & Oxána Bánszegi . Animal Cognition, Feb 12 2020. https://link.springer.com/article/10.1007/s10071-020-01351-w

Abstract: Quantity discrimination is of adaptive relevance in a wide range of contexts and across a wide range of species. Trained domestic cats can discriminate between different numbers of dots, and we have shown that they also spontaneously choose between different numbers and sizes of food balls. In the present study we performed two experiments with 24 adult cats to investigate spontaneous quantity discrimination in the more naturalistic context of potential predation. In Experiment 1 we presented each cat with the simultaneous choice between a different number of live prey (1 white mouse vs. 3 white mice), and in Experiment 2 with the simultaneous choice between live prey of different size (1 white mouse vs. 1 white rat). We repeated each experiment six times across 6 weeks, testing half the cats first in Experiment 1 and then in Experiment 2, and the other half in the reverse order. In Experiment 1 the cats more often chose the larger number of small prey (3 mice), and in Experiment 2, more often the small size prey (a mouse). They also showed repeatable individual differences in the choices which they made and in the performance of associated predation-like behaviours. We conclude that domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis.

Non-reproducible: Evidence that social network index is associated with gray matter volume from a data-driven investigation

No strong evidence that social network index is associated with gray matter volume from a data-driven investigation. Chujun Lin et al. Cortex, February 12 2020. https://doi.org/10.1016/j.cortex.2020.01.021

Abstract: Recent studies in adult humans have reported correlations between individual differences in people’s Social Network Index (SNI) and gray matter volume (GMV) across multiple regions of the brain. However, the cortical and subcortical loci identified are inconsistent across studies. These discrepancies might arise because different regions of interest were hypothesized and tested in different studies without controlling for multiple comparisons, and/or from insufficiently large sample sizes to fully protect against statistically unreliable findings. Here we took a data-driven approach in a pre-registered study to comprehensively investigate the relationship between SNI and GMV in every cortical and subcortical region, using three predictive modeling frameworks. We also included psychological predictors such as cognitive and emotional intelligence, personality, and mood. In a sample of healthy adults (n = 92), neither multivariate frameworks (e.g., ridge regression with cross-validation) nor univariate frameworks (e.g., univariate linear regression with cross-validation) showed a significant association between SNI and any GMV or psychological feature after multiple comparison corrections (all R-squared values ≤ 0.1). These results emphasize the importance of large sample sizes and hypothesis-driven studies to derive statistically reliable conclusions, and suggest that future meta-analyses will be needed to more accurately estimate the true effect sizes in this field.
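
For readers curious what a multivariate framework such as "ridge regression with cross-validation" looks like in practice, here is a minimal sketch using scikit-learn; the synthetic data, feature count, and variable names are assumptions for illustration and do not reproduce the study's preprocessing or pre-registered pipeline.

# Minimal sketch: cross-validated ridge regression predicting a social network
# index (SNI) from regional gray matter volumes (GMV). Synthetic data only;
# the number of regions and the feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_regions = 92, 100                   # sample size from the abstract; region count assumed
gmv = rng.normal(size=(n_subjects, n_regions))    # stand-in for regional GMV features
sni = rng.normal(size=n_subjects)                 # stand-in for the Social Network Index

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
r2_scores = cross_val_score(model, gmv, sni, cv=5, scoring="r2")
print("Mean cross-validated R^2:", round(r2_scores.mean(), 3))   # at or below zero with random data, echoing the null result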

Racial slurs “reclaimed” by the targeted group convey affiliation rather than derogation; authors found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals

Perceptions of Racial Slurs Used by Black Individuals Toward White Individuals: Derogation or Affiliation? Conor J. O’Dea, Donald A. Saucier. Journal of Language and Social Psychology, February 11, 2020. https://doi.org/10.1177/0261927X20904983

Abstract: Research suggests that racial slurs may be “reclaimed” by the targeted group to convey affiliation rather than derogation. Although it is most common in intragroup uses (e.g., “nigga” by a Black individual toward another Black individual), intergroup examples of slur reappropriation (e.g., “nigga” by a Black individual toward a White individual) are also common. However, majority and minority group members’ perceptions of intergroup slur reappropriation remain untested. We examined White (Study 1) and Black (Study 2) individuals’ perceptions of the reappropriated terms, “nigga” and “nigger” compared with a control term chosen to be a non-race-related, neutral term (“buddy”), a nonracial derogative term (“asshole”) and a White racial slur (“cracker”) used by a Black individual toward a White individual. We found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals. Our findings have important implications for research on intergroup relations and the reappropriation of slurs.

Keywords: racial slurs, common in-group identity, social dominance theory, affiliation, derogation



Calling into question that contagious yawning is a signal of empathy: No evidence of familiarity, gender or prosociality biases in dogs

Contagious yawning is not a signal of empathy: no evidence of familiarity, gender or prosociality biases in dogs. Patrick Neilands et al. Proceedings of the Royal Society B: Biological Sciences, Volume 287, Issue 1920, February 5 2020. https://doi.org/10.1098/rspb.2019.2236

Abstract: Contagious yawning has been suggested to be a potential signal of empathy in non-human animals. However, few studies have been able to robustly test this claim. Here, we ran a Bayesian multilevel reanalysis of six studies of contagious yawning in dogs. This provided robust support for claims that contagious yawning is present in dogs, but found no evidence that dogs display either a familiarity or gender bias in contagious yawning, two predictions made by the contagious yawning–empathy hypothesis. Furthermore, in an experiment testing the prosociality bias, a novel prediction of the contagious yawning–empathy hypothesis, dogs did not yawn more in response to a prosocial demonstrator than to an antisocial demonstrator. As such, these strands of evidence suggest that contagious yawning, although present in dogs, is not mediated by empathetic mechanisms. This calls into question claims that contagious yawning is a signal of empathy in mammals.

4. Discussion

By combining the data from six different studies, the resulting dataset is the largest used to date to examine the presence of contagious yawning in a non-human mammal. This allowed us to draw conclusions about the presence and absence of contagious yawning and the signatures predicted by the contagious yawning–empathy hypothesis with a greater level of certainty than by relying on individual studies alone. Our reanalysis shows that dogs do exhibit contagious yawning, showing higher probabilities and rates of yawning for yawning demonstrators compared to control demonstrators. This provides robust support for the claims that contagious yawning is present in dogs [35,49–51]. In order to test whether this contagious yawning is related to mechanisms underpinning empathy, we examined this dataset for evidence of the familiarity bias and gender bias. However, dogs in our reanalysis showed no evidence of either of these biases. Similarly, when we ran a novel experiment to look for a prosociality bias, we found that the dogs in our experiment were no more likely to yawn for prosocial demonstrators than antisocial demonstrators. Dogs, therefore, show no evidence for any of the familiarity, gender, or prosociality biases predicted by the contagious yawning–empathy hypothesis. This suggests that contagious yawning in dogs is not mediated by an empathy-related perception–action mechanism [52–54]. The presence of contagious yawning in non-human animals, therefore, cannot be assumed to be evidence for a perception–action mechanism shared between humans and other mammals, as has been previously proposed [1,35,41,58]. That is not to say that some non-human animals do not necessarily experience some form of empathy but that contagious yawning cannot be taken as a diagnostic signal for the presence of these empathetic processes. Furthermore, these results, alongside the arguments put forward by Massen & Gallup in their recent review [37], bring into question the validity of the contagious yawning–empathy hypothesis more broadly.
It is important to acknowledge several caveats to our conclusions. Firstly, in both our reanalysis and experiment, the subjects were primarily responding to interspecific yawns from human demonstrators. While it is possible that dogs would respond differently to conspecific and interspecific yawning, there are several reasons to believe that this is not the case. Research in other species such as chimpanzees suggests that they respond similarly to conspecific and interspecific yawns [41], and, in our reanalysis, controlling for demonstrator type did not improve model fit. Nevertheless, more rigorous comparisons between how dogs respond to conspecific and interspecific yawning would be a useful future line of research. Secondly, it is important to note that the familiarity, gender, and prosociality biases are indirect measures of empathy [37]. As such, care needs to be taken in interpreting these biases and there remains substantial debate over how to do so. For example, it has been argued that both the tendency for children with ASD to be less prone to contagious yawning [83] and the familiarity bias [37,84,85] can be explained in terms of differences in attending to yawners rather than differences in empathetic response. Similarly, the gender bias reported in humans [29] is not straightforward to interpret and there is debate over whether it simply reflects a false positive in the literature [33,34]. By contrast, proponents of the contagious yawning–empathy hypothesis argue that the familiarity bias continues to be found even when controlling for differences in subjects' attention [40,41] and that the negative results for the gender bias in previous studies reflect methodological issues with prior experiments [34]. Furthermore, although alternative hypotheses such as the attentional hypothesis could explain the presence of a single bias such as the familiarity bias, only the contagious yawning–empathy hypothesis predicts the presence of all three biases. As such, testing for all three biases represents a powerful test of the contagious yawning–empathy hypothesis. Finally, searching for a novel signature, the prosociality bias, required a novel experimental methodology where dogs were exposed to a prosocial experimenter that interacted with them and an antisocial experimenter that ignored them. Previous work which used a similar methodology demonstrated that dogs do show a preference for the prosocial demonstrator [73], and so if the contagious yawning–empathy hypothesis is correct, dogs should have reacted with increased yawning to the prosocial demonstrator. However, further work would be useful in confirming the presence or absence of the prosociality bias in dogs and other species such as humans.
Research into contagious yawning has been dominated by the contagious yawning–empathy debate [37]. However, contagious yawning is an interesting phenomenon in its own right as its evolutionary roots and ultimate function remain a mystery [20]. Contagious yawning in animals may be the result of stress [54,57], an affiliation strategy [67], a means of communication [61], or a mechanism to improve collective vigilance within groups [37,68,69] rather than being related to empathy via a perception–action mechanism. Future research into contagious yawning should include a greater focus on testing between these and other hypotheses. For example, the affiliation hypothesis might predict that contagious yawning should be seen more frequently during reconciliation periods after conflict while the collective vigilance hypothesis posits that contagious yawning should increase in response to external disturbances [37,86]. However, it is important to note that these theories are not necessarily mutually exclusive [87] and that factors such as stress appear to influence yawning propensity in complex ways [88,89]. Additionally, an important next step is to consider evidence of contagious yawning outside of mammals. While there has been some work looking at contagious yawning in budgerigars [86,90] and tortoises [91], research has otherwise been sparse outside of the mammalian class.
Future research would benefit from systematically testing contagious yawning across multiple species. One barrier to such projects is that studying a range of different species often requires different experimental set-ups to make such testing feasible. There is a concern that such a range of methodological approaches may make cross-species and cross-study comparisons difficult, if not impossible [35,66]. However, our finding that the effect of treatment on yawning probabilities and rates remains stable when controlling for various aspects of study design suggests that the presence of contagious yawning is relatively robust to differences in experimental design. As such, while it is important to use broadly similar designs (for instance, comparing animals’ yawning rates when exposed to either a yawning demonstrator or control demonstrator), there could be considerable flexibility in other aspects of study design. For example, our results suggest that animals' yawning probabilities and rates to either live demonstrators or recorded demonstrators are comparable. Therefore, our findings suggest that more ambitious cross-species work can be carried out with confidence in the validity of the subsequent comparisons.
To conclude, our results provide robust support for the hypothesis that contagious yawning is found in dogs, the first non-human species of mammal where it has been clearly shown outside of chimpanzees. However, we found no evidence that dogs yawn more in response to either familiar human yawners compared to unfamiliar human yawners, or to prosocial human yawners compared to antisocial human yawners. Additionally, we found no evidence that female dogs were more likely to yawn in response to a yawning demonstrator than male dogs. As such, these findings cast doubt on the widespread assertion that contagious yawning is mediated by the same perception–action mechanism as empathy [1,6,35,41,58]. Instead, they support recent claims that there is no link between contagious yawning and empathetic processes [37,67] and underline the importance of developing more direct measures of empathy in non-human animals [37,92]. However, while our results suggest that researchers cannot rely on contagious yawning as a diagnostic signal of empathy, our additional findings that the effect of contagious yawning appears to be robust to variations in experimental methods suggest that cross-species comparisons may be a powerful way to disentangle the evolutionary roots of this behaviour.
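
To give a concrete sense of what a Bayesian multilevel (re)analysis of yawning data can look like, here is a minimal sketch of a hierarchical logistic model in PyMC with random intercepts for dog and study; the simulated data, column names, and priors are assumptions for illustration, and this is not the authors' exact model specification.

# Minimal sketch of a hierarchical Bayesian logistic model for yawn occurrence,
# with random intercepts per dog and per study. Simulated data; not the
# authors' model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_trials, n_dogs, n_studies = 200, 40, 6
yawned = rng.binomial(1, 0.2, size=n_trials)       # 1 = dog yawned on the trial
condition = rng.binomial(1, 0.5, size=n_trials)    # 1 = yawning demonstrator, 0 = control
dog_id = rng.integers(0, n_dogs, size=n_trials)
study_id = rng.integers(0, n_studies, size=n_trials)

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0.0, 1.5)
    beta_condition = pm.Normal("beta_condition", 0.0, 1.0)      # contagion effect of interest
    sd_dog = pm.HalfNormal("sd_dog", 1.0)
    sd_study = pm.HalfNormal("sd_study", 1.0)
    dog_re = pm.Normal("dog_re", 0.0, sd_dog, shape=n_dogs)     # random intercept per dog
    study_re = pm.Normal("study_re", 0.0, sd_study, shape=n_studies)  # random intercept per study
    logit_p = intercept + beta_condition * condition + dog_re[dog_id] + study_re[study_id]
    pm.Bernoulli("obs", logit_p=logit_p, observed=yawned)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)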

Of what the authors considered four important predictors of subjective well-being (marriage, employment, prosociality, & life meaning), marriage showed only very small effects, & employment had larger effects that peaked around age 50 years

Subjective Well-Being Around the World: Trends and Predictors Across the Life Span. Andrew T. Jebb. Psychological Science, February 11, 2020. https://doi.org/10.1177/0956797619898826

Abstract: Using representative cross-sections from 166 nations (more than 1.7 million respondents), we examined differences in three measures of subjective well-being over the life span. Globally, and in the individual regions of the world, we found only very small differences in life satisfaction and negative affect. By contrast, decreases in positive affect were larger. We then examined four important predictors of subjective well-being and how their associations changed: marriage, employment, prosociality, and life meaning. These predictors were typically associated with higher subjective well-being over the life span in every world region. Marriage showed only very small associations for the three outcomes, whereas employment had larger effects that peaked around age 50 years. Prosociality had practically significant associations only with positive affect, and life meaning had strong, consistent associations with all subjective-well-being measures across regions and ages. These findings enhance our understanding of subjective-well-being patterns and what matters for subjective well-being across the life span.

Keywords: subjective well-being, cross-cultural, aging, life meaning, prosocial behavior

You may be more original than you think: Predictable biases in self-assessment of originality

You may be more original than you think: Predictable biases in self-assessment of originality. Yael Sidi et al. Acta Psychologica, Volume 203, February 2020, 103002. https://doi.org/10.1016/j.actpsy.2019.103002

Highlights
•    Self-judgments of originality are sensitive to the serial order effect.
•    Originality judgments reveal under-estimation robustly and resiliently.
•    People discriminate well between more and less original ideas.
•    There is a double dissociation between actual originality and originality judgments.

Abstract: How accurate are individuals in judging the originality of their own ideas? Most metacognitive research has focused on well-defined tasks, such as learning, memory, and problem solving, providing limited insight into ill-defined tasks. The present study introduces a novel metacognitive self-judgment of originality, defined as assessments of the uniqueness of an idea in a given context. In three experiments, we examined the reliability, potential biases, and factors affecting originality judgments. Using an ideation task, designed to assess the ability to generate multiple divergent ideas, we show that people accurately acknowledge the serial order effect—judging later ideas as more original than earlier ideas. However, they systematically underestimate their ideas' originality. We employed a manipulation for affecting actual originality level, which did not affect originality judgments, and another one designed to affect originality judgments, which did not affect actual originality performance. This double dissociation between judgments and performance calls for future research to expose additional factors underlying originality judgments.

Contrary to common views, use of social media and online portals fosters more visits to news sites and a greater variety of news sites visited

How social network sites and other online intermediaries increase exposure to news. Michael Scharkow, Frank Mangold, Sebastian Stier, and Johannes Breuer. PNAS February 11, 2020 117 (6) 2761-2763; January 27, 2020. https://doi.org/10.1073/pnas.1918279117

Abstract: Research has prominently assumed that social media and web portals that aggregate news restrict the diversity of content that users are exposed to by tailoring news diets toward the users’ preferences. In our empirical test of this argument, we apply a random-effects within–between model to two large representative datasets of individual web browsing histories. This approach allows us to better encapsulate the effects of social media and other intermediaries on news exposure. We find strong evidence that intermediaries foster more varied online news diets. The results call into question fears about the vanishing potential for incidental news exposure in digital media environments.

Keywords: news exposure, online media use, web tracking data

People can come across news and other internet offerings in a variety of ways, for example, by visiting their favorite websites, using search engines, or following recommendations from contacts on social media (1). These routes do not necessarily lead people to the same venues. While traditionally considered as an important ingredient of well-functioning democratic societies, getting news as a byproduct of other media-related activities has been assumed to wane in the online sphere. Intermediaries like social networking sites (SNS) and search engines are regarded with particular suspicion, often criticized for fostering news avoidance and selective exposure (2). This assumption has been, perhaps most prominently, ingrained in the “filter bubble” thesis, positing that search and recommendation algorithms bias news diets toward users’ preferences and, thus, decrease content diversity (3). On the other hand, incidental news exposure (INE) due to other online activities has received much scholarly attention for several decades (4). Contrary to widely held assumptions, recent INE research found that SNS users have more rather than less diverse news diets than nonusers. For example, one study showed that SNS users consumed almost twice the number of news outlets in the previous week as did nonusers (2). Similar results emerged regarding the use of web aggregators (portals) and search engines, although people may use search engines in a more goal-driven fashion compared to SNS (1).

In previous studies, SNS-based news exposure was typically measured by asking respondents whether they are (unintentionally) exposed to news via social media. Like many survey studies, this approach naturally suffers from the limited accuracy and reliability of self-reports (5). More specifically, recent work has criticized self-report measures for being biased toward active news choices and routine use (6) and being particularly inaccurate when people access news via intermediaries (7). To alleviate these limitations, some studies have used log data to estimate the quantity and quality of online news exposure, for example, in terms of exposure to cross-cutting news (8, 9). However, these studies have focused only on single social media platforms instead of different intermediary routes to news. Other recent studies (1, 10) have traced direct and indirect pathways to online news using browser logs, but have not distinguished nonregular—and therefore possibly incidental—news exposure from regular, typically more intentional or routinized forms of news consumption online. In other words, the question whether visiting SNS more often (than usual) actually leads to more varied news exposure (than usual) essentially remains unanswered. This problem concerns almost all studies on the use and effects of online media, and has received considerable attention in recent communication research (11). We argue that positive within-person effects of visiting intermediary sites on online news exposure are a necessary (although not sufficient, since even nonregular visits could be intentional) precondition for INE, and, therefore, testing for such effects is a useful endeavor. We address this question using a statistical model that distinguishes between stable between-person differences and within-person effects, that is, the random-effects within–between (REWB) model (12). Investigating within-person effects has additional value by safeguarding causal inferences against bias due to (previously) unmeasured person-level confounders. We apply the REWB model to two large, representative tracking datasets of individual-level browsing behavior in Germany, collected independently in 2012 and 2018. This allows us not only to compare within- and between-person effects but also to analyze possible changes in the effects of SNS (Facebook, Twitter) and intermediaries (Google, web portals) over recent years. Specifically, we investigate their effects on the amount and variety of online news exposure. Using this approach enables us to replicate and extend two recent survey studies (2, 13) that looked at the effects of SNS, web portals, and search engines on 1) overall online news exposure and 2) the diversity of people’s online news diets.
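
As a rough illustration of the within-between decomposition behind a REWB-style model, here is a minimal sketch using a linear mixed model in statsmodels; the simulated panel, column names, and Gaussian outcome are simplifying assumptions for illustration and do not reproduce the authors' specification.

# Minimal sketch of the within-between decomposition used in REWB-style models,
# approximated here with a linear mixed model. Simulated data; column names
# are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "person": np.repeat(np.arange(100), 10),          # 100 people, 10 observations each
    "sns_visits": rng.poisson(5, size=1000),          # stand-in for visits to SNS
    "news_visits": rng.poisson(2, size=1000),         # stand-in for visits to news sites
})

# Split the predictor into a stable between-person mean and a within-person deviation
df["sns_between"] = df.groupby("person")["sns_visits"].transform("mean")
df["sns_within"] = df["sns_visits"] - df["sns_between"]

model = smf.mixedlm("news_visits ~ sns_within + sns_between", df, groups=df["person"])
result = model.fit()
print(result.summary())   # reports separate within- and between-person coefficients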


Conclusion
We used large-scale observational data to avoid the limited reliability and validity of self-reports on news exposure. Leveraging the potential of such data with the REWB model, our study provides strong evidence that getting more and more-diverse news as a consequence of other media-related activities is a common phenomenon in the online sphere. The findings contradict widely held concerns that social media and web portals specifically contribute to news avoidance and restrict the diversity of news diets. Note that we followed previous studies and measured the variety of news diets by counting the number of outlets visited. Given the overall low frequency of news visits, intermediaries add diversity to the news diets of the large majority of participants with a small news repertoire (2). While we cannot say that outlet variety always equals viewpoint variety, prior research has shown that using a larger number of online news sources typically translates into more-diverse overall news exposure (15). In contrast to previous studies (9, 10), we cannot quantify diversity in terms of cross-cutting exposure, but note that previous studies have shown little evidence for strong partisan alignments of news audiences in Germany (16) on the outlet level, so that variety would have to be measured on the level of individual news items, which requires URL-level tracking and content analysis data. In addition, future combinations of web tracking with experience sampling surveys are needed to disentangle in what instances nonregular news use is entirely nonintentional and how the respective contents specifically affect the diversity in news diets.

Tuesday, February 11, 2020

We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people; but in nonreligious cultural contexts, religiosity conferred no such longevity benefit

Ebert, T., Gebauer, J. E., Talman, J. R., & Rentfrow, P. J. (2020). Religious people only live longer in religious cultural contexts: A gravestone analysis. Journal of Personality and Social Psychology, Feb 2020. https://doi.org/10.1037/pspa0000187

Abstract: Religious people live longer than nonreligious people, according to a staple of social science research. Yet, are those longevity benefits an inherent feature of religiosity? To find out, we coded gravestone inscriptions and imagery to assess the religiosity and longevity of 6,400 deceased people from religious and nonreligious U.S. counties. We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people. In nonreligious cultural contexts, however, religiosity conferred no such longevity benefits. Evidently, a longer life is not an inherent feature of religiosity. Instead, religious people only live longer in religious cultural contexts where religiosity is valued. Our study answers a fundamental question on the nature of religiosity and showcases the scientific potential of gravestone analyses.


Managing Systemic Financial Crises: New Lessons and Lessons Relearned

Managing Systemic Financial Crises: New Lessons and Lessons Relearned. Marina Moretti; Marc C Dobler; Alvaro Piris. IMF Departmental Paper No. 20/05, February 11, 2020. https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2020/02/10/Managing-Systemic-Financial-Crises-New-Lessons-and-Lessons-Relearned-48626

Chapter 1 Introduction
Systemic financial crises have been a recurring feature of economies in modern times. Panics, wherein collapsing trust in the banking system and creditor runs have significant negative consequences for economic activity—rare events in any one country—have occurred relatively frequently across the IMF membership. Common causes include high leverage, booming credit, an erosion of underwriting standards, exposure to rapidly rising property prices and other asset bubbles, excessive exposure to the government, inadequate supervision, and often a high external current account deficit. Financial distress typically lasts several years and is associated with large economic contractions and high fiscal costs (Laeven and Valencia 2018). Figure 1 shows the prevalence of systemic financial crises over the past 30 years, including the number of crisis episodes each year. The global financial crisis (GFC) was just such a panic, albeit one that transcended national and regional boundaries.
IMF staff experience in helping countries manage systemic banking crises has evolved over time. Major financial sector problems have been addressed in the context of IMF-supported programs primarily in emerging market economies, developing countries and, more recently, in some advanced economies during the GFC. The IMF approach to managing these events was summarized in a 2003 paper (Hoelscher and Quintyn 2003) before there was international consensus on legal frameworks, preparedness, and policy approaches, and when practices varied widely across the membership. The principles outlined in that paper built on staff experience in a range of countries—notably, Indonesia, Republic of Korea, Russia, and Thailand in the late 1990s; and Argentina, Ecuador, Turkey, and Uruguay in the early 2000s. It emphasized that managing a systemic banking crisis is a complex, multiyear process and presented tools available as part of a comprehensive framework for addressing systemic banking problems while minimizing taxpayers’ costs. Although these core concepts and principles remain largely valid today, they merit a revisit following the experiences and lessons learned from the GFC.
The GFC shared similarities with past systemic crises, albeit with an impact felt well beyond directly affected countries (Claessens and others 2010). As in previous episodes of financial distress, the countries most affected by the GFC—the US starting in 2008 and several countries in Europe—saw creditor runs and contagion across institutions, significant fiscal and quasi-fiscal outlays, and a sharp contraction in credit and economic activity (see Figure 1). The reason the impact was more widely felt across the global economy: the crisis originated in advanced economies with large financial sectors. These countries embodied a substantial portion of global economic output, trade, and financial activity and affected internationally active financial firms providing significant cross-border services. The speed of transmission of financial distress across borders was unprecedented, given the complex and opaque financial linkages between financial firms. These factors introduced new challenges, as they impacted the effectiveness of many existing crisis management tools.
Reflecting these new challenges, individual country responses during the GFC differed from past experiences in important respects (Table 1):
The size and scope of liquidity support provided by major central banks was unprecedented. More liquidity was provided to more counterparties for longer periods against a wider range of collateral. Much of this support was through liquidity facilities open to all market participants, while some was provided as emergency liquidity assistance (ELA) to individual institutions. This occurred against the backdrop of accommodative monetary policy and quantitative easing.
Explicit liability guarantees were more selectively deployed than in past crises, when blanket guarantees covering a wide set of liabilities were more commonly used by authorities. During the GFC (with some notable exceptions), explicit liability guarantees typically applied only to specific institutions, new debt issuance, specific asset classes, or were capped (for example, a higher level of deposit insurance). However, implicit guarantees were widespread, as demonstrated by the extensive public solvency support provided to financial institutions and markets. Systemic financial institutions were rarely liquidated or resolved,1 and, of those that were, some proved destabilizing for the broader financial system. This trend reflected in part inadequate powers to resolve such firms in an orderly way.
Difficulties in achieving effective cross-border cooperation in resolution between authorities in different countries came to the fore, given the global footprint of some weak institutions. The lack of mechanisms to enforce resolution measures on a cross-border basis and cooperate more broadly led, in some cases, to the breakup of cross-border groups into national components.
More emphasis was placed on banks’ ability to manage nonperforming assets internally or through market disposals, with less reliance on centralized asset management companies (AMCs)—public agencies that purchase and manage nonperforming loans (NPLs). Protracted weak growth in some countries, the large scale of the problem, and gaps in legal frameworks also meant that progress in addressing distressed assets and deleveraging private sector balance sheets was slower in some countries than in previous crises.

Table 1. Lessons on the Design of the Financial Safety Net

What is Similar? | What is New?
• Escalating early intervention and enforcement measures | More intrusive supervision and early intervention powers
• Special resolution regimes for banks | A new international standard on resolution regimes for systemic financial institutions requiring a range of resolution powers and tools
• Establishing deposit insurance (if prior conditions enable)1 with adequate ex ante funding, available to fund resolution on a least cost basis | An international standard on deposit insurance, requiring ex ante funding and no coinsurance; desirability of depositor preference
• Capacity to provide emergency liquidity to banks, at the discretion of the central bank | Liquidity assistance frameworks with broader eligibility conditions, collateral, and safeguards

1 IMF staff does not recommend establishing a deposit insurance system in countries with weak banking supervision, ineffective resolution regimes, and identifiably weak banks. Doing so would expose a nascent scheme to significant risk (when it has yet to build adequate funding and operational capacity) and could undermine depositor confidence.
The GFC was a watershed. Policymakers were confronted with gaps and weaknesses in their legal and policy frameworks to address bank liquidity and solvency problems, in their understanding of systemic risk in institutions and markets, and in domestic and international cooperation. Under these constraints, the policy responses that were deployed put substantial public resources at risk. While the response was ultimately successful in stabilizing financial systems and the macroeconomy, the fiscal and economic costs were high. The far-reaching impact of the GFC provided impetus for a major overhaul of financial sector oversight (Financial Stability Forum 2008; IMF 2018). The regulatory reform agenda agreed to by the Group of Twenty leaders in 2009 elevated the discussions to the highest policy level and kept international attention focused on establishing a stronger set of globally consistent rules. The new architecture aimed to (1) enhance capital buffers and reduce leverage and financial procyclicality; (2) contain funding mismatches and currency risk; (3) enhance the regulation and supervision of large and interconnected institutions, including by expanding the supervisory perimeter; (4) improve the supervision of a complex financial system; (5) align governance and compensation practices of banks with prudent risk taking; (6) overhaul resolution regimes of large financial institutions; and (7) introduce macroprudential policies. Through its multilateral and bilateral surveillance of its membership, including the Financial Sector Assessment Program (FSAP), Article IV missions, and its Global Financial Stability Reports, the IMF has contributed to implementing the regulatory reform agenda.
This paper summarizes the general principles, strategies, and techniques for preparing for and managing systemic banking crises, based on the views and experience of IMF staff and considering developments since the GFC. The paper does not summarize the causes of the GFC, its evolution, or the policy responses adopted; these topics have been well documented elsewhere.2 Moreover, it does not cover the full reform agenda since the crisis, but only two parts—one on key elements of a legal and operational framework for crisis preparedness (the “financial safety net”) and the other on operational strategies and techniques to manage systemic crises if they occur. Each section summarizes relevant lessons learned during the GFC and other recent episodes of financial distress, merging them with preexisting advice to give a complete picture of the main elements of IMF staff advice to member countries on operational aspects of crisis preparedness and management. The advice builds on and is consistent with international financial standards, tailored to country-specific circumstances based on IMF staff crisis experience. The advice recognizes that every crisis is different and that managing systemic failures is exceptionally challenging, both operationally and politically. Nonetheless, better-prepared authorities are less likely to resort to bailing out bank shareholders and creditors when facing such circumstances.
Part I, on crisis preparedness, outlines the design and operational features of a well-designed financial safety net. It discusses how staff advice on these issues has evolved, drawing from the international standards and good practice that emerged in the aftermath of the GFC. Effective financial safety nets play an important role in minimizing the risk of systemwide financial distress—by increasing the likelihood that failing financial institutions can be resolved without triggering financial instability. However, they cannot eliminate that risk, particularly at times of severe stress.
Part II, on crisis management, discusses aspects of a policy response to a full-blown banking crisis. It details the evolution of IMF advice in light of what worked well—or less well—during the GFC, reflecting the experience of IMF staff in actual crisis situations. The narrative is organized around policies for dealing with three distinct aspects3 of a systemic banking crisis:

*  Containment—strategies and techniques to stem creditor runs and stabilize financial sector liquidity in the acute phase of panic and high uncertainty. This phase is typically short-lived, with an escalating policy response as needed to avoid the collapse of the financial system.
*  Restructuring and resolution—strategies and techniques to diagnose bank soundness and viability, and to recapitalize or resolve failing financial institutions, which are typically implemented over the following year or more, depending on the severity of the situation.
*  Dealing with distressed assets—strategies and techniques to clean up private sector balance sheets that first identify and then remove impediments to effective resolution of distressed assets, with implementation likely to stretch over several years.

IMF member countries have continued to cope with financial panics and widespread financial sector weakness. The IMF remains fully engaged on these issues, often in the context of IMF-supported programs, with a significant focus on managing systemic problems and financial sector reforms. Staff continue to provide support and advice on supervisory practice, resolution, deposit insurance, and emergency liquidity in IMF member countries, learning from experience and adapting policy advice to developments and country-specific circumstances.


Box 9. Dealing with Excessive Related-Party Exposures

Excessive related-party exposures present a major risk to financial stability. Related-party loans that go unreported conceal credit and concentration risk and may be on preferred terms, reducing bank profitability and solvency. Persistently high related-party exposures also hold down economic growth by tying up capital that could otherwise be used to provide lending to legitimate, creditworthy businesses on an arm's-length basis. Related-party exposures complicate bank resolution, as shareholders whose rights have been suspended have an incentive to default on their loans to the bank.

Opaque bank ownership greatly facilitates the hiding of related-party exposures and transactions. Opaque ownership is associated with poor governance, AML/CFT violations, and fraudulent activities. Banks without clear ultimate beneficial owners cannot count on shareholder support in times of crisis, and the quality of their capital cannot be verified. Moreover, unknown owners cannot be held accountable for criminal actions leading to a bank’s failure.
Resolving these problems requires a three-pillar approach. Legal reforms are needed to lay the foundation for targeted bank diagnostics and effective enforcement actions:

*  Legal reforms to introduce international standards for transparent disclosure and monitoring of bank owners and related parties—including prudent limits, strict conflict of interest rules on the processes and procedures for dealing with related parties, and escalating enforcement measures. Non-transparent ownership should be made a legal ground for license revocation or resolution, and the supervisor authorized to presume a related party under certain circumstances. This shifts from supervisors to banks the “burden of proof”—to demonstrate that a suspicious transaction is not with a related party.

*  Bank diagnostics are targeted at identifying ultimate beneficial owners and related-party exposures and transactions and assessing compliance with prudential lending limits for related-party and large exposures. The criteria for identification include control, economic dependency, and acting in concert. Identification of related-party transactions should also consider their risk-related features, such as the existence of preferential terms, the quality of documentation, and internal controls over the transactions.

*  Enforcement actions are taken to (1) remove unsuitable bank shareholders—that is, shareholders whose ultimate beneficial owner is not identified or who are otherwise found to be unsuitable; and (2) unwind excessive related-party exposures through repayment or disposal of the exposure, or resolution of the relationship (change in ownership of the bank or the borrower).

The three-pillar approach is best implemented in the context of a comprehensive financial sector strategy. There may not be enough time to implement legal reforms during early intervention or the resolution of systemic banks. In such situations, suspected related-party exposures and liabilities must be swiftly identified and ringfenced. Once the system is stabilized, however, the three-pillar approach should be implemented for all banks (including those in liquidation).

Source: Karlsdóttir and others (forthcoming).