Thursday, February 13, 2020

Thought experiment in which a person was split into two continuers: Most decided based on the continuity of memory, personality, and psychology, with some consideration given to the body and social relations


Putting your money where your self is: Connecting dimensions of closeness and theories of personal identity. Jan K. Woike, Philip Collard, Bruce Hood. PLOS ONE, February 12, 2020. https://doi.org/10.1371/journal.pone.0228271

Abstract: Studying personal identity, the continuity and sameness of persons across lifetimes, is notoriously difficult and competing conceptualizations exist within philosophy and psychology. Personal reidentification, linking persons between points in time, is a fundamental step in allocating merit and blame and assigning rights and privileges. Based on Nozick’s (1981) closest continuer theory we develop a theoretical framework that explicitly invites a meaningful empirical approach and offers a constructive, integrative solution to current disputes about appropriate experiments. Following Nozick, reidentification involves judging continuers on a metric of continuity and choosing the continuer with the highest acceptable value on this metric. We explore both the metric and its implications for personal identity. Since James (1890), academic theories have variously attributed personal identity to the continuity of memories, psychology, bodies, social networks, and possessions. In our experiments, we measure how participants (N = 1,525) weighted the relative contributions of these five dimensions in hypothetical fission accidents, in which a person was split into two continuers. Participants allocated compensation money (Study 1) or adjudicated inheritance claims (Study 2) and reidentified the original person. Most decided based on the continuity of memory, personality, and psychology, with some consideration given to the body and social relations. Importantly, many participants identified the original with both continuers simultaneously, violating the transitivity of identity relations. We discuss the findings and their relevance for philosophy and psychology and place our approach within the current theoretical and empirical landscape.

Study materials, supporting analyses, and qualitative coding scheme: https://doi.org/10.1371/journal.pone.0228271.s001

Excerpts of discussion...

The fission scenario as scientific fiction: Benefits and validity concerns

Not all philosophers embrace the use of hypothetical scenarios in studying identity [129–131]. Some are critical of fission thought experiments, arguing that the intuitions derived from such fantasy accounts—in which anything goes—violate natural worlds and are therefore not valid measures of how people conceptualize identity in natural settings (e.g., [132]). Like contaminated test tubes in biochemical experiments, results obtained with faulty scenarios would have to be considered dubious [131]. Scholl [130] further argued that reactions to “bizarre scenarios” with forced responses might “tell us more about heuristic getting-through-the-experiment strategies than about actual metaphysical intuitions” (p. 580). One could compare these scenarios to visual illusions that generate experiences that are at odds with reality—but these experiences nevertheless often provide insight into the mechanisms that generate the illusion. For example, size and distance illusions reveal the computation the brain uses to calculate physical dimensions even in normal cases. Likewise, intuitions derived from cognitive processes during these unreal and outlandish examples may not necessarily be meaningful when measured against the constraints of reality [130] but rather cast light on how we use and reason with the concept of identity. While the participants’ open answers showed a degree of confusion in a minority of subjects, most answers reflected a well-considered and principled approach to the questions. Studies in experimental philosophy often feature unusual scenarios, and replicability in this sub-field compares favorably with replication rates in other domains of psychology [133].

Our science fiction scenario might be criticized for its lack of realism. It makes crucial assumptions that contradict the scientific understanding of how the components that we neatly separated in the story interact and co-depend on each other. According to Harle [68], a brain cannot be considered to be independent of the body; it would immediately adapt to a new body, rework the inner representation, and react to changes in social responses. Brain and body are interdependent in complex ways [39, 51, 63]. Motor learning [134] also involves two components, body and memory; muscles cannot work without neuronal input. The body and appearance component used in our studies could be understood to either include or not include brain matter. If included, all other components would depend on this component, as there can be no psychology without consciousness. If, alternatively, higher mental functions and consciousness are regarded as part of psychology or memory (which seems to be the approach taken by at least some participants), the brain, as a bodily organ that makes psychological functioning possible, can still be seen as a source of tension in the scenario framework.

Participants responded to our scenario only from a first-person perspective. First-person evaluations of identity and survival might differ from third-person evaluations [128, 135]. For one thing, many legal, practical, and social concerns can be fulfilled by a person who is a spontaneous true copy of the original person. How much participants care about the disruption of continuity might therefore differ depending on whether the replaced person is them or a neutral other person [30]. Rorty [18] distinguished between an external observer’s perspective on individual identification and an individual’s internal perspective; features essential to an individual’s self-perspective might be irrelevant for an observer, and not all philosophers assume that first-person judgments have final authority [136, 137]. On the other hand, at least one study explicitly testing for the effect of perspective on intuitions about identity based on [28] found no substantial differences between a first-person and a third-person perspective [31]. At the same time, continuers were introduced from a third-person perspective. This shift was necessary to avoid a pre-judgment of the question of how the original person relates to them, but might create a perceived distance. This could make it easier to give a negative answer to the survival question, but would have a symmetric influence on identity responses for both continuers.

It is less clear which perspective is better suited to evaluate claims about identity and survival. We assumed that involving the participant would be the best way to increase attention to the situation and diligence in responding. We induced a connection between our categories and their real-world instances, as perceived by the participants. A participant evaluating the importance of “body” in the money allocation problem will thus take the perception of her own body into consideration, which may lead to different results than when considering the importance of a body. This fact might play into our finding that a subgroup of participants placed a negative value on the continuity of the body.

On the other hand, our scenario avoids a complication common to many other fission scenarios: By splitting possessions and friends between survivors, it sidesteps the problem of nonsharable singular goods [34] with symmetric claims from two sides. These claims include access to special objects and, more significantly, the chance to engage in special relationships with others [51]. On the flipside, as Schechtman [138] argued, the scope and time frame of fission scenarios do not allow for societal reactions to the products of fission, which might include changes to the concept of persons and identity. Our approach avoids the confound of money being allocated for reasons of loss or pity, as both survivors emerge with their lives, bodies, and environments fully intact (if not unchanged). Williams [28] predicted a framing effect in the understanding of the scenario, a type of “leading the witness,” with a reversal of intuitions regarding identity depending on whether the story is told as involving body-swapping (transferring a person’s memory to a new body) or mind-manipulation (creating new memories in a person’s body). Empirical work has confirmed these predictions [31]. To counteract such potential framing effects, we used parallel language for all components varied in continuers; none of the components was privileged in the description of the scenario. Further, similar tensions involved in our scenario are discussed in section 1.3. in S1 Supporting Information.

Invoking hyperspace travel buys some degree of freedom, but at the cost of physical implausibility. However, factual impossibility does not prevent imaginability, and as Johnston [139] argued, “such per impossible thought experiments might nonetheless teach us about the relative importance of things that invariably go together” (p. 601). It is precisely because some aspects are not easily separated in realistic scenarios that we chose a fantasy scenario, allowing us to explore intuitions whose tests would otherwise be confounded.

Our scenario remains within the conventions of popular films (e.g., Total Recall or Blade Runner—both of which fittingly now exist in two versions) that deal with cases of copied or artificial memories and identities [140]. Body and mind transfer were considered intelligible by Locke [9], and the nature of personal identity is a recurrent theme in literature. Many readers have appreciated Franz Kafka’s tale of Gregor Samsa’s sudden metamorphosis into an insect [12] or Ovid’s metamorphosed subjects who survive the transformation [39]. Children become acquainted with bodily transfer in fairy tales like The Frog Prince [23, 120] or Hans Christian Andersen’s The Little Mermaid [39]. Johnson [141] takes this mere imaginability as an argument against declaring bodily continuity as a logical precondition for personal identity. Our scenario is no more fantastic than other thought experiments that have been employed to disentangle identity from its natural correlates—through neurosurgeons [6, 38, 51, 128, 142], amoeba-like duplication [127, 143], cloning [144], parallel universes [30], or even swamp-beings [66]. To come full circle, some of the philosophical conceptions and puzzle cases are reproduced in cultural creations and thereby further embedded into cultural consciousness [39]. It is possible that there is no theory of personal identity that would be able to “satisfy all intuitions about all devisable scenarios” (in [31], p. 297), but the advantage of imaginary scenarios lies in their power to isolate phenomena, which makes it possible to attend to specific aspects of our concepts [145]. In our scenarios we can separate changes from their ordinary causes and study decisions that may not occur in the world, but that probe concepts we apply to the world. In addition, our use of a novel paradigm limits the danger of previous exposure to similar questions and potential confusion, which is a concern with crowdsourced participants [146]. 
We therefore maintain that there is value in our approach of pitting dimensions that are generally accepted as dimensions of closeness against one another.


Dimensions of closeness

Our scenario bundles features into dimensions that might be further differentiated. For example, in contrast to [65], we did not differentiate between personality and psychology, on the one hand, and moral values, on the other. Evidence from several studies considering real-world personal transformations has indicated that identity judgments are most heavily influenced by changes or non-changes in moral values [65]. Changes in morality were judged to be more relevant than changes in (non-moral) personality attributes or memory. In a similar vein, Strohminger and Nichols [116] found that changes in morality in patients with neurodegenerative diseases strongly determined changes in perceived identity. Nunner-Winkler [21] reported on a study asking participants which changes would lead them to see themselves as a different person. Ideas about right and wrong and sex membership were considered to be quite important; appearance and money were considered less relevant (although some participants rated looks to be important, consistent with our distributional results).

The distinction between moral and nonmoral traits is somewhat ambiguous (e.g., conscientiousness was considered as a moral trait rather than a personality factor in [65]). One person’s morals do not and cannot exist in a social vacuum; moral consensus is central for co-ordination, affiliation and conflict resolution. Morality stands in complex relations to beliefs, values, behaviors and communities. It also depends on memory in nontrivial ways. Some of the induced changes in the scenarios even involved the loss of the moral faculty with a likely ripple effect reaching other dimensions of the self. If this perspective is true, the relevance of morality for personal identity might lie in these possibly disruptive consequences of changing one’s morals in relation to one’s environment and not because of its self-defining importance. Evidence for this interpretation is found in two studies demonstrating that changes in widely shared (and therefore less unique to the individual) moral values are considered to lead to more changes to the person than changes in controversial moral beliefs [106, 147]. For controversial moral beliefs, which might be considered most defining and informative for describing a person’s self, the effect was weaker than for memory. Also, the changes in memory induced by our scenarios would induce both errors of omission and errors of commission, which can have differential impacts on moral behavior [148]. In contrast, some studies focus mostly on omission errors due to memory changes (e.g., [118]), which are described as having more limited effects on behavior towards others than changes in morality. In our scenarios, dimensions are replaced by random sampling from the participant’s reference population, which is a different operationalization of change. Heiphetz and colleagues [147] showed, for example, how the perceived change was mediated by perceived disruptions of friendships.

Some argue that morality is not even conceivable without personal identity [6, 10, 58]. Most people also seem to have inflated beliefs of their own morality [149]. In separate evaluations, our participants ranked memories and psychology to be more important for identity than moral values. Nonetheless, a further decomposition of the broad headings we used in our study would be feasible and interesting in future research. In particular, the role of moral traits and behavioral tendencies could be considered separately, even within a similar factorial setup as the one we employed.

The social dimension could be further differentiated, as well. Parents were considered to be more important than friends in Study 2, and Nunner-Winkler [21] reported similar findings. Of course, parents influence a person directly through the transmission of genes as well as indirectly through instruction and parenting behavior; changing one’s parents cannot be considered a merely social manipulation and could well have an impact on every other dimension.

In our scenario, changes in memory are considered universal and all-or-nothing. In real life, however, memories of self or self-knowledge seem to be better preserved than other knowledge, even in semantic dementia [20], and a subjective belief of self-persistence is demonstrated by patients with Alzheimer’s disease [150]. Alternatively, the sense of self may be impaired while episodic memories stay intact—as in the case of R.B. [67]. Further, separating specific psychological aspects or memory from a person’s social context and network of activities might prove impossible in practice [18]. There is also some overlap between criteria based on psychology and memory, but under the assumption that two organisms with the same memories might nonetheless differ in personality and psychology (e.g., based on differences in needs, intentions, values, or goals), it is not necessary that the criteria coincide. In fact, the psychological continuity criterion has been proposed as a critique of a narrow Lockean focus on memory [11].

Critics of our scenario might further object that our random collage of features in the two continuers destroys the causal connection between past and present states necessary for identity [6, 30]. Preschool children already individuate objects and persons spatio-temporally [23, 151] and, following Sagi and Rips [152], causal histories receive special attention in linguistic disambiguation in discourse. In all our scenarios (except the two extreme cases with exact duplicates), change in characteristics was induced by an accident, an unusual life event that disrupts spatio-temporal continuity. This fact might strengthen impressions that identity is not preserved. According to data reported in [21], for example, participants regarded changes in attitudes or beliefs that were due to normal life experiences as non-consequential for identity judgments—as opposed to changes induced by brainwashing, severe medical conditions, or accidents. Therefore, the nature of the transformation might play a role in our participants’ judgments. Note that both continuers underwent the same procedure, so this factor cannot explain differential assessments. Although the abruptness and symmetry of the original person’s transformation prevents the application of spatio-temporal continuation criteria, participants might still construct “fictive causal histories” [153] to assess which of the two continuers might have the better chance of being the result of changes within an ordinary life.

Finally, for a continuer to acquire a random set of possessions, these would have to materialize from somewhere. Our scenario also assumes that this change in possessions can leave memory, psychology, friends, and appearance untouched. This is incompatible with the reality that some of our memories are intertwined with objects in our possession, and the difference between owning and not owning status symbols, for example, can impact self-value, build and burn bridges with others, and change perceptions of their owner.


Towards a process model of re-identification

Our studies allow us to make some progress in the analysis of decisions involved in determining personal identity. Like Rips and colleagues [17], who develop the causal continuer model based on Nozick’s theory, we are interested in the decision process. Decision processes, as implemented by human beings, are often insufficiently described by functions merely predicting decision outcomes. A further analysis of the decision processes needs to address questions of information search: Which persons are considered as continuers? When and why is the search for possible continuers stopped? Which dimensions are considered in the subjective closeness metric, and how are these dimensions integrated? We showed some results compatible with decision-making following the closest continuer logic. Is there further evidence for the three steps being followed in a specific sequence—the fast-and-frugal tree in Fig 1 would not yield different outcomes if the first three levels changed their relative position—and how stable is this process across individuals? A structurally similar model of decision making has been proposed for explaining the phenomenon of choice deferral [154, 155]. When faced with a selection of possible alternatives, choice in the 2S2T-model [155] is deferred for one of two reasons: either none of the options is good enough to surpass a decision threshold, or too many options surpass the threshold and it becomes difficult to choose the best one. Of course, personal re-identification is not simply preferential choice, but the analysis of the decision process might still be informed by the analysis of related or parallel processes in other domains.
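The closest-continuer logic just described (score each candidate, require a minimum closeness, pick a unique maximum) can be sketched as a small decision procedure. This is a hypothetical illustration: the dimension names follow the study’s five dimensions, but the weights and threshold are invented, not the model estimated from the data.

```python
# Hypothetical sketch of a Nozick-style closest-continuer decision rule.
# Weights and threshold are illustrative assumptions, not study estimates.

DIMENSIONS = ["memory", "psychology", "body", "social", "possessions"]

def closeness(continuer, weights):
    """Weighted-additive closeness metric over the five dimensions (0..1 each)."""
    return sum(weights[d] * continuer[d] for d in DIMENSIONS)

def reidentify(continuers, weights, threshold=0.5):
    """Return the index of the unique closest continuer, or None.

    None is returned when no continuer is close enough, or when the
    closest continuers are tied (fission into exact duplicates)."""
    scores = [closeness(c, weights) for c in continuers]
    best = max(scores)
    if best < threshold:
        return None                      # nobody close enough
    winners = [i for i, s in enumerate(scores) if s == best]
    if len(winners) > 1:
        return None                      # tie: no unique closest continuer
    return winners[0]

weights = {"memory": 0.35, "psychology": 0.30, "body": 0.15,
           "social": 0.15, "possessions": 0.05}
a = {"memory": 1, "psychology": 1, "body": 0, "social": 0, "possessions": 1}
b = {"memory": 0, "psychology": 0, "body": 1, "social": 1, "possessions": 0}
print(reidentify([a, b], weights))  # prints 0: continuer a is the unique closest
```

Note that the tie branch mirrors the transitivity puzzle in the studies: under a strict closest-continuer rule, two exact duplicates yield no reidentification at all, whereas many participants identified the original with both.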

While the mathematical form of weighted-additive linear models implies weighting and adding, many other operations, such as lexicographic stepwise procedures that ignore (sometimes most of the) variables in the equation [35], would still be captured by this model [156]. Brook [12] argued for a model of personal re-identification that starts with psychological factors and considers other dimensions only if that information is missing (or inconclusive). Variance in choosing and applying criteria might again be related to other individual differences [41, 42]. Based on the variations in our chosen design for this study, it is not yet possible to build cognitive models of participants’ decisions. It is, for example, unclear whether an appropriate model should be stochastic, as in [17], or deterministic.
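The point that a linear model can capture lexicographic processing can be made concrete with a toy example (binary cues invented for illustration, not study data): if each weight exceeds the combined weight of all later cues, the weighted sum orders options exactly as stepwise lexicographic inspection would.

```python
# Illustrative only: a lexicographic ordering reproduced by a linear model.
# With weights 4, 2, 1 over binary cues, each cue outweighs all later ones
# combined, so the weighted sum ranks options exactly as lexicographic
# inspection does (the score is simply the cues read as a binary number).

def weighted_score(cues, weights=(4, 2, 1)):
    return sum(w * c for w, c in zip(weights, cues))

def lexicographic_better(x, y):
    """True if x beats y when cues are inspected one at a time."""
    for cx, cy in zip(x, y):
        if cx != cy:
            return cx > cy
    return False  # all cues equal

x, y = (1, 0, 0), (0, 1, 1)   # x wins on the first cue alone
assert lexicographic_better(x, y)
assert weighted_score(x) > weighted_score(y)   # 4 > 3: same ordering
```

This is why fitting a weighted-additive model to choice outcomes cannot by itself distinguish genuine weighting-and-adding from stepwise procedures; process data are needed.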

Our scenarios varied factors that should mostly influence the assessment of closeness and only indirectly the decision-making based on these assessments. Future research could shift this focus to the subsequent stages of the procedure. Thus, specific exit nodes of the decision tree in Fig 1 could be investigated. For example, is there a minimum level of closeness required for participants to determine that any of the continuers is identical to the original person? Do participants share the intuition that a fission resulting in multiple exact copies does not preserve identity, and would this depend on the level of closeness? What type of difference is considered to be sufficient to single out a closest continuer?

Both studies in this manuscript confronted participants with two continuers. Future studies could increase the number of continuers. A different approach could focus on one continuer by either keeping a second continuer constant, or moving from paired comparisons to binary reidentification. Previous studies have explored variants of thought experiments compatible with these ideas. White [157] implemented one such scenario, focusing on the likelihood that a living person might be the reincarnation of a deceased person (see also [65]). For reincarnation judgments, distinctiveness was found to guide decisions. Similar to our sci-fi scenario, this setting might introduce specific assumptions about the process of reincarnation that could guide responses. For example, the importance of body similarity might be evaluated to be a lot lower than when responding to a scenario in which a shipwreck survivor is returned from an island and matched to missing persons, and the importance of moral attributes to be higher. In contrast to the second scenario, the reincarnation scenario prevents the use of causal histories that are useful for person tracking [1].

It might also be the case that different practical concerns demand different criteria of identity. We investigated the parameters of identity in the context of re-identification and compensation. Other practical concerns, such as attributing blame, responsibility, or guilt, or allocating punishments and rewards, might trigger different responses, as the criteria of identity might shift or current properties of persons might become more relevant than historical properties and re-identification questions [10, 158].

To sum up, while we have made progress in shedding light on the decision processes used by participants, we have not yet established a complete process model, which should be the goal for future research [159, 160].


Does it matter what matters for reidentification?

What are the implications of our studies for debates in cognitive sciences and other disciplines? Lay intuitions may be more prone to error than those of philosophers [100]—although experts’ authority may also be questionable, as their philosophical intuitions are partly a function of their personality [161]. Our participants’ endorsement of identity relations of an original with two non-identical persons violates the transitivity of identity. Yet this response pattern may simply show that our participants do not conceive of personal identity as strictly numerical, or that they have alternative conceptions of persons. Our results are in any case highly relevant for a descriptive analysis of people’s understanding of identity and their theories of survival. Further, Nozick [14] would not attribute the variance in people’s perspectives to errors or false everyday beliefs [162], but rather to the variance in closeness metrics legitimately deemed appropriate by different persons. Our findings are similarly relevant for cases in which scientists, philosophers, or marketers try to appeal directly to lay intuitions or common sense.

Philosophers and psychologists differ in their conceptualizations of intuitions [163], yet in our complex scenarios participants could not arrive at their answers without careful assessment. When appealing to the common sense of people both in theorizing and in legitimizing operationalizations, a researcher “should respect what ordinary people in fact say when asked—unless they are somehow led astray” (in [115], p. 216). Study 2 eliminated one way in which participants may have been led astray, by moving from a continuous to an all-or-nothing decision. Any appeal to general intuition should take into account both our main results and the interindividual variability demonstrated in both studies. Any attempt to measure the conceptualization of self or personal identity may be informed by both our positive and our negative findings.

Psychological research has analyzed psychopathological conditions that entail potential breaks in personal continuity and identity [164]. For example, patients with the Fregoli delusion are convinced that different people are in fact a single person appearing in a variety of disguises. Here, the recognition of the outward appearance is separated from the identification of the person. This disorder goes beyond prosopagnosia, or face blindness, where the perception of faces does not allow for identification of persons (see also [165]). Patients with Capgras delusion believe that a specific person, often a loved one, has been replaced by a duplicate who is indistinguishable from the original person. In cases of mirror misidentification, patients fail to recognize their own reflection and infer that the person in the mirror must be someone else [166]. These observations from clinical psychology indicate that the neural mechanisms of identification cannot be reduced to acts of perceptual recognition, and hint at the requisite capacities necessary for personal reidentification; furthermore, understanding ordinary reidentification processes might help to understand and locate their disruption.

As the introductory example shows, the importance and reach of identity questions are not limited to specialized academic discourse, even if not every instance is as dramatic as the hanging of Arnaud du Tilh. A survey of current debates outside philosophy referencing the personal identity literature creates the impression that many of Parfit’s [6] suggestions, examples and ideas are still in the process of being (re)discovered. Mott [167] explored Parfit’s suggestion that diminished personal connectedness might be a reason for statutes of limitations (p. 325), and provided evidence that desert of punishment and grounds for criticizing a person for past deeds are considered to diminish over time, which is partially explained by the reduction in closeness. A second debate is centered on the question of the validity of living wills after substantive changes to a person’s cognitive capabilities. For example, can a competent person impose values and interests on the future incompetent person or should the strength of advance directives grow weaker with the loss of closeness [21, 81, 168]? The incompetent person might, for example, derive unexpected pleasure and satisfaction under conditions the competent person did not foresee. Further, references to a future self might have a tremendous impact on ethical behavior [46], motivation, and goal-pursuit [169]; the sense of temporal persistence can motivate future-oriented self-regard and short-term sacrifices benefiting future outcomes [43, 44]. Without personal identity, it would be meaningless to make promises, grant ownership or the right to vote [79], or offer compensation [10, 12]; challenges to personal identity affect the institutions built upon it.

Disruptions of personal identity have been shown to severely impact people’s lives. Chandler and colleagues [61] presented impressive evidence of the connection between the inability to give an account of one’s identity and the risk of adolescent suicide, and how cultural continuity can moderate the elevation of suicide risk in vulnerable minority groups. This line of research raises the question of how one’s perspective on personal identity is shaped by the social environment and connects the concept to mental health. A diminished sense of self and the self’s stable existence is also deeply intertwined with borderline personality disorder [170]. Furthermore, challenges to personal identity can also emerge due to technological innovation. Notions of identity are fundamental in conceptualizing behavior in virtual environments [171] and have implications in law in connection with identity theft [172] or impersonation. Pascalev and colleagues [63] discussed how first suggestions for a medical head transplant procedure introduced questions of personal identity into neuroethics.

The gravity of these real-world examples goes well beyond that found in hypothetical thought experiments. The analysis of counterfactual scenarios has nevertheless paved the way for addressing real-world concerns and situations, whose connection to personal identity is discovered through analogy, created through technology, or bestowed by social institutions. Understanding how the concept is perceived and applied and how experimental puzzles of seemingly little direct relevance are tackled and solved can ultimately inform practitioners and theoreticians facing recurrent and novel situations with serious consequences.

The highest probability of reaching 90 years was found for those drinking 5– < 15 g alcohol/day; although not significant, the risk estimates also suggest avoiding binge drinking


Alcohol consumption in later life and reaching longevity: the Netherlands Cohort Study. Piet A van den Brandt, Lloyd Brandts. Age and Ageing, afaa003, February 9 2020. https://academic.oup.com/ageing/advance-article/doi/10.1093/ageing/afaa003/5730334

Abstract
Background: whether light-to-moderate alcohol intake is related to reduced mortality remains a subject of intense research and controversy. There are very few studies available on alcohol and reaching longevity.

Methods: we investigated the relationship of alcohol drinking characteristics with the probability of reaching 90 years of age. Analyses were conducted using data from the Netherlands Cohort Study. Participants born in 1916–1917 (n = 7,807) completed a questionnaire in 1986 (age 68–70 years) and were followed up for vital status until the age of 90 years (2006–07). Multivariable Cox regression analyses with fixed follow-up time were based on 5,479 participants with complete data to calculate risk ratios (RRs) of reaching longevity (age 90 years).

Results: we found statistically significant positive associations between baseline alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability of reaching 90 was found in those consuming 5– < 15 g/d alcohol, with RR = 1.36 (95% CI, 1.20–1.55) when compared with abstainers. The exposure-response relationship was significantly non-linear in women, but not in men. Wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking pointed towards an inverse relationship with longevity. Alcohol intake was associated with longevity in those without and with a history of selected diseases.

Conclusions: the highest probability of reaching 90 years was found for those drinking 5–<15 g alcohol/day. Although not significant, the risk estimates also suggest avoiding binge drinking.

Keywords: alcohol, longevity, aging, dose–response relationship, mortality, cohort studies, older people

Discussion

In this large prospective study, we found statistically significant positive associations between alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability was found in those consuming 5–<15 g/d alcohol, which corresponds to 0.5–1.5 glasses of an alcoholic beverage per day. The exposure–response relationship was significantly non-linear in women, but not in men. Whereas the probability of longevity decreased in women with alcohol intakes above 15 g/d, it remained elevated at higher alcohol consumption levels in men. In beverage-specific analyses, wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking was not significantly associated with longevity, but the risk estimates suggest avoiding it. In subgroup analyses, alcohol intake was associated with longevity in those with or without a history of selected diseases.

Previous prospective studies on longevity from the US and France that reported on alcohol were rather limited (no alcohol focus) and found no significant associations using longevity cut-offs of 75 [12] and 90 years [13, 25]. However, higher alcohol intakes were seen in survivors compared to non-survivors [25], and in subsequent analyses (85+ years) of the Framingham Heart Study [26]. The Physicians Health Study amongst US male physicians (survival cut-off 90) reported small and non-significantly increased chances of longevity for various drinking categories compared to rarely/never alcohol drinkers, with no dose–response relationship [13]. The association between alcohol drinking and longevity was studied twice in the Honolulu Heart Program (HHP) amongst Japanese-American men using 85 years as longevity cut-off [10, 11]. Heavy alcohol intake, measured at baseline age 45–68 years, was significantly inversely related to longevity (OR = 0.63, for 3+ drinks/day versus drinking less) [10]. In the second analysis, moderate-heavy alcohol intake around 75 years was also significantly inversely related to longevity (OR = 0.66, for drinking >14.5 g/day versus less) [11]. The fact that the HHP study was conducted amongst men of Japanese ancestry may (partly) explain the more negative association of alcohol with longevity, and suggests a potential mechanism. It is known that East Asians are less efficient alcohol metabolizers due to a common loss-of-function variant of the ALDH2-gene, which decreases breakdown of acetaldehyde, the first, toxic alcohol metabolite [27]. It could be that those who nevertheless drink experience a higher mortality risk.

Overall, the results of previous longevity studies seem quite limited. Our detailed analyses show significantly positive associations between alcohol and longevity in both men and women, in agreement with the PHS [13]. Overall, in men and women combined in the NLCS, the highest probability of reaching 90 was found in those consuming 5–<15 g/d alcohol, with an RR of 1.36 compared to abstainers. Women experience higher blood alcohol concentrations than men of similar weight due to lower total body water [15]; adverse effects of higher alcohol intakes may therefore appear earlier in women, which might explain why the exposure–response relationship was non-linear in women but not in men. We also found that wine intake was positively associated with longevity, whereas liquor was positively associated with longevity in men, and inversely in women. Before speculating on reasons for these beverage differences, future longevity studies are needed to replicate these sex-specific findings, along with those on drinking pattern and binge drinking. In mortality studies, there was no clear indication of sex differences [2, 5], and although beneficial associations with wine have been described for mortality, e.g. [2], this topic remains controversial.

As in observational studies on alcohol and mortality [1, 2, 8], studies on alcohol and longevity may be hampered by possible biases (selection bias and residual confounding). Here, selection bias can take the form of abstainer bias (when the reference category of non-drinkers also includes sick quitters) or healthy drinker/survivor bias (when cohorts of older participants overrepresent healthier drinkers who have survived the adverse effects of alcohol). Reverse causation may occur because health status may influence alcohol drinking [8], which can be addressed by restricting analyses to people who were healthy at baseline. Incomplete adjustment for confounding factors may lead to residual confounding. In our longevity analysis, we tried to address these possible biases by: (i) excluding ex-drinkers from the reference category; (ii) limiting analyses to stable drinkers and abstainers by taking alcohol consumption 5 years before baseline into account; (iii) restricting analyses to participants without prevalent diseases; and (iv) adjusting for a large range of possible confounders with detailed information. These analysis strategies do not necessarily provide a full remedy against all possible biases [8], but they were the possibilities available with the data from our cohort. For example, we had no information on lifetime alcohol consumption or on consumption at various ages during the lifetime, so our analysis of past consumption was limited. After excluding ex-drinkers from the reference category, the analyses in the stable subgroup were essentially similar to what was seen overall. We also found that alcohol intake was associated with longevity in the subgroup without a history of selected diseases. Still, other diseases might have affected alcohol use or longevity. Residual confounding by socioeconomic status is also possible, because we only controlled for educational level.

It should be noted that the percentages of never drinkers were relatively high in the NLCS: 15% in men and 35% in women, making this common behaviour a logical reference category. These percentages were substantially higher than in other cohorts, e.g. 8% in male and 16% in female PLCO participants [2], and 6% in male and 16% in female EPIC participants [28]. Strengths of the NLCS are the prospective design and high completeness of follow-up, making information bias and selection bias due to differential follow-up unlikely. The validation study of the food frequency questionnaire has shown that it performs relatively well with respect to alcohol [19], but measurement error may still have attenuated associations. The lack of opportunity to update alcohol intake or other lifestyle data during follow-up may also have attenuated some associations. Our study was aimed at measuring alcohol intake at 68–70 years, so our results are limited to alcohol drinking in later life; future longevity studies should preferably include lifetime consumption. The alcohol measures in our study were not designed to give an all-encompassing indication of risky drinking, as in the Alcohol Use Disorders Identification Test (AUDIT) [29]. Our cut-off for binge drinking (>6 drinks per occasion), as used in the 1980s/1990s [29, 30], is somewhat higher than current cut-offs [29]. Because we were interested in the association of late-life drinking with longevity, our study likely examined a resilient population that had already survived to age 68 despite possible earlier risky drinking.

While older people perceive themselves as controlled, responsible drinkers, according to a recent thematic synthesis of qualitative studies, they often consider alcohol use an important part of social occasions and report that alcohol helps create feelings of relaxation [31]. A possible beneficial effect of light-to-moderate alcohol intake on longevity (with an inverted J-shaped dose–response) may also be related to hormesis [32, 33]. With higher consumption in older people, medication may be negatively affected by alcohol, and physiological tolerance is decreased [34].

In conclusion, in this prospective study of men and women aged 68–70 years at baseline, we found the highest probability of reaching 90 years of age for those drinking 5–<15 g alcohol/day. This does not necessarily mean that light-to-moderate drinking improves health. The estimated RR of 1.36 implies a modest absolute increase in this probability and should not be used as motivation to start drinking if one does not drink alcoholic beverages. Although no significant association was found, the risk estimates also suggest avoiding binge drinking.
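The authors' gloss of 5–<15 g/d as "0.5–1.5 glasses" and the relative nature of the RR can be sanity-checked with a short sketch. The ~10 g of ethanol per standard glass and the baseline probability below are illustrative assumptions, not figures from the study.

```python
# Sanity-check sketch: grams of ethanol per day vs. "glasses", and what a
# relative risk of 1.36 means in absolute terms. The 10 g/glass figure and
# the baseline probability are illustrative assumptions only.
GRAMS_PER_GLASS = 10.0  # assumed ethanol content of one standard glass

def grams_to_glasses(grams_per_day: float) -> float:
    """Convert daily ethanol intake (grams) to standard glasses."""
    return grams_per_day / GRAMS_PER_GLASS

low, high = grams_to_glasses(5), grams_to_glasses(15)
print(f"5-<15 g/d corresponds to roughly {low:.1f}-{high:.1f} glasses/day")

# An RR of 1.36 is multiplicative: if abstainers reach age 90 with
# probability p, this intake group does so with probability ~1.36 * p,
# so the absolute gain depends entirely on the baseline p.
p_abstainer = 0.15  # hypothetical baseline, not taken from the study
print(f"Hypothetical absolute change: {p_abstainer:.2f} -> {1.36 * p_abstainer:.3f}")
```

On any plausible baseline the absolute difference is modest, which is why the authors caution against reading the RR as a reason to start drinking.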

Elected officials in eleven U.S. southern states: Framing the decision to remove Confederate symbols as good for business causes those officials to favor removing the Confederate flag from public spaces

Economic Interests Cause Elected Officials to Liberalize Their Racial Attitudes. Christian R. Grose, Jordan Carr Peterson. Political Research Quarterly, February 10, 2020. https://doi.org/10.1177/1065912919899725

Abstract: Do attitudes of elected officials toward racial issues change when the issues are portrayed as economic? Traditionally, scholars have presented Confederate symbols as primarily a racial issue: elites supporting their eradication from public life tend to emphasize the association of Confederate symbols with slavery and institutionalized racism, while those elected officials who oppose the removal of Confederate symbols often cite the heritage of white southerners. In addition to these racial explanations, we argue that there is an economic component underlying support for removal of Confederate symbols among political elites. Racial issues can also be economic issues, and framing a racial issue as an economic issue can change elite attitudes. In the case of removal of Confederate symbols, the presence of such imagery is considered harmful to business. Two survey experiments of elected officials in eleven U.S. southern states show that framing the decision to remove Confederate symbols as good for business causes those elected officials to favor removing the Confederate flag from public spaces. Elected officials can be susceptible to framing, just like regular citizens.

Keywords: American politics, race, ethnicity, and politics, experiments, political elites, framing, symbolic representation, policy



Strong results! Overall relative risk of mortality of 1.0018! And data "cannot be made publicly available"... Short term association between ozone and mortality: global two stage time series study in 406 locations in 20 countries

"Short term association between ozone and mortality: global two stage time series study in 406 locations in 20 countries." Ana M Vicedo-Cabrera et al. BMJ 2020; 368, February 10. https://doi.org/10.1136/bmj.m108

Data sharing: Data have been collected within the MCC (Multi-City Multi-Country) Collaborative Research Network (http://mccstudy.lshtm.ac.uk/) under a data sharing agreement and cannot be made publicly available. [...]

Abstract
Objective To assess short term mortality risks and excess mortality associated with exposure to ozone in several cities worldwide.

Design Two stage time series analysis.

Setting 406 cities in 20 countries, with overlapping periods between 1985 and 2015, collected from the database of the Multi-City Multi-Country Collaborative Research Network.

Population Deaths for all causes or for external causes only registered in each city within the study period.

Main outcome measures Daily total mortality (all or non-external causes only).

Results A total of 45 165 171 deaths were analysed in the 406 cities. On average, a 10 µg/m3 increase in ozone during the current and previous day was associated with an overall relative risk of mortality of 1.0018 (95% confidence interval 1.0012 to 1.0024). Some heterogeneity was found across countries, with estimates ranging from greater than 1.0020 in the United Kingdom, South Africa, Estonia, and Canada to less than 1.0008 in Mexico and Spain. Short term excess mortality in association with exposure to ozone higher than maximum background levels (70 µg/m3) was 0.26% (95% confidence interval 0.24% to 0.28%), corresponding to 8203 annual excess deaths (95% confidence interval 3525 to 12 840) across the 406 cities studied. The excess remained at 0.20% (0.18% to 0.22%) when restricting to days above the WHO guideline (100 µg/m3), corresponding to 6262 annual excess deaths (1413 to 11 065). Above more lenient thresholds for air quality standards in Europe, America, and China, excess mortality was 0.14%, 0.09%, and 0.05%, respectively.

Conclusions Results suggest that ozone related mortality could be potentially reduced under stricter air quality standards. These findings have relevance for the implementation of efficient clean air interventions and mitigation strategies designed within national and international climate policies.
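The headline numbers can be connected by the standard time-series arithmetic: a log-linear scaling of the per-10-µg/m3 RR, and the textbook attributable fraction (RR − 1)/RR. The sketch below is a generic illustration of that calculation under those assumptions, not a reproduction of the authors' two-stage model, and the 30 µg/m3 increment is hypothetical.

```python
import math

RR_PER_10 = 1.0018  # reported RR per 10 ug/m3 increase in ozone

def rr_for_increase(delta_ugm3: float, rr_per_10: float = RR_PER_10) -> float:
    """Scale a per-10-ug/m3 relative risk to an arbitrary increment,
    assuming the log-linear exposure-response form used in standard
    time-series models (an assumption, not the authors' exact method)."""
    return math.exp(math.log(rr_per_10) * delta_ugm3 / 10.0)

def attributable_fraction(rr: float) -> float:
    """Textbook attributable fraction: (RR - 1) / RR."""
    return (rr - 1.0) / rr

# Hypothetical example: a day 30 ug/m3 above the 70 ug/m3 background level
rr = rr_for_increase(30.0)
print(f"RR for +30 ug/m3: {rr:.4f}; "
      f"attributable fraction: {attributable_fraction(rr):.4%}")
```

Fractions this small translate into thousands of deaths only because they apply to a denominator of tens of millions of deaths; that is the sense in which an RR as close to 1 as 1.0018 can still matter at population scale.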

Online sexual activities: Men engage in solitary-arousal activities, women in partnered-arousal activities; teens are the highest users of mobile digital devices and potentially of OSA

Exploring new measures of online sexual activities, device use, and gender differences. Véronique O. Bélanger Lejars, Charles H. Bélanger, Jamil Razmak. Computers in Human Behavior, February 13 2020, 106300. https://doi.org/10.1016/j.chb.2020.106300

Highlights
•    Participants engage in OSA, particularly solitary-arousal-based self-videos.
•    Men engage in solitary-arousal activities, women in partnered-arousal activities.
•    Computers are the preferred method for OSAs overall.
•    Smartphone apps were overwhelmingly preferred in partnered-arousal activities.
•    Teens are the highest users of mobile digital devices and potentially of OSA.

Abstract: Online sexual activities (OSA) are any sexual behaviours done using the Internet and are divided into non-arousal, partnered-arousal, and solitary-arousal activities. The means of accessing the Internet have extended past the traditional home computer and the rapid evolution of personal digital devices has led to a lag in the measurement of OSA. The current study’s aim is to explore a new measurement scale that considers the widespread use of personal digital devices and examines gender differences in OSA. Results show that the suggested scale is a reliable measurement of OSA. Women engaged in more partnered-arousal activities whereas men engaged in more solitary-arousal activities. Computer use was the preferred method for OSA overall but smartphone apps were the preferred method for partnered-arousal activities. Some implications for parents and educators, clinicians, and researchers, as well as limitations inviting further research, are provided as OSA is an emerging but rapidly evolving field of investigation.

Wednesday, February 12, 2020

Congenital amusia (tone deafness) is a lifelong musical disorder said to affect 4% of the population (a single estimate based on a single test from 1980); its prevalence is closer to 1.5% of the population, and it is highly heritable

Peretz, Isabelle, and Dominique T. Vuvan. 2020. “Prevalence of Congenital Amusia.” PsyArXiv. February 12. doi:10.1038/ejhg.2017.15

Abstract: Congenital amusia (commonly known as tone deafness) is a lifelong musical disorder that affects 4% of the population according to a single estimate based on a single test from 1980. Here we present the first large-scale measure of prevalence, with a sample of 20,000 participants, that does not rely on self-referral. On the basis of three objective tests and a questionnaire, we show that (a) the prevalence of congenital amusia is only 1.5%, with slightly more females than males, unlike other developmental disorders where males often predominate; (b) self-disclosure is a reliable index of congenital amusia, which is hereditary, with 46% of first-degree relatives similarly affected; (c) the deficit is not attenuated by musical training; and (d) it emerges in relative isolation from other cognitive disorders, except for spatial orientation problems. Hence, we suggest that congenital amusia is likely to result from genetic variations that affect musical abilities specifically.

Domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis

Revisiting more or less: influence of numerosity and size on potential prey choice in the domestic cat. Jimena Chacha, Péter Szenczi, Daniel González, Sandra Martínez-Byer, Robyn Hudson & Oxána Bánszegi. Animal Cognition, Feb 12, 2020. https://link.springer.com/article/10.1007/s10071-020-01351-w

Abstract: Quantity discrimination is of adaptive relevance in a wide range of contexts and across a wide range of species. Trained domestic cats can discriminate between different numbers of dots, and we have shown that they also spontaneously choose between different numbers and sizes of food balls. In the present study we performed two experiments with 24 adult cats to investigate spontaneous quantity discrimination in the more naturalistic context of potential predation. In Experiment 1 we presented each cat with the simultaneous choice between a different number of live prey (1 white mouse vs. 3 white mice), and in Experiment 2 with the simultaneous choice between live prey of different size (1 white mouse vs. 1 white rat). We repeated each experiment six times across 6 weeks, testing half the cats first in Experiment 1 and then in Experiment 2, and the other half in the reverse order. In Experiment 1 the cats more often chose the larger number of small prey (3 mice), and in Experiment 2, more often the small size prey (a mouse). They also showed repeatable individual differences in the choices which they made and in the performance of associated predation-like behaviours. We conclude that domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis.

Non-reproducible: Evidence that social network index is associated with gray matter volume from a data-driven investigation

No strong evidence that social network index is associated with gray matter volume from a data-driven investigation. Chujun Lin et al. Cortex, February 12 2020. https://doi.org/10.1016/j.cortex.2020.01.021

Abstract: Recent studies in adult humans have reported correlations between individual differences in people’s Social Network Index (SNI) and gray matter volume (GMV) across multiple regions of the brain. However, the cortical and subcortical loci identified are inconsistent across studies. These discrepancies might arise because different regions of interest were hypothesized and tested in different studies without controlling for multiple comparisons, and/or from insufficiently large sample sizes to fully protect against statistically unreliable findings. Here we took a data-driven approach in a pre-registered study to comprehensively investigate the relationship between SNI and GMV in every cortical and subcortical region, using three predictive modeling frameworks. We also included psychological predictors such as cognitive and emotional intelligence, personality, and mood. In a sample of healthy adults (n = 92), neither multivariate frameworks (e.g., ridge regression with cross-validation) nor univariate frameworks (e.g., univariate linear regression with cross-validation) showed a significant association between SNI and any GMV or psychological feature after multiple comparison corrections (all R-squared values ≤ 0.1). These results emphasize the importance of large sample sizes and hypothesis-driven studies to derive statistically reliable conclusions, and suggest that future meta-analyses will be needed to more accurately estimate the true effect sizes in this field.

Racial slurs “reclaimed” by the targeted group convey affiliation rather than derogation; authors found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals

Perceptions of Racial Slurs Used by Black Individuals Toward White Individuals: Derogation or Affiliation? Conor J. O’Dea, Donald A. Saucier. Journal of Language and Social Psychology, February 11, 2020. https://doi.org/10.1177/0261927X20904983

Abstract: Research suggests that racial slurs may be “reclaimed” by the targeted group to convey affiliation rather than derogation. Although it is most common in intragroup uses (e.g., “nigga” by a Black individual toward another Black individual), intergroup examples of slur reappropriation (e.g., “nigga” by a Black individual toward a White individual) are also common. However, majority and minority group members’ perceptions of intergroup slur reappropriation remain untested. We examined White (Study 1) and Black (Study 2) individuals’ perceptions of the reappropriated terms, “nigga” and “nigger” compared with a control term chosen to be a non-race-related, neutral term (“buddy”), a nonracial derogative term (“asshole”) and a White racial slur (“cracker”) used by a Black individual toward a White individual. We found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals. Our findings have important implications for research on intergroup relations and the reappropriation of slurs.

Keywords: racial slurs, common in-group identity, social dominance theory, affiliation, derogation



Calling into question that contagious yawning is a signal of empathy: No evidence of familiarity, gender or prosociality biases in dogs

Contagious yawning is not a signal of empathy: no evidence of familiarity, gender or prosociality biases in dogs. Patrick Neilands et al. Proceedings of the Royal Society B: Biological Sciences, Volume 287, Issue 1920, February 5 2020. https://doi.org/10.1098/rspb.2019.2236

Abstract: Contagious yawning has been suggested to be a potential signal of empathy in non-human animals. However, few studies have been able to robustly test this claim. Here, we ran a Bayesian multilevel reanalysis of six studies of contagious yawning in dogs. This provided robust support for claims that contagious yawning is present in dogs, but found no evidence that dogs display either a familiarity or gender bias in contagious yawning, two predictions made by the contagious yawning–empathy hypothesis. Furthermore, in an experiment testing the prosociality bias, a novel prediction of the contagious yawning–empathy hypothesis, dogs did not yawn more in response to a prosocial demonstrator than to an antisocial demonstrator. As such, these strands of evidence suggest that contagious yawning, although present in dogs, is not mediated by empathetic mechanisms. This calls into question claims that contagious yawning is a signal of empathy in mammals.

4. Discussion

By combining the data from six different studies, the resulting dataset is the largest used to date to examine the presence of contagious yawning in a non-human mammal. This allowed us to draw conclusions about the presence and absence of contagious yawning and the signatures predicted by the contagious yawning–empathy hypothesis with a greater level of certainty than by relying on individual studies alone. Our reanalysis shows that dogs do exhibit contagious yawning, showing higher probabilities and rates of yawning for yawning demonstrators compared to control demonstrators. This provides robust support for the claims that contagious yawning is present in dogs [35,49–51]. In order to test whether this contagious yawning is related to mechanisms underpinning empathy, we examined this dataset for evidence of the familiarity bias and gender bias. However, dogs in our reanalysis showed no evidence of either of these biases. Similarly, when we ran a novel experiment to look for a prosociality bias, we found that the dogs in our experiment were no more likely to yawn for prosocial demonstrators than antisocial demonstrators. Dogs, therefore, show no evidence for any of the familiarity, gender, or prosociality biases predicted by the contagious yawning–empathy hypothesis. This suggests that contagious yawning in dogs is not mediated by an empathy-related perception–action mechanism [52–54]. The presence of contagious yawning in non-human animals, therefore, cannot be assumed to be evidence for a perception–action mechanism shared between humans and other mammals, as has been previously proposed [1,35,41,58]. That is not to say that some non-human animals do not necessarily experience some form of empathy, but that contagious yawning cannot be taken as a diagnostic signal for the presence of these empathetic processes. Furthermore, these results, alongside the arguments put forward by Massen & Gallup in their recent review [37], bring into question the validity of the contagious yawning–empathy hypothesis more broadly.
It is important to acknowledge several caveats to our conclusions. Firstly, in both our reanalysis and experiment, the subjects were primarily responding to interspecific yawns from human demonstrators. While it is possible that dogs would respond differently to conspecific and interspecific yawning, there are several reasons to believe that this is not the case. Research in other species such as chimpanzees suggests that they respond similarly to conspecific and interspecific yawns [41], and, in our reanalysis, controlling for demonstrator type did not improve model fit. Nevertheless, more rigorous comparisons between how dogs respond to conspecific and interspecific yawning would be a useful future line of research. Secondly, it is important to note that the familiarity, gender, and prosociality biases are indirect measures of empathy [37]. As such, care needs to be taken in interpreting these biases and there remains substantial debate over how to do so. For example, it has been argued that both the tendency for children with ASD to be less prone to contagious yawning [83] and the familiarity bias [37,84,85] can be explained in terms of differences in attending to yawners rather than differences in empathetic response. Similarly, the gender bias reported in humans [29] is not straightforward to interpret and there is debate over whether it simply reflects a false positive in the literature [33,34]. By contrast, proponents of the contagious yawning–empathy hypothesis argue that the familiarity bias continues to be found even when controlling for differences in subjects' attention [40,41] and that the negative results for the gender bias in previous studies reflect methodological issues with prior experiments [34]. Furthermore, although alternative hypotheses such as the attentional hypothesis could explain the presence of a single bias such as the familiarity bias, only the contagious yawning–empathy hypothesis predicts the presence of all three biases. As such, testing for all three biases represents a powerful test of the contagious yawning–empathy hypothesis. Finally, searching for a novel signature, the prosociality bias, required a novel experimental methodology where dogs were exposed to a prosocial experimenter that interacted with them and an antisocial experimenter that ignored them. Previous work which used a similar methodology demonstrated that dogs do show a preference for the prosocial demonstrator [73], and so if the contagious yawning–empathy hypothesis is correct, dogs should have reacted with increased yawning to the prosocial demonstrator. However, further work would be useful in confirming the presence or absence of the prosociality bias in dogs and other species such as humans.
Research into contagious yawning has been dominated by the contagious yawning–empathy debate [37]. However, contagious yawning is an interesting phenomenon in its own right as its evolutionary roots and ultimate function remain a mystery [20]. Contagious yawning in animals may be the result of stress [54,57], an affiliation strategy [67], a means of communication [61], or a mechanism to improve collective vigilance within groups [37,68,69] rather than being related to empathy via a perception–action mechanism. Future research into contagious yawning should include a greater focus on testing between these and other hypotheses. For example, the affiliation hypothesis might predict that contagious yawning should be seen more frequently during reconciliation periods after conflict while the collective vigilance hypothesis posits that contagious yawning should increase in response to external disturbances [37,86]. However, it is important to note that these theories are not necessarily mutually exclusive [87] and that factors such as stress appear to influence yawning propensity in complex ways [88,89]. Additionally, an important next step is to consider evidence of contagious yawning outside of mammals. While there has been some work looking at contagious yawning in budgerigars [86,90] and tortoises [91], research has otherwise been sparse outside of the mammalian class.
Future research would benefit from systematically testing contagious yawning across multiple species. One barrier to such projects is that studying a range of different species often requires different experimental set-ups to make such testing feasible. There is a concern that such a range of methodological approaches may make cross-species and cross-study comparisons difficult, if not impossible [35,66]. However, our finding that the effect of treatment on yawning probabilities and rates remains stable when controlling for various aspects of study design suggests that the presence of contagious yawning is relatively robust to differences in experimental design. As such, while it is important to use broadly similar designs (for instance, comparing animals’ yawning rates when exposed to either a yawning demonstrator or control demonstrator), there could be considerable flexibility in other aspects of study design. For example, our results suggest that animals' yawning probabilities and rates to either live demonstrators or recorded demonstrators are comparable. Therefore, our findings suggest that more ambitious cross-species work can be carried out with confidence in the validity of the subsequent comparisons.
To conclude, our results provide robust support for the hypothesis that contagious yawning is found in dogs, the first non-human species of mammal where it has been clearly shown outside of chimpanzees. However, we found no evidence that dogs yawn more in response to either familiar human yawners compared to unfamiliar human yawners, or to prosocial human yawners compared to antisocial human yawners. Additionally, we found no evidence that female dogs were more likely to yawn in response to a yawning demonstrator than male dogs. As such, these findings cast doubt on the widespread assertion that contagious yawning is mediated by the same perception–action mechanism as empathy [1,6,35,41,58]. Instead, they support recent claims that there is no link between contagious yawning and empathetic processes [37,67] and underline the importance of developing more direct measures of empathy in non-human animals [37,92]. However, while our results suggest that researchers cannot rely on contagious yawning as a diagnostic signal of empathy, our additional findings that the effect of contagious yawning appears to be robust to variations in experimental methods suggest that cross-species comparisons may be a powerful way to disentangle the evolutionary roots of this behaviour.

Of four predictors thought to be important for subjective well-being (marriage, employment, prosociality, & life meaning), marriage showed only very small effects, & employment had larger effects that peaked around age 50 years

Subjective Well-Being Around the World: Trends and Predictors Across the Life Span. Andrew T. Jebb. Psychological Science, February 11, 2020. https://doi.org/10.1177/0956797619898826

Abstract: Using representative cross-sections from 166 nations (more than 1.7 million respondents), we examined differences in three measures of subjective well-being over the life span. Globally, and in the individual regions of the world, we found only very small differences in life satisfaction and negative affect. By contrast, decreases in positive affect were larger. We then examined four important predictors of subjective well-being and how their associations changed: marriage, employment, prosociality, and life meaning. These predictors were typically associated with higher subjective well-being over the life span in every world region. Marriage showed only very small associations for the three outcomes, whereas employment had larger effects that peaked around age 50 years. Prosociality had practically significant associations only with positive affect, and life meaning had strong, consistent associations with all subjective-well-being measures across regions and ages. These findings enhance our understanding of subjective-well-being patterns and what matters for subjective well-being across the life span.

Keywords: subjective well-being, cross-cultural, aging, life meaning, prosocial behavior

You may be more original than you think: Predictable biases in self-assessment of originality

You may be more original than you think: Predictable biases in self-assessment of originality. Yael Sidi et al. Acta Psychologica, Volume 203, February 2020, 103002. https://doi.org/10.1016/j.actpsy.2019.103002

Highlights
•    Self-judgments of originality are sensitive to the serial order effect.
•    Originality judgments reveal under-estimation robustly and resiliently.
•    People discriminate well between more and less original ideas.
•    There is a double dissociation between actual originality and originality judgments.

Abstract: How accurate are individuals in judging the originality of their own ideas? Most metacognitive research has focused on well-defined tasks, such as learning, memory, and problem solving, providing limited insight into ill-defined tasks. The present study introduces a novel metacognitive self-judgment of originality, defined as assessments of the uniqueness of an idea in a given context. In three experiments, we examined the reliability, potential biases, and factors affecting originality judgments. Using an ideation task, designed to assess the ability to generate multiple divergent ideas, we show that people accurately acknowledge the serial order effect—judging later ideas as more original than earlier ideas. However, they systematically underestimate their ideas' originality. We employed a manipulation for affecting actual originality level, which did not affect originality judgments, and another one designed to affect originality judgments, which did not affect actual originality performance. This double dissociation between judgments and performance calls for future research to expose additional factors underlying originality judgments.

Contrary to common views, use of social media and online portals fosters more visits to news sites and a greater variety of news sites visited

How social network sites and other online intermediaries increase exposure to news. Michael Scharkow, Frank Mangold, Sebastian Stier, and Johannes Breuer. PNAS February 11, 2020 117 (6) 2761-2763; January 27, 2020. https://doi.org/10.1073/pnas.1918279117

Abstract: Research has prominently assumed that social media and web portals that aggregate news restrict the diversity of content that users are exposed to by tailoring news diets toward the users’ preferences. In our empirical test of this argument, we apply a random-effects within–between model to two large representative datasets of individual web browsing histories. This approach allows us to better encapsulate the effects of social media and other intermediaries on news exposure. We find strong evidence that intermediaries foster more varied online news diets. The results call into question fears about the vanishing potential for incidental news exposure in digital media environments.

Keywords: news exposure, online media use, web tracking data

People can come across news and other internet offerings in a variety of ways, for example, by visiting their favorite websites, using search engines, or following recommendations from contacts on social media (1). These routes do not necessarily lead people to the same venues. While traditionally considered as an important ingredient of well-functioning democratic societies, getting news as a byproduct of other media-related activities has been assumed to wane in the online sphere. Intermediaries like social networking sites (SNS) and search engines are regarded with particular suspicion, often criticized for fostering news avoidance and selective exposure (2). This assumption has been, perhaps most prominently, ingrained in the “filter bubble” thesis, positing that search and recommendation algorithms bias news diets toward users’ preferences and, thus, decrease content diversity (3). On the other hand, incidental news exposure (INE) due to other online activities has received much scholarly attention for several decades (4). Contrary to widely held assumptions, recent INE research found that SNS users have more rather than less diverse news diets than nonusers. For example, one study showed that SNS users consumed almost twice the number of news outlets in the previous week as did nonusers (2). Similar results emerged regarding the use of web aggregators (portals) and search engines, although people may use search engines in a more goal-driven fashion compared to SNS (1).

In previous studies, SNS-based news exposure was typically measured by asking respondents whether they are (unintentionally) exposed to news via social media. Like many survey studies, this approach naturally suffers from the limited accuracy and reliability of self-reports (5). More specifically, recent work has criticized self-report measures for being biased toward active news choices and routine use (6) and being particularly inaccurate when people access news via intermediaries (7). To alleviate these limitations, some studies have used log data to estimate the quantity and quality of online news exposure, for example, in terms of exposure to cross-cutting news (8, 9). However, these studies have focused only on single social media platforms instead of different intermediary routes to news. Other recent studies (1, 10) have traced direct and indirect pathways to online news using browser logs, but have not distinguished nonregular—and therefore possibly incidental—news exposure from regular, typically more intentional or routinized forms of news consumption online. In other words, the question whether visiting SNS more often (than usual) actually leads to more varied news exposure (than usual) essentially remains unanswered. This problem concerns almost all studies on the use and effects of online media, and has received considerable attention in recent communication research (11). We argue that positive within-person effects of visiting intermediary sites on online news exposure are a necessary (although not sufficient, since even nonregular visits could be intentional) precondition for INE, and, therefore, testing for such effects is a useful endeavor. We address this question using a statistical model that distinguishes between stable between-person differences and within-person effects, that is, the random-effects within–between (REWB) model (12). 
Investigating within-person effects has additional value by safeguarding causal inferences against bias due to (previously) unmeasured person-level confounders. We apply the REWB model to two large, representative tracking datasets of individual-level browsing behavior in Germany, collected independently in 2012 and 2018. This allows us not only to compare within- and between-person effects but also to analyze possible changes in the effects of SNS (Facebook, Twitter) and intermediaries (Google, web portals) over recent years. Specifically, we investigate their effects on the amount and variety of online news exposure. Using this approach enables us to replicate and extend two recent survey studies (2, 13) that looked at the effects of SNS, web portals, and search engines on 1) overall online news exposure and 2) the diversity of people’s online news diets.
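The within-between decomposition at the heart of the REWB model described above can be illustrated with a small simulation. This is a hedged toy sketch, with simulated data and made-up variable names, not the study's actual model or dataset: a time-varying predictor (daily SNS visits) is split into a stable person mean (the between component) and a daily deviation from that mean (the within component), and both enter a mixed model with a random intercept per person.

```python
# Illustrative sketch of the random-effects within-between (REWB) idea.
# All data and variable names here are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_persons, n_days = 100, 30
person = np.repeat(np.arange(n_persons), n_days)

# Person-specific baseline SNS use plus day-to-day fluctuation
baseline = rng.normal(5, 2, n_persons)
sns_visits = baseline[person] + rng.normal(0, 1, n_persons * n_days)

# News exposure: a true effect of 0.5 per visit, plus a person-level
# random intercept and day-level noise
u = rng.normal(0, 1, n_persons)
news_outlets = (2 + 0.5 * sns_visits + u[person]
                + rng.normal(0, 1, n_persons * n_days))

df = pd.DataFrame({"person": person, "sns_visits": sns_visits,
                   "news_outlets": news_outlets})

# REWB decomposition: between = person mean, within = daily deviation
df["sns_between"] = df.groupby("person")["sns_visits"].transform("mean")
df["sns_within"] = df["sns_visits"] - df["sns_between"]

# Random-intercept mixed model; the coefficient on sns_within estimates
# the within-person effect that INE arguments depend on
model = smf.mixedlm("news_outlets ~ sns_within + sns_between",
                    df, groups=df["person"]).fit()
print(model.params[["sns_within", "sns_between"]])
```

In this simulation both coefficients recover the same simulated effect, but in real data they can diverge; it is the within-person coefficient that speaks to whether visiting intermediaries more often than usual leads to more varied news exposure than usual.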


Conclusion
We used large-scale observational data to avoid the limited reliability and validity of self-reports on news exposure. Leveraging the potential of such data with the REWB model, our study provides strong evidence that getting more and more-diverse news as a consequence of other media-related activities is a common phenomenon in the online sphere. The findings contradict widely held concerns that social media and web portals specifically contribute to news avoidance and restrict the diversity of news diets. Note that we followed previous studies and measured the variety of news diets by counting the number of outlets visited. Given the overall low frequency of news visits, intermediaries add diversity to the news diets of the large majority of participants with a small news repertoire (2). While we cannot say that outlet variety always equals viewpoint variety, prior research has shown that using a larger number of online news sources typically translates into more-diverse overall news exposure (15). In contrast to previous studies (9, 10), we cannot quantify diversity in terms of cross-cutting exposure, but note that previous studies have shown little evidence for strong partisan alignments of news audiences in Germany (16) on the outlet level, so that variety would have to be measured on the level of individual news items, which requires URL-level tracking and content analysis data. In addition, future combinations of web tracking with experience sampling surveys are needed to disentangle in what instances nonregular news use is entirely nonintentional and how the respective contents specifically affect the diversity in news diets.

Tuesday, February 11, 2020

We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people; but in nonreligious cultural contexts, religiosity conferred no such longevity benefit

Ebert, T., Gebauer, J. E., Talman, J. R., & Rentfrow, P. J. (2020). Religious people only live longer in religious cultural contexts: A gravestone analysis. Journal of Personality and Social Psychology, Feb 2020. https://doi.org/10.1037/pspa0000187

Abstract: Religious people live longer than nonreligious people, according to a staple of social science research. Yet, are those longevity benefits an inherent feature of religiosity? To find out, we coded gravestone inscriptions and imagery to assess the religiosity and longevity of 6,400 deceased people from religious and nonreligious U.S. counties. We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people. In nonreligious cultural contexts, however, religiosity conferred no such longevity benefits. Evidently, a longer life is not an inherent feature of religiosity. Instead, religious people only live longer in religious cultural contexts where religiosity is valued. Our study answers a fundamental question on the nature of religiosity and showcases the scientific potential of gravestone analyses.


Managing Systemic Financial Crises: New Lessons and Lessons Relearned

Managing Systemic Financial Crises: New Lessons and Lessons Relearned. Marina Moretti; Marc C Dobler; Alvaro Piris. IMF Departmental Paper No. 20/05, February 11, 2020. https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2020/02/10/Managing-Systemic-Financial-Crises-New-Lessons-and-Lessons-Relearned-48626

Chapter 1 Introduction
Systemic financial crises have been a recurring feature of economies in modern times. Panics, wherein collapsing trust in the banking system and creditor runs have significant negative consequences for economic activity—rare events in any one country—have occurred relatively frequently across the IMF membership. Common causes include high leverage, booming credit, an erosion of underwriting standards, exposure to rapidly rising property prices and other asset bubbles, excessive exposure to the government, inadequate supervision, and often a high external current account deficit. Financial distress typically lasts several years and is associated with large economic contractions and high fiscal costs (Laeven and Valencia 2018). Figure 1 shows the prevalence of systemic financial crises over the past 30 years, including the number of crisis episodes each year. The global financial crisis (GFC) was just such a panic, albeit one that transcended national and regional boundaries.
IMF staff experience in helping countries manage systemic banking crises has evolved over time. Major financial sector problems have been addressed in the context of IMF-supported programs primarily in emerging market economies and developing countries and, more recently, in some advanced economies during the GFC. The IMF approach to managing these events was summarized in a 2003 paper (Hoelscher and Quintyn 2003) before there was international consensus on legal frameworks, preparedness, and policy approaches, and when practices varied widely across the membership. The principles outlined in that paper built on staff experience in a range of countries—notably, Indonesia, Republic of Korea, Russia, and Thailand in the late 1990s; and Argentina, Ecuador, Turkey, and Uruguay in the early 2000s. It emphasized that managing a systemic banking crisis is a complex, multiyear process and presented tools available as part of a comprehensive framework for addressing systemic banking problems while minimizing taxpayers’ costs. Although these core concepts and principles remain largely valid today, they merit a revisit following the experiences and lessons learned from the GFC.
The GFC shared similarities with past systemic crises, albeit with an impact felt well beyond directly affected countries (Claessens and others 2010). As in previous episodes of financial distress, the countries most affected by the GFC—the US starting in 2008 and several countries in Europe—saw creditor runs and contagion across institutions, significant fiscal and quasi-fiscal outlays, and a sharp contraction in credit and economic activity (see Figure 1). The impact was more widely felt across the global economy because the crisis originated in advanced economies with large financial sectors. These countries accounted for a substantial portion of global economic output, trade, and financial activity and were home to internationally active financial firms providing significant cross-border services. The speed of transmission of financial distress across borders was unprecedented, given the complex and opaque financial linkages between financial firms. These factors introduced new challenges, as they impacted the effectiveness of many existing crisis management tools.
Reflecting these new challenges, individual country responses during the GFC differed from past experiences in important respects (Table 1):
*  The size and scope of liquidity support provided by major central banks were unprecedented. More liquidity was provided to more counterparties for longer periods against a wider range of collateral. Much of this support was through liquidity facilities open to all market participants, while some was provided as emergency liquidity assistance (ELA) to individual institutions. This occurred against the backdrop of accommodative monetary policy and quantitative easing.
*  Explicit liability guarantees were more selectively deployed than in past crises, when blanket guarantees covering a wide set of liabilities were more commonly used by authorities. During the GFC (with some notable exceptions), explicit liability guarantees typically applied only to specific institutions, new debt issuance, or specific asset classes, or were capped (for example, at a higher level of deposit insurance). However, implicit guarantees were widespread, as demonstrated by the extensive public solvency support provided to financial institutions and markets. Systemic financial institutions were rarely liquidated or resolved, and, of those that were, some proved destabilizing for the broader financial system. This trend reflected in part inadequate powers to resolve such firms in an orderly way.
*  Difficulties in achieving effective cross-border cooperation in resolution between authorities in different countries came to the fore, given the global footprint of some weak institutions. The lack of mechanisms to enforce resolution measures on a cross-border basis and cooperate more broadly led, in some cases, to the breakup of cross-border groups into national components.
*  More emphasis was placed on banks’ ability to manage nonperforming assets internally or through market disposals, with less reliance on centralized asset management companies (AMCs)—public agencies that purchase and manage nonperforming loans (NPLs). Protracted weak growth in some countries, the large scale of the problem, and gaps in legal frameworks also meant that progress in addressing distressed assets and deleveraging private sector balance sheets was slower in some countries than in previous crises.

Table 1. Lessons on the Design of the Financial Safety Net (What is Similar? / What is New?)

*  Similar: Escalating early intervention and enforcement measures. New: More intrusive supervision and early intervention powers.
*  Similar: Special resolution regimes for banks. New: A new international standard on resolution regimes for systemic financial institutions, requiring a range of resolution powers and tools.
*  Similar: Establishing deposit insurance (if prior conditions enable)(1) with adequate ex ante funding, available to fund resolution on a least-cost basis. New: An international standard on deposit insurance, requiring ex ante funding and no coinsurance; desirability of depositor preference.
*  Similar: Capacity to provide emergency liquidity to banks, at the discretion of the central bank. New: Liquidity assistance frameworks with broader eligibility conditions, collateral, and safeguards.

(1) IMF staff does not recommend establishing a deposit insurance system in countries with weak banking supervision, ineffective resolution regimes, and identifiably weak banks. Doing so would expose a nascent scheme to significant risk (when it has yet to build adequate funding and operational capacity) and could undermine depositor confidence.
The GFC was a watershed. Policymakers were confronted with gaps and weaknesses in their legal and policy frameworks for addressing bank liquidity and solvency problems, in their understanding of systemic risk in institutions and markets, and in domestic and international cooperation. Under these constraints, the policy responses that were deployed put substantial public resources at risk. While the responses were ultimately successful in stabilizing financial systems and the macroeconomy, the fiscal and economic costs were high. The far-reaching impact of the GFC provided impetus for a major overhaul of financial sector oversight (Financial Stability Forum 2008; IMF 2018). The regulatory reform agenda agreed to by the Group of Twenty leaders in 2009 elevated the discussions to the highest policy level and kept international attention focused on establishing a stronger set of globally consistent rules. The new architecture aimed to (1) enhance capital buffers and reduce leverage and financial procyclicality; (2) contain funding mismatches and currency risk; (3) enhance the regulation and supervision of large and interconnected institutions, including by expanding the supervisory perimeter; (4) improve the supervision of a complex financial system; (5) align governance and compensation practices of banks with prudent risk taking; (6) overhaul resolution regimes of large financial institutions; and (7) introduce macroprudential policies. Through its multilateral and bilateral surveillance of its membership, including the Financial Sector Assessment Program (FSAP), Article IV missions, and its Global Financial Stability Reports, the IMF has contributed to implementing the regulatory reform agenda.
This paper summarizes the general principles, strategies, and techniques for preparing for and managing systemic banking crises, based on the views and experience of IMF staff, considering developments since the GFC. The paper does not summarize the causes of the GFC, its evolution, or the policy responses adopted; these have been well documented elsewhere. Moreover, it does not cover the full reform agenda since the crisis; rather, it covers only two parts—one on key elements of a legal and operational framework for crisis preparedness (the “financial safety net”) and the other on operational strategies and techniques to manage systemic crises if they occur. Each section summarizes relevant lessons learned during the GFC and other recent episodes of financial distress, merging them with preexisting advice to give a complete picture of the main elements of IMF staff advice to member countries on operational aspects of crisis preparedness and management. The advice builds on and is consistent with international financial standards, tailored to country-specific circumstances based on IMF staff crisis experience. The advice recognizes that every crisis is different and that managing systemic failures is exceptionally challenging, both operationally and politically. Nonetheless, better-prepared authorities are less likely to resort to bailing out bank shareholders and creditors when facing such circumstances.
Part I, on crisis preparedness, outlines the design and operational features of a well-designed financial safety net. It discusses how staff advice on these issues has evolved, drawing from the international standards and good practice that emerged in the aftermath of the GFC. Effective financial safety nets play an important role in minimizing the risk of systemwide financial distress—by increasing the likelihood that failing financial institutions can be resolved without triggering financial instability. However, they cannot eliminate that risk, particularly at times of severe stress.
Part II, on crisis management, discusses aspects of a policy response to a full-blown banking crisis. It details the evolution of IMF advice in light of what worked well—or less well—during the GFC, reflecting the experience of IMF staff in actual crisis situations. The narrative is organized around policies for dealing with three distinct aspects of a systemic banking crisis:

*  Containment—strategies and techniques to stem creditor runs and stabilize financial sector liquidity in the acute phase of panic and high uncertainty. This phase is typically short-lived, with an escalating policy response as needed to avoid the collapse of the financial system.
*  Restructuring and resolution—strategies and techniques to diagnose bank soundness and viability, and to recapitalize or resolve failing financial institutions, which are typically implemented over the following year or more, depending on the severity of the situation.
*  Dealing with distressed assets—strategies and techniques to clean up pri­vate sector balance sheets that first identify and then remove impediments to effective resolution of distressed assets, with implementation likely to stretch over several years.

IMF member countries have continued to cope with financial panics and widespread financial sector weakness. The IMF remains fully engaged on these issues, often in the context of IMF-supported programs, with a significant focus on managing systemic problems and financial sector reforms. Staff continue to provide support and advice on supervisory practice, resolution, deposit insurance, and emergency liquidity in IMF member countries, learning from experience and adapting policy advice to developments and country-specific circumstances.


Box 9. Dealing with Excessive Related-Party Exposures

Excessive related-party exposures present a major risk to financial stability. Related-party loans that go unreported conceal credit and concentration risk and may be on preferred terms, reducing bank profitability and solvency. Persistently high related-party exposures also hold down economic growth by tying up capital that could otherwise be used to provide lending to legitimate, creditworthy businesses on an arm’s-length basis. Related-party exposures complicate bank resolution, as shareholders whose rights have been suspended have an incentive to default on their loans to the bank.

Opaque bank ownership greatly facilitates the hiding of related-party exposures and transactions. Opaque ownership is associated with poor governance, AML/CFT violations, and fraudulent activities. Banks without clear ultimate beneficial owners cannot count on shareholder support in times of crisis, and the quality of their capital cannot be verified. Moreover, unknown owners cannot be held accountable for criminal actions leading to a bank’s failure.
Resolving these problems requires a three-pillar approach. Legal reforms are needed to lay the foundation for targeted bank diagnostics and effective enforcement actions:

*  Legal reforms to introduce international standards for transparent disclosure and monitoring of bank owners and related parties—including prudent limits, strict conflict of interest rules on the processes and procedures for dealing with related parties, and escalating enforcement measures. Non-transparent ownership should be made a legal ground for license revocation or resolution, and the supervisor authorized to presume a related party under certain circumstances. This shifts the “burden of proof” from supervisors to banks—to demonstrate that a suspicious transaction is not with a related party.

*  Bank diagnostics are targeted at identifying ultimate beneficial owners and related-party exposures and transactions and assessing compliance with prudential lending limits for related-party and large exposures. The criteria for identification include control, economic dependency, and acting in concert. Identification of related-party transactions should also consider their risk-related features, such as the existence of preferential terms, the quality of documentation, and internal controls over the transactions.

*  Enforcement actions are taken to (1) remove unsuitable bank shareholders—that is, shareholders whose ultimate beneficial owner is not identified or who are otherwise found to be unsuitable; and (2) unwind excessive related-party exposures through repayment or disposal of the exposure, or resolution of the relationship (change in ownership of the bank or the borrower).

The three-pillar approach is best implemented in the context of a comprehensive financial sector strategy. There may not be enough time to implement legal reforms during early intervention or the resolution of systemic banks. In such situations, suspected related-party exposures and liabilities must be swiftly identified and ringfenced. Once the system is stabilized, however, the three-pillar approach should be implemented for all banks (including those in liquidation).

Source: Karlsdóttir and others (forthcoming).

Those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans

Musical taste, in-group favoritism, and social identity theory: Re-testing the predictions of the self-esteem hypothesis. Adam J Lonsdale. Psychology of Music, February 10, 2020. https://doi.org/10.1177/0305735619899158

Abstract: Musical taste is thought to function as a social “badge” of group membership, contributing to an individual’s sense of social identity. Following from this, social identity theory predicts that individuals should perceive those who share their musical tastes more favorably than those who do not. Social identity theory also asserts that this in-group favoritism is motivated by the need to achieve, maintain, or enhance a positive social identity and self-esteem (i.e., the “self-esteem hypothesis”). The findings of the present study supported both of these predictions. Participants rated fans of their favorite musical style significantly more favorably than fans of their least favorite musical style. The present findings also offer, for the first time, evidence of significant positive correlations between an individual’s self-esteem and the in-group bias shown to those who share their musical tastes. However, significant relationships with in-group identification also indicate that self-esteem is unlikely to be the sole factor responsible for this apparent in-group bias. Together these findings suggest that those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans.

Keywords: in-group bias, in-group favoritism, musical taste, self-esteem, social identity


The higher the participants rated their own IQ, the higher their ratings of EQ (emotional intelligence), attractiveness, and health; men overestimated their IQ, attractiveness & health more than women did, but not their EQ

Correlates of Self-Estimated Intelligence. Adrian Furnham and Simmy Grover. J. Intell. 2020, 8(1), 6; February 10 2020. https://www.mdpi.com/2079-3200/8/1/6

Abstract: This paper reports two studies examining correlates of self-estimated intelligence (SEI). In the first, 517 participants completed a measure of SEI as well as self-estimated emotional intelligence (SEEQ), physical attractiveness, health, and other ratings. Males rated their IQ higher (74.12 vs. 71.55) but EQ lower (68.22 vs. 71.81) than females but there were no differences in their ratings of physical health in Study 1. Correlations showed for all participants that the higher they rated their IQ, the higher their ratings of EQ, attractiveness, and health. A regression of self-estimated intelligence onto three demographic, three self-ratings and three beliefs factors accounted for 30% of the variance. Religious, educated males who did not believe in alternative medicine gave higher SEI scores. The second study partly replicated the first, with an N = 475. Again, males rated their IQ higher (106.88 vs. 100.71) than females, but no difference was found for EQ (103.16 vs. 103.74). Males rated both their attractiveness (54.79 vs. 49.81) and health (61.24 vs. 55.49) higher than females. An objective test-based cognitive ability and SEI were correlated r = 0.30. Correlations showed, as in Study 1, positive relationships between all self-ratings. A regression showed the strongest correlates of SEI were IQ, sex and positive self-ratings. Implications and limitations are noted.

Keywords: self-estimated; intelligence; sex differences; attitudes



Non-reproducible: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals

Conservatives and liberals have similar physiological responses to threats. Bert N. Bakker, Gijs Schumacher, Claire Gothreau & Kevin Arceneaux. Nature Human Behaviour, February 10 2020. https://www.nature.com/articles/s41562-020-0823-z

Abstract: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals. This work launched an approach aimed at uncovering the biological roots of ideology. Despite wide-ranging scientific and popular impact, independent laboratories have not replicated the study. We conducted a pre-registered direct replication (n = 202) and conceptual replications in the United States (n = 352) and the Netherlands (n = 81). Our analyses do not support the conclusions of the original study, nor do we find evidence for broader claims regarding the effect of disgust and the existence of a physiological trait. Rather than studying unconscious responses as the real predispositions, alignment between conscious and unconscious responses promises deeper insights into the emotional roots of ideology.

People rated their own faces as more attractive than others rated them, no matter if original or artificially rendered more masculine or feminine

Influence of sexual dimorphism on the attractiveness evaluation of one’s own face. Zhaoyi Li, Zhiguo Hu, Hongyan Liu. Vision Research, Volume 168, March 2020, Pages 1-8. https://doi.org/10.1016/j.visres.2020.01.005

Abstract: The present study aimed to explore the influence of sexual dimorphism on the evaluation of the attractiveness of one’s own face. In the experiment, a masculinized and a feminized version of the self-faces of the participants were obtained by transferring the original faces toward the average male or female face. The participants were required to rate the attractiveness of three types (original, masculine, feminine) of their own faces and the other participants’ faces in same-sex and opposite-sex contexts. The results revealed that the participants rated their own faces as more attractive than other participants rated them regardless of the sexually dimorphic type (original, masculine, feminine) or the evaluation context. More importantly, the male and female participants showed different preferences for the three types of self-faces. Specifically, in the same-sex context, the female participants rated their own original faces as significantly more attractive than the masculine and feminine faces, and the male participants rated their own masculine faces as significantly more attractive than the feminine faces; while in the opposite-sex context, no significant difference among the attractiveness scores of the three types of self-faces was found in both the male and female participants. The present study provides empirical evidence of the influence of sexual dimorphism on the evaluation of the attractiveness of self-faces.


We examined perceptions of the Dark Triad traits in 6 occupations; participants believed musicians & lawyers should be high in the Dark Triad, and teachers should be high in narcissism, but low in Machiavellianism & psychopathy

Insert a joke about lawyers: Evaluating preferences for the Dark Triad traits in six occupations. Cameron S. Kay, Gerard Saucier. Personality and Individual Differences, Volume 159, 1 June 2020, 109863. https://doi.org/10.1016/j.paid.2020.109863

Highlights
•    We examined perceptions of the Dark Triad traits in six occupations.
•    Participants believed musicians and lawyers should be high in the Dark Triad.
•    Participants believed teachers should be high in narcissism.
•    Overall, participants believed others should have the same dark traits they have.

Abstract: The current research examined how perceptions of the Dark Triad traits vary across occupations. Results from two studies (N total = 933) suggested that participants believe it is acceptable, if not advantageous, for lawyers and musicians to be high in the Dark Triad traits. Participants, likewise, indicated that teachers should be high in narcissism but low in Machiavellianism and psychopathy. Potentially, the performative aspects of narcissism are considered an asset for teachers, while Machiavellianism and psychopathy are considered a liability. The findings further indicated that, regardless of the occupation in question, people high in a specific Dark Triad trait believe others should also be high in that same trait. All results are considered in the context of the attraction-selection-attrition model.

Cultured meat safety: Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs; but with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells

The Myth of Cultured Meat: A Review. Sghaier Chriki and Jean-François Hocquette. Front. Nutr., February 7 2020. https://doi.org/10.3389/fnut.2020.00007

Abstract: To satisfy the increasing demand for food by the growing human population, cultured meat (also called in vitro, artificial or lab-grown meat) is presented by its advocates as a good alternative for consumers who want to be more responsible but do not wish to change their diet. This review aims to update the current knowledge on this subject by focusing on recent publications and issues not well described previously. The main conclusion is that no major advances were observed despite many new publications. Indeed, in terms of technical issues, research is still required to optimize cell culture methodology. It is also almost impossible to reproduce the diversity of meats derived from various species, breeds and cuts. Although these are not yet known, we speculated on the potential health benefits and drawbacks of cultured meat. Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs. On the other hand, with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells. Likewise, the control of its nutritional composition is still unclear, especially for micronutrients and iron. Regarding environmental issues, the potential advantages of cultured meat for greenhouse gas emissions are a matter of controversy, although less land will be used compared to livestock, ruminants in particular. However, more criteria need to be taken into account for a comparison with current meat production. Cultured meat will have to compete with other meat substitutes, especially plant-based alternatives. Consumer acceptance will be strongly influenced by many factors and consumers seem to dislike unnatural food. Ethically, cultured meat aims to use considerably fewer animals than conventional livestock farming. However, some animals will still have to be reared to harvest cells for the production of in vitro meat. 
Finally, we discussed in this review the nebulous status of cultured meat from a religious point of view. Indeed, religious authorities are still debating the question of whether in vitro meat is Kosher or Halal (e.g., compliant with Jewish or Islamic dietary laws).

---
Health and Safety

Advocates of in vitro meat claim that it is safer than conventional meat, on the grounds that lab-grown meat is produced in an environment fully controlled by researchers or producers, without any other organism, whereas conventional meat is part of an animal in contact with the external world, although each tissue (including muscles) is protected by the skin and/or by mucosa. Indeed, with no digestive organs nearby, and therefore no potential contamination at slaughter (a risk from which conventional meat is in any case generally protected), cultured muscle cells do not have the same opportunity to encounter intestinal pathogens such as E. coli, Salmonella or Campylobacter (10), three pathogens that are responsible for millions of episodes of illness each year (19). However, we can argue that scientists or manufacturers are never in a position to control everything, and any mistake or oversight may have dramatic consequences in the event of a health problem. Such problems occur frequently nowadays during the industrial production of chopped meat.

Another positive aspect related to the safety of cultured meat is that it is not produced from animals raised in a confined space, so that the risk of an outbreak is eliminated and there is no need for costly vaccinations against diseases like influenza. On the other hand, we can argue that it is the cells, not the animals, which live in high numbers in incubators to produce cultured meat. Unfortunately, we do not know all the consequences of meat culture for public health, as in vitro meat is a new product. Some authors argue that the process of cell culture is never perfectly controlled and that some unexpected biological mechanisms may occur. For instance, given the great number of cell multiplications taking place, some dysregulation of cell lines is likely to occur as happens in cancer cells, although we can imagine that deregulated cell lines can be eliminated for production or consumption. This may have unknown potential effects on the muscle structure and possibly on human metabolism and health when in vitro meat is consumed (21).

Antibiotic resistance is known as one of the major problems facing livestock (7). In comparison, cultured meat is kept in a controlled environment and close monitoring can easily stop any sign of infection. Nevertheless, if antibiotics are added to prevent any contamination, even occasionally to stop early contamination and illness, this argument is less convincing.

Moreover, it has been suggested that the nutritional content of cultured meat can be controlled by adjusting the fat components used in the production medium. Indeed, the ratio between saturated fatty acids and polyunsaturated fatty acids can be easily controlled. Saturated fats can be replaced by other types of fats, such as omega-3, but the risk of higher rancidity has to be controlled. However, new strategies have been developed to increase the content of omega-3 fatty acids in meat using current livestock farming systems (23). In addition, no strategy has been developed to endow cultured meat with certain micronutrients specific to animal products (such as vitamin B12 and iron) which contribute to good health. Furthermore, the positive effect of any (micro)nutrient can be enhanced if it is introduced in an appropriate matrix. In the case of in vitro meat, it is not certain that the other biological compounds, and the way they are organized in cultured cells, could potentiate the positive effects of micronutrients on human health. Uptake of micronutrients (such as iron) by cultured cells thus has to be well understood. We cannot exclude a reduction in the health benefits of micronutrients due to the culture medium, depending on its composition. And adding chemicals to the medium makes cultured meat a more "chemical" food with less of a clean label.

Monday, February 10, 2020

Mexican drug cartels: We see a positive connection between cartel presence & better socioeconomic outcomes at the municipality level; results help understand why drug lords have great support in the communities in which they operate

Following the poppy trail: Origins and consequences of Mexican drug cartels. Tommy E. Murphy, Martín A. Rossi. Journal of Development Economics, Volume 143, March 2020, 102433. https://doi.org/10.1016/j.jdeveco.2019.102433

Highlights
•    We study the origins, and economic and social consequences of Mexican drug cartels.
•    The location of current cartels is strongly linked to the location of Chinese migration at the beginning of the 20th century.
•    We report a positive connection between cartel presence and better socioeconomic outcomes at the municipality level.
•    Our results help to understand why drug lords have great support in the local communities in which they operate.

Abstract: This paper studies the origins, and economic and social consequences of some of the most prominent drug trafficking organizations in the world: the Mexican cartels. It first traces the current location of cartels to the places where Chinese migrated at the beginning of the 20th century, discussing and documenting how both events are strongly connected. Information on Chinese presence at the beginning of the 20th century is then used to instrument for cartel presence today, to identify the effect of cartels on society. Contrary to what seems to happen with other forms of organized crime, the IV estimates in this study indicate that at the local level there is a positive link between cartel presence and better socioeconomic outcomes (e.g. lower marginalization rates, lower illiteracy rates, higher salaries), better public services, and higher tax revenues, evidence that is consistent with the known stylized fact that drug lords tend to have great support in the local communities in which they operate.

JEL classification: N36, O15
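The paper's identification strategy, using early-20th-century Chinese migration as an instrument for current cartel presence, amounts to two-stage least squares. The sketch below uses entirely invented data (variable names and effect sizes are hypothetical, not from the paper) to show how instrumenting strips out the bias that a naive regression picks up from an unobserved confound:

```python
# Illustrative two-stage least squares (2SLS) sketch with invented data:
# instrument Z (historical migration), endogenous X (cartel presence),
# outcome Y (a socioeconomic indicator). Not the authors' dataset.
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # hypothetical municipalities

chinese_1900 = rng.binomial(1, 0.3, n).astype(float)  # instrument Z
confound = rng.standard_normal(n)                     # unobserved factor
# Cartel presence is driven by historical migration plus the confound.
cartel = 0.8 * chinese_1900 + 0.5 * confound + rng.standard_normal(n)
# The outcome improves with cartel presence but also with the confound.
outcome = 0.4 * cartel + 0.7 * confound + rng.standard_normal(n)

def ols(X, y):
    """Least-squares coefficients; X must include an intercept column."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: project the endogenous regressor on the instrument.
a = ols(np.column_stack([ones, chinese_1900]), cartel)
cartel_hat = a[0] + a[1] * chinese_1900
# Stage 2: regress the outcome on the stage-1 fitted values.
b = ols(np.column_stack([ones, cartel_hat]), outcome)
# Naive OLS, for contrast, absorbs the confound and overstates the effect.
b_ols = ols(np.column_stack([ones, cartel]), outcome)
print(f"IV estimate: {b[1]:.2f}, naive OLS estimate: {b_ols[1]:.2f}")
```

The design choice that matters here is the exclusion restriction: the instrument affects the outcome only through cartel presence, which is what lets stage 2 recover the causal coefficient.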


Increasingly, evidence suggests aggressive video games have little impact on player behavior in the realm of aggression and violence, but most professional guild policy statements failed to reflect these data

Aggressive Video Games Research Emerges from its Replication Crisis (Sort of). Christopher J Ferguson. Current Opinion in Psychology, February 10 2020. https://doi.org/10.1016/j.copsyc.2020.01.002

Highlights
• Previous research on aggressive video games (AVGs) suffered from high false positive rates.
• New, preregistered studies suggest AVGs have little impact on player aggression.
• Prior meta-analyses overestimated the evidence for effects.
• Professional guild statements by the American Psychological Association and American Academy of Pediatrics are inaccurate.
• Consumers may not mimic behaviors seen in fictional media.

Abstract: The impact of aggressive video games (AVGs) on aggression and violent behavior among players, particularly youth, has been debated for decades. In recent years, evidence for publication bias, questionable researcher practices, citation bias and poor standardization of many measures and research designs has indicated that the false positive rate among studies of AVGs has been high. Several studies have undergone retraction. A small recent wave of preregistered studies has largely returned null results for outcomes related to youth violence as well as outcomes related to milder aggression. Increasingly, evidence suggests AVGs have little impact on player behavior in the realm of aggression and violence. Nonetheless, most professional guild policy statements (e.g. American Psychological Association) have failed to reflect these changes in the literature. Such policy statements should be retired or revised lest they misinform the public or do damage to the reputation of these organizations.


The Nuclear Family Was a Mistake: Loneliness, lack of support, fragility

The Nuclear Family Was a Mistake. David Brooks. The Atlantic. Mar 2020. https://www.theatlantic.com/magazine/archive/2020/03/the-nuclear-family-was-a-mistake/605536/

The family structure we’ve held up as the cultural ideal for the past half century has been a catastrophe for many. It’s time to figure out better ways to live together.

Excerpts:

This is the story of our times—the story of the family, once a dense cluster of many siblings and extended kin, fragmenting into ever smaller and more fragile forms. The initial result of that fragmentation, the nuclear family, didn’t seem so bad. But then, because the nuclear family is so brittle, the fragmentation continued. In many sectors of society, nuclear families fragmented into single-parent families, single-parent families into chaotic families or no families.

If you want to summarize the changes in family structure over the past century, the truest thing to say is this: We’ve made life freer for individuals and more unstable for families. We’ve made life better for adults but worse for children. We’ve moved from big, interconnected, and extended families, which helped protect the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families (a married couple and their children), which give the most privileged people in society room to maximize their talents and expand their options. The shift from bigger and interconnected extended families to smaller and detached nuclear families ultimately led to a familial system that liberates the rich and ravages the working-class and the poor.

...

Ever since I started working on this article, a chart has been haunting me [https://www.pewforum.org/2019/12/12/religion-and-living-arrangements-around-the-world/pf_12-12-19_religion-households-00-02/]. It plots the percentage of people living alone in a country against that nation’s GDP. There’s a strong correlation. Nations where a fifth of the people live alone, like Denmark and Finland, are a lot richer than nations where almost no one lives alone, like the ones in Latin America or Africa. Rich nations have smaller households than poor nations. The average German lives in a household with 2.7 people. The average Gambian lives in a household with 13.8 people.

That chart suggests two things, especially in the American context. First, the market wants us to live alone or with just a few people. That way we are mobile, unattached, and uncommitted, able to devote an enormous number of hours to our jobs. Second, when people who are raised in developed countries get money, they buy privacy.

For the privileged, this sort of works. The arrangement enables the affluent to dedicate more hours to work and email, unencumbered by family commitments. They can afford to hire people who will do the work that extended family used to do. But a lingering sadness lurks, an awareness that life is emotionally vacant when family and close friends aren’t physically present, when neighbors aren’t geographically or metaphorically close enough for you to lean on them, or for them to lean on you. Today’s crisis of connection flows from the impoverishment of family life.

I often ask African friends who have immigrated to America what most struck them when they arrived. Their answer is always a variation on a theme—the loneliness. It’s the empty suburban street in the middle of the day, maybe with a lone mother pushing a baby carriage on the sidewalk but nobody else around.

For those who are not privileged, the era of the isolated nuclear family has been a catastrophe. It’s led to broken families or no families; to merry-go-round families that leave children traumatized and isolated; to senior citizens dying alone in a room. All forms of inequality are cruel, but family inequality may be the cruelest. It damages the heart. Eventually family inequality even undermines the economy the nuclear family was meant to serve: Children who grow up in chaos have trouble becoming skilled, stable, and socially mobile employees later on.

Human populations vary substantially & unexpectedly in both the range and pattern of facial sexually dimorphic traits; European & South American populations display larger levels of facial sexual dimorphism than African populations

Kleisner, Karel, Petr Tureček, S. Craig Roberts, Jan Havlicek, Jaroslava V. Valentova, Robert M. Akoko, Juan David Leongómez, et al. 2020. “How and Why Patterns of Sexual Dimorphism in Human Faces Vary Across the World.” PsyArXiv. February 10. doi:10.31234/osf.io/7vdm

Abstract: Sexual selection, including mate choice and intrasexual competition, is responsible for the evolution of some of the most elaborated and sexually dimorphic traits in animals. Although there is clear sexual dimorphism in the shape of human faces, it is not clear whether this is similarly due to mate choice, or whether mate choice affects only part of the facial shape difference between men and women.  Here we explore these questions by investigating patterns of both facial shape and facial preference across a diverse set of human populations. We find evidence that human populations vary substantially and unexpectedly in both the range and pattern of facial sexually dimorphic traits. In particular, European and South American populations display larger levels of facial sexual dimorphism than African populations. Neither cross-cultural differences in facial shape variation, differences in body height between sexes, nor differing preferences for facial sex-typicality across countries, explain the observed patterns of facial dimorphism. Altogether, the association between morphological sex-typicality and attractiveness is moderate for women and weak (or absent) for men. Analysis that distinguishes between allometric and non-allometric components reveals that non-allometric sex-typicality is preferred in women’s faces but not in faces of men. This might be due to different regimes of ongoing sexual selection acting on men, such as stronger intersexual selection for body height and more intense intrasexual physical competition, compared with women.
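The abstract's split between allometric and non-allometric components is conventionally made in geometric morphometrics by regressing shape variables on log centroid size and treating the residuals as the size-free (non-allometric) component. A minimal sketch with invented data (landmark counts and effect sizes are hypothetical, not from the study):

```python
# Hedged sketch (simulated data): separating allometric from non-allometric
# shape variation by regressing shape coordinates on log centroid size.
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 20  # hypothetical faces, flattened landmark coordinates

log_size = rng.normal(5.0, 0.2, n)  # log centroid size per face
slope = rng.standard_normal(p)      # per-coordinate allometric slope
shape = np.outer(log_size, slope) + rng.standard_normal((n, p))

# Regress each coordinate on log size (with intercept), all at once.
X = np.column_stack([np.ones(n), log_size])
beta, *_ = np.linalg.lstsq(X, shape, rcond=None)

allometric = X @ beta                # size-predicted shape component
non_allometric = shape - allometric  # residual, size-free shape

# By construction, the residual component is uncorrelated with size.
corr = np.corrcoef(log_size, non_allometric[:, 0])[0, 1]
print(f"corr(size, residual coord 0) = {corr:.3f}")
```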