Wednesday, February 12, 2020

Congenital amusia (tone deafness) is a lifelong musical disorder long said to affect 4% of the population (a single estimate based on a single test from 1980); its prevalence is closer to 1.5% of the population, and it is highly heritable

Peretz, Isabelle, and Dominique T. Vuvan. 2020. “Prevalence of Congenital Amusia.” PsyArXiv. February 12. doi:10.1038/ejhg.2017.15

Abstract: Congenital amusia (commonly known as tone deafness) is a lifelong musical disorder that affects 4% of the population according to a single estimate based on a single test from 1980. Here we present the first large-scale measure of prevalence with a sample of 20 000 participants, which does not rely on self-referral. On the basis of three objective tests and a questionnaire, we show that (a) the prevalence of congenital amusia is only 1.5%, with slightly more females than males, unlike other developmental disorders where males often predominate; (b) self-disclosure is a reliable index of congenital amusia, and the disorder is hereditary, with 46% of first-degree relatives similarly affected; (c) the deficit is not attenuated by musical training and (d) it emerges in relative isolation from other cognitive disorders, except for spatial orientation problems. Hence, we suggest that congenital amusia is likely to result from genetic variations that affect musical abilities specifically.

Domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis

Revisiting more or less: influence of numerosity and size on potential prey choice in the domestic cat. Jimena Chacha, Péter Szenczi, Daniel González, Sandra Martínez-Byer, Robyn Hudson & Oxána Bánszegi. Animal Cognition, Feb 12 2020. https://link.springer.com/article/10.1007/s10071-020-01351-w

Abstract: Quantity discrimination is of adaptive relevance in a wide range of contexts and across a wide range of species. Trained domestic cats can discriminate between different numbers of dots, and we have shown that they also spontaneously choose between different numbers and sizes of food balls. In the present study we performed two experiments with 24 adult cats to investigate spontaneous quantity discrimination in the more naturalistic context of potential predation. In Experiment 1 we presented each cat with the simultaneous choice between a different number of live prey (1 white mouse vs. 3 white mice), and in Experiment 2 with the simultaneous choice between live prey of different size (1 white mouse vs. 1 white rat). We repeated each experiment six times across 6 weeks, testing half the cats first in Experiment 1 and then in Experiment 2, and the other half in the reverse order. In Experiment 1 the cats more often chose the larger number of small prey (3 mice), and in Experiment 2, more often the small size prey (a mouse). They also showed repeatable individual differences in the choices which they made and in the performance of associated predation-like behaviours. We conclude that domestic cats spontaneously discriminate between the number and size of potential prey in a way that can be interpreted as adaptive for a lone-hunting, obligate carnivore, and show complex levels of risk–reward analysis.

Non-reproducible: Evidence that social network index is associated with gray matter volume from a data-driven investigation

No strong evidence that social network index is associated with gray matter volume from a data-driven investigation. Chujun Lin et al. Cortex, February 12 2020. https://doi.org/10.1016/j.cortex.2020.01.021

Abstract: Recent studies in adult humans have reported correlations between individual differences in people’s Social Network Index (SNI) and gray matter volume (GMV) across multiple regions of the brain. However, the cortical and subcortical loci identified are inconsistent across studies. These discrepancies might arise because different regions of interest were hypothesized and tested in different studies without controlling for multiple comparisons, and/or from insufficiently large sample sizes to fully protect against statistically unreliable findings. Here we took a data-driven approach in a pre-registered study to comprehensively investigate the relationship between SNI and GMV in every cortical and subcortical region, using three predictive modeling frameworks. We also included psychological predictors such as cognitive and emotional intelligence, personality, and mood. In a sample of healthy adults (n = 92), neither multivariate frameworks (e.g., ridge regression with cross-validation) nor univariate frameworks (e.g., univariate linear regression with cross-validation) showed a significant association between SNI and any GMV or psychological feature after multiple comparison corrections (all R-squared values ≤ 0.1). These results emphasize the importance of large sample sizes and hypothesis-driven studies to derive statistically reliable conclusions, and suggest that future meta-analyses will be needed to more accurately estimate the true effect sizes in this field.
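The abstract's multivariate framework, regularized regression evaluated by cross-validated R-squared, can be illustrated with a toy sketch. This is not the authors' pre-registered pipeline; simulated gray-matter features and an unrelated SNI outcome simply show why cross-validated R-squared stays near zero (or goes negative) when no true association exists. The sample size echoes the study's n = 92, but the feature count, regularization strength, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative, not the study's variables): n = 92
# participants, z-scored regional gray matter volumes, and a Social
# Network Index outcome that here is pure noise by construction.
n, p = 92, 40
gmv = rng.standard_normal((n, p))   # gray matter volume features
sni = rng.standard_normal(n)        # SNI, unrelated to the features

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

def loo_r2(X, y, alpha=10.0):
    """Leave-one-out cross-validated R^2 for ridge regression."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w = ridge_fit(X[mask], y[mask], alpha)
        preds[i] = X[i] @ w
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = loo_r2(gmv, sni)
print(f"cross-validated R^2 = {r2:.3f}")  # near zero or negative here
```

In-sample R-squared on the same data would look deceptively large with 40 features and 92 observations; holding each observation out, as above, is what protects against that, which is the point the study's all-R-squared-below-0.1 result turns on.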

Racial slurs “reclaimed” by the targeted group convey affiliation rather than derogation; authors found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals

Perceptions of Racial Slurs Used by Black Individuals Toward White Individuals: Derogation or Affiliation? Conor J. O’Dea, Donald A. Saucier. Journal of Language and Social Psychology, February 11, 2020. https://doi.org/10.1177/0261927X20904983

Abstract: Research suggests that racial slurs may be “reclaimed” by the targeted group to convey affiliation rather than derogation. Although it is most common in intragroup uses (e.g., “nigga” by a Black individual toward another Black individual), intergroup examples of slur reappropriation (e.g., “nigga” by a Black individual toward a White individual) are also common. However, majority and minority group members’ perceptions of intergroup slur reappropriation remain untested. We examined White (Study 1) and Black (Study 2) individuals’ perceptions of the reappropriated terms, “nigga” and “nigger” compared with a control term chosen to be a non-race-related, neutral term (“buddy”), a nonracial derogative term (“asshole”) and a White racial slur (“cracker”) used by a Black individual toward a White individual. We found that the intergroup use of reappropriated slurs was perceived quite positively by both White and Black individuals. Our findings have important implications for research on intergroup relations and the reappropriation of slurs.

Keywords: racial slurs, common in-group identity, social dominance theory, affiliation, derogation



Calling into question that contagious yawning is a signal of empathy: No evidence of familiarity, gender or prosociality biases in dogs

Contagious yawning is not a signal of empathy: no evidence of familiarity, gender or prosociality biases in dogs. Patrick Neilands et al. Proceedings of the Royal Society B: Biological Sciences, Volume 287, Issue 1920, February 5 2020. https://doi.org/10.1098/rspb.2019.2236

Abstract: Contagious yawning has been suggested to be a potential signal of empathy in non-human animals. However, few studies have been able to robustly test this claim. Here, we ran a Bayesian multilevel reanalysis of six studies of contagious yawning in dogs. This provided robust support for claims that contagious yawning is present in dogs, but found no evidence that dogs display either a familiarity or gender bias in contagious yawning, two predictions made by the contagious yawning–empathy hypothesis. Furthermore, in an experiment testing the prosociality bias, a novel prediction of the contagious yawning–empathy hypothesis, dogs did not yawn more in response to a prosocial demonstrator than to an antisocial demonstrator. As such, these strands of evidence suggest that contagious yawning, although present in dogs, is not mediated by empathetic mechanisms. This calls into question claims that contagious yawning is a signal of empathy in mammals.

4. Discussion

By combining the data from six different studies, we assembled the largest dataset used to date to examine the presence of contagious yawning in a non-human mammal. This allowed us to draw conclusions about the presence and absence of contagious yawning and the signatures predicted by the contagious yawning–empathy hypothesis with a greater level of certainty than by relying on individual studies alone. Our reanalysis shows that dogs do exhibit contagious yawning, showing higher probabilities and rates of yawning for yawning demonstrators compared to control demonstrators. This provides robust support for the claims that contagious yawning is present in dogs [35,49–51]. In order to test whether this contagious yawning is related to mechanisms underpinning empathy, we examined this dataset for evidence of the familiarity bias and gender bias. However, dogs in our reanalysis showed no evidence of either of these biases. Similarly, when we ran a novel experiment to look for a prosociality bias, we found that the dogs in our experiment were no more likely to yawn for prosocial demonstrators than antisocial demonstrators. Dogs, therefore, show no evidence for any of the familiarity, gender, or prosociality biases predicted by the contagious yawning–empathy hypothesis. This suggests that contagious yawning in dogs is not mediated by an empathy-related perception–action mechanism [52–54]. The presence of contagious yawning in non-human animals, therefore, cannot be assumed to be evidence for a perception–action mechanism shared between humans and other mammals, as has been previously proposed [1,35,41,58]. That is not to say that some non-human animals do not experience some form of empathy, but that contagious yawning cannot be taken as a diagnostic signal for the presence of these empathetic processes.
Furthermore, these results, alongside the arguments put forward by Massen & Gallup in their recent review [37], bring into question the validity of the contagious yawning–empathy hypothesis more broadly.
It is important to acknowledge several caveats to our conclusions. Firstly, in both our reanalysis and experiment, the subjects were primarily responding to interspecific yawns from human demonstrators. While it is possible that dogs would respond differently to conspecific and interspecific yawning, there are several reasons to believe that this is not the case. Research in other species such as chimpanzees suggests that they respond similarly to conspecific and interspecific yawns [41], and, in our reanalysis, controlling for demonstrator type did not improve model fit. Nevertheless, more rigorous comparisons between how dogs respond to conspecific and interspecific yawning would be a useful future line of research. Secondly, it is important to note that the familiarity, gender, and prosociality biases are indirect measures of empathy [37]. As such, care needs to be taken in interpreting these biases and there remains substantial debate over how to do so. For example, it has been argued that both the tendency for children with ASD to be less prone to contagious yawning [83] and the familiarity bias [37,84,85] can be explained in terms of differences in attending to yawners rather than differences in empathetic response. Similarly, the gender bias reported in humans [29] is not straightforward to interpret and there is debate over whether it simply reflects a false positive in the literature [33,34]. By contrast, proponents of the contagious yawning–empathy hypothesis argue that the familiarity bias continues to be found even when controlling for differences in subjects' attention [40,41] and that the negative results for the gender bias in previous studies reflect methodological issues with prior experiments [34]. Furthermore, although alternative hypotheses such as the attentional hypothesis could explain the presence of a single bias such as the familiarity bias, only the contagious yawning–empathy hypothesis predicts the presence of all three biases.
As such, testing for all three biases represents a powerful test of the contagious yawning–empathy hypothesis. Finally, searching for a novel signature, the prosociality bias, required a novel experimental methodology where dogs were exposed to a prosocial experimenter that interacted with them and an antisocial experimenter that ignored them. Previous work which used a similar methodology demonstrated that dogs do show a preference for the prosocial demonstrator [73], and so if the contagious yawning–empathy hypothesis is correct, dogs should have reacted with increased yawning to the prosocial demonstrator. However, further work would be useful in confirming the presence or absence of the prosociality bias in dogs and other species such as humans.
Research into contagious yawning has been dominated by the contagious yawning–empathy debate [37]. However, contagious yawning is an interesting phenomenon in its own right as its evolutionary roots and ultimate function remain a mystery [20]. Contagious yawning in animals may be the result of stress [54,57], an affiliation strategy [67], a means of communication [61], or a mechanism to improve collective vigilance within groups [37,68,69] rather than being related to empathy via a perception–action mechanism. Future research into contagious yawning should include a greater focus on testing between these and other hypotheses. For example, the affiliation hypothesis might predict that contagious yawning should be seen more frequently during reconciliation periods after conflict while the collective vigilance hypothesis posits that contagious yawning should increase in response to external disturbances [37,86]. However, it is important to note that these theories are not necessarily mutually exclusive [87] and that factors such as stress appear to influence yawning propensity in complex ways [88,89]. Additionally, an important next step is to consider evidence of contagious yawning outside of mammals. While there has been some work looking at contagious yawning in budgerigars [86,90] and tortoises [91], research has otherwise been sparse outside of the mammalian class.
Future research would benefit from systematically testing contagious yawning across multiple species. One barrier to such projects is that studying a range of different species often requires different experimental set-ups to make such testing feasible. There is a concern that such a range of methodological approaches may make cross-species and cross-study comparisons difficult, if not impossible [35,66]. However, our finding that the effect of treatment on yawning probabilities and rates remains stable when controlling for various aspects of study design suggests that the presence of contagious yawning is relatively robust to differences in experimental design. As such, while it is important to use broadly similar designs (for instance, comparing animals’ yawning rates when exposed to either a yawning demonstrator or control demonstrator), there could be considerable flexibility in other aspects of study design. For example, our results suggest that animals' yawning probabilities and rates to either live demonstrators or recorded demonstrators are comparable. Therefore, our findings suggest that more ambitious cross-species work can be carried out with confidence in the validity of the subsequent comparisons.
To conclude, our results provide robust support for the hypothesis that contagious yawning is found in dogs, the first non-human species of mammal where it has been clearly shown outside of chimpanzees. However, we found no evidence that dogs yawn more in response to either familiar human yawners compared to unfamiliar human yawners, or to prosocial human yawners compared to antisocial human yawners. Additionally, we found no evidence that female dogs were more likely to yawn in response to a yawning demonstrator than male dogs. As such, these findings cast doubt on the widespread assertion that contagious yawning is mediated by the same perception–action mechanism as empathy [1,6,35,41,58]. Instead, they support recent claims that there is no link between contagious yawning and empathetic processes [37,67] and underline the importance of developing more direct measures of empathy in non-human animals [37,92]. However, while our results suggest that researchers cannot rely on contagious yawning as a diagnostic signal of empathy, our additional findings that the effect of contagious yawning appears to be robust to variations in experimental methods suggest that cross-species comparisons may be a powerful way to disentangle the evolutionary roots of this behaviour.

Of four predictors thought to be important for subjective well-being (marriage, employment, prosociality, and life meaning), marriage showed only very small associations, while employment had larger effects that peaked around age 50

Subjective Well-Being Around the World: Trends and Predictors Across the Life Span. Andrew T. Jebb et al. Psychological Science, February 11, 2020. https://doi.org/10.1177/0956797619898826

Abstract: Using representative cross-sections from 166 nations (more than 1.7 million respondents), we examined differences in three measures of subjective well-being over the life span. Globally, and in the individual regions of the world, we found only very small differences in life satisfaction and negative affect. By contrast, decreases in positive affect were larger. We then examined four important predictors of subjective well-being and how their associations changed: marriage, employment, prosociality, and life meaning. These predictors were typically associated with higher subjective well-being over the life span in every world region. Marriage showed only very small associations for the three outcomes, whereas employment had larger effects that peaked around age 50 years. Prosociality had practically significant associations only with positive affect, and life meaning had strong, consistent associations with all subjective-well-being measures across regions and ages. These findings enhance our understanding of subjective-well-being patterns and what matters for subjective well-being across the life span.

Keywords: subjective well-being, cross-cultural, aging, life meaning, prosocial behavior

You may be more original than you think: Predictable biases in self-assessment of originality

You may be more original than you think: Predictable biases in self-assessment of originality. Yael Sidi et al. Acta Psychologica, Volume 203, February 2020, 103002. https://doi.org/10.1016/j.actpsy.2019.103002

Highlights
•    Self-judgments of originality are sensitive to the serial order effect.
•    Originality judgments reveal under-estimation robustly and resiliently.
•    People discriminate well between more and less original ideas.
•    There is a double dissociation between actual originality and originality judgments.

Abstract: How accurate are individuals in judging the originality of their own ideas? Most metacognitive research has focused on well-defined tasks, such as learning, memory, and problem solving, providing limited insight into ill-defined tasks. The present study introduces a novel metacognitive self-judgment of originality, defined as assessments of the uniqueness of an idea in a given context. In three experiments, we examined the reliability, potential biases, and factors affecting originality judgments. Using an ideation task, designed to assess the ability to generate multiple divergent ideas, we show that people accurately acknowledge the serial order effect—judging later ideas as more original than earlier ideas. However, they systematically underestimate their ideas' originality. We employed a manipulation for affecting actual originality level, which did not affect originality judgments, and another one designed to affect originality judgments, which did not affect actual originality performance. This double dissociation between judgments and performance calls for future research to expose additional factors underlying originality judgments.

Contrary to common views, use of social media and online portals fosters more visits to news sites and a greater variety of news sites visited

How social network sites and other online intermediaries increase exposure to news. Michael Scharkow, Frank Mangold, Sebastian Stier, and Johannes Breuer. PNAS February 11, 2020 117 (6) 2761-2763; January 27, 2020. https://doi.org/10.1073/pnas.1918279117

Abstract: Research has prominently assumed that social media and web portals that aggregate news restrict the diversity of content that users are exposed to by tailoring news diets toward the users’ preferences. In our empirical test of this argument, we apply a random-effects within–between model to two large representative datasets of individual web browsing histories. This approach allows us to better encapsulate the effects of social media and other intermediaries on news exposure. We find strong evidence that intermediaries foster more varied online news diets. The results call into question fears about the vanishing potential for incidental news exposure in digital media environments.

Keywords: news exposure, online media use, web tracking data

People can come across news and other internet offerings in a variety of ways, for example, by visiting their favorite websites, using search engines, or following recommendations from contacts on social media (1). These routes do not necessarily lead people to the same venues. While traditionally considered as an important ingredient of well-functioning democratic societies, getting news as a byproduct of other media-related activities has been assumed to wane in the online sphere. Intermediaries like social networking sites (SNS) and search engines are regarded with particular suspicion, often criticized for fostering news avoidance and selective exposure (2). This assumption has been, perhaps most prominently, ingrained in the “filter bubble” thesis, positing that search and recommendation algorithms bias news diets toward users’ preferences and, thus, decrease content diversity (3). On the other hand, incidental news exposure (INE) due to other online activities has received much scholarly attention for several decades (4). Contrary to widely held assumptions, recent INE research found that SNS users have more rather than less diverse news diets than nonusers. For example, one study showed that SNS users consumed almost twice the number of news outlets in the previous week as did nonusers (2). Similar results emerged regarding the use of web aggregators (portals) and search engines, although people may use search engines in a more goal-driven fashion compared to SNS (1).

In previous studies, SNS-based news exposure was typically measured by asking respondents whether they are (unintentionally) exposed to news via social media. Like many survey studies, this approach naturally suffers from the limited accuracy and reliability of self-reports (5). More specifically, recent work has criticized self-report measures for being biased toward active news choices and routine use (6) and being particularly inaccurate when people access news via intermediaries (7). To alleviate these limitations, some studies have used log data to estimate the quantity and quality of online news exposure, for example, in terms of exposure to cross-cutting news (8, 9). However, these studies have focused only on single social media platforms instead of different intermediary routes to news. Other recent studies (1, 10) have traced direct and indirect pathways to online news using browser logs, but have not distinguished nonregular—and therefore possibly incidental—news exposure from regular, typically more intentional or routinized forms of news consumption online. In other words, the question whether visiting SNS more often (than usual) actually leads to more varied news exposure (than usual) essentially remains unanswered. This problem concerns almost all studies on the use and effects of online media, and has received considerable attention in recent communication research (11). We argue that positive within-person effects of visiting intermediary sites on online news exposure are a necessary (although not sufficient, since even nonregular visits could be intentional) precondition for INE, and, therefore, testing for such effects is a useful endeavor. We address this question using a statistical model that distinguishes between stable between-person differences and within-person effects, that is, the random-effects within–between (REWB) model (12). 
Investigating within-person effects has additional value by safeguarding causal inferences against bias due to (previously) unmeasured person-level confounders. We apply the REWB model to two large, representative tracking datasets of individual-level browsing behavior in Germany, collected independently in 2012 and 2018. This allows us not only to compare within- and between-person effects but also to analyze possible changes in the effects of SNS (Facebook, Twitter) and intermediaries (Google, web portals) over recent years. Specifically, we investigate their effects on the amount and variety of online news exposure. Using this approach enables us to replicate and extend two recent survey studies (2, 13) that looked at the effects of SNS, web portals, and search engines on 1) overall online news exposure and 2) the diversity of people’s online news diets.
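The core move of the REWB model, splitting a time-varying predictor into a stable between-person component (each person's mean) and a within-person component (deviations from that mean), can be sketched on simulated panel data. This is a simplified illustration, not the authors' specification: a full REWB model adds random intercepts and slopes, and every variable name and coefficient below is invented. The toy setup builds in a within-person effect of SNS visits on news outlets while a stable person-level trait confounds the between-person association, so the two coefficients diverge.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel: 200 users observed over 8 waves. SNS visits raise news
# exposure within person (true coef +0.5), but an unobserved stable
# trait drives both variables at the person level, confounding the
# between-person association. All values are illustrative.
n_users, n_waves = 200, 8
user = np.repeat(np.arange(n_users), n_waves)
trait = rng.standard_normal(n_users)               # person-level confounder
sns_visits = trait[user] + rng.standard_normal(n_users * n_waves)
news_outlets = (0.5 * sns_visits - 0.5 * trait[user]
                + rng.standard_normal(n_users * n_waves))

# Within-between decomposition: between = the person's own mean,
# within = deviation from that mean (orthogonal by construction).
person_mean = np.array([sns_visits[user == u].mean() for u in range(n_users)])
between = person_mean[user]
within = sns_visits - between

# Plain OLS on both components; a full REWB model would add random
# effects, but the split alone already separates the two estimates.
X = np.column_stack([np.ones_like(within), within, between])
beta = np.linalg.lstsq(X, news_outlets, rcond=None)[0]
print(f"within-person effect:  {beta[1]:.2f}")   # close to +0.5
print(f"between-person effect: {beta[2]:.2f}")   # close to 0, confounded away
```

A regression on raw `sns_visits` would blend these two numbers into a single misleading coefficient; keeping them apart is exactly why the within-person estimate is the relevant one for the incidental-exposure question and why it is robust to unmeasured person-level confounders.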


Conclusion
We used large-scale observational data to avoid the limited reliability and validity of self-reports on news exposure. Leveraging the potential of such data with the REWB model, our study provides strong evidence that getting more and more-diverse news as a consequence of other media-related activities is a common phenomenon in the online sphere. The findings contradict widely held concerns that social media and web portals specifically contribute to news avoidance and restrict the diversity of news diets. Note that we followed previous studies and measured the variety of news diets by counting the number of outlets visited. Given the overall low frequency of news visits, intermediaries add diversity to the news diets of the large majority of participants with a small news repertoire (2). While we cannot say that outlet variety always equals viewpoint variety, prior research has shown that using a larger number of online news sources typically translates into more-diverse overall news exposure (15). In contrast to previous studies (9, 10), we cannot quantify diversity in terms of cross-cutting exposure, but note that previous studies have shown little evidence for strong partisan alignments of news audiences in Germany (16) on the outlet level, so that variety would have to be measured on the level of individual news items, which requires URL-level tracking and content analysis data. In addition, future combinations of web tracking with experience sampling surveys are needed to disentangle in what instances nonregular news use is entirely nonintentional and how the respective contents specifically affect the diversity in news diets.
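The outlet-counting measure of news-diet variety used above can be made concrete with a toy sketch, assuming browsing logs reduced to (user, visited domain) pairs. The outlet list and log entries are invented for illustration and are not the study's data.

```python
from collections import defaultdict

# Hypothetical set of news domains; real studies would use a curated list.
NEWS_OUTLETS = {"spiegel.de", "faz.net", "bild.de", "zeit.de"}

# Toy browsing log as (user, domain) visit records.
log = [
    ("u1", "facebook.com"), ("u1", "spiegel.de"), ("u1", "spiegel.de"),
    ("u1", "zeit.de"),
    ("u2", "google.com"), ("u2", "bild.de"),
    ("u3", "shopping.example"),   # a user with no news visits at all
]

def news_variety(log):
    """Per-user news-diet variety: the number of distinct news
    outlets visited, ignoring repeat visits to the same outlet."""
    outlets = defaultdict(set)
    for user, domain in log:
        if domain in NEWS_OUTLETS:
            outlets[user].add(domain)
    return {user: len(domains) for user, domains in outlets.items()}

print(news_variety(log))   # {'u1': 2, 'u2': 1}
```

As the conclusion notes, this outlet-level count says nothing about viewpoint diversity within an outlet; measuring that would require URL-level tracking plus content analysis of individual news items.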

Tuesday, February 11, 2020

We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people; but in nonreligious cultural contexts, religiosity conferred no such longevity

Ebert, T., Gebauer, J. E., Talman, J. R., & Rentfrow, P. J. (2020). Religious people only live longer in religious cultural contexts: A gravestone analysis. Journal of Personality and Social Psychology, Feb 2020. https://doi.org/10.1037/pspa0000187

Abstract: Religious people live longer than nonreligious people, according to a staple of social science research. Yet, are those longevity benefits an inherent feature of religiosity? To find out, we coded gravestone inscriptions and imagery to assess the religiosity and longevity of 6,400 deceased people from religious and nonreligious U.S. counties. We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people. In nonreligious cultural contexts, however, religiosity conferred no such longevity benefits. Evidently, a longer life is not an inherent feature of religiosity. Instead, religious people only live longer in religious cultural contexts where religiosity is valued. Our study answers a fundamental question on the nature of religiosity and showcases the scientific potential of gravestone analyses.


Managing Systemic Financial Crises: New Lessons and Lessons Relearned

Managing Systemic Financial Crises: New Lessons and Lessons Relearned. Marina Moretti; Marc C Dobler; Alvaro Piris. IMF Departmental Paper No. 20/05, February 11, 2020. https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2020/02/10/Managing-Systemic-Financial-Crises-New-Lessons-and-Lessons-Relearned-48626

Chapter 1 Introduction
Systemic financial crises have been a recurring feature of economies in modern times. Panics, wherein collapsing trust in the banking system and creditor runs have significant negative consequences for economic activity—rare events in any one country—have occurred relatively frequently across the IMF membership. Common causes include high leverage, booming credit, an erosion of underwriting standards, exposure to rapidly rising property prices and other asset bubbles, excessive exposure to the government, inadequate supervision, and often a high external current account deficit. Financial distress typically lasts several years and is associated with large economic contractions and high fiscal costs (Laeven and Valencia 2018). Figure 1 shows the prevalence of systemic financial crises over the past 30 years, including the number of crisis episodes each year. The global financial crisis (GFC) was just such a panic, albeit one that transcended national and regional boundaries.
IMF staff experience in helping countries manage systemic banking crises has evolved over time. Major financial sector problems have been addressed in the context of IMF-supported programs primarily in emerging market economies, developing countries and, more recently, in some advanced economies during the GFC. The IMF approach to managing these events was summarized in a 2003 paper (Hoelscher and Quintyn 2003) before there was international consensus on legal frameworks, preparedness, and policy approaches, and when practices varied widely across the membership. The principles outlined in that paper built on staff experience in a range of countries—notably, Indonesia, Republic of Korea, Russia, and Thailand in the late 1990s; and Argentina, Ecuador, Turkey, and Uruguay in the early 2000s. It emphasized that managing a systemic banking crisis is a complex, multiyear process and presented tools available as part of a comprehensive framework for addressing systemic banking problems while minimizing taxpayers’ costs. Although these core concepts and principles remain largely valid today, they merit a revisit following the experiences and lessons learned from the GFC.
The GFC shared similarities with past systemic crises, albeit with an impact felt well beyond directly affected countries (Claessens and others 2010). As in previous episodes of financial distress, the countries most affected by the GFC—the US starting in 2008 and several countries in Europe—saw creditor runs and contagion across institutions, significant fiscal and quasi-fiscal outlays, and a sharp contraction in credit and economic activity (see Figure 1). The impact was felt more widely across the global economy because the crisis originated in advanced economies with large financial sectors. These countries accounted for a substantial portion of global economic output, trade, and financial activity and were home to internationally active financial firms providing significant cross-border services. The speed of transmission of financial distress across borders was unprecedented, given the complex and opaque financial linkages between financial firms. These factors introduced new challenges, as they impacted the effectiveness of many existing crisis management tools.
Reflecting these new challenges, individual country responses during the GFC differed from past experiences in important respects (Table 1):
The size and scope of liquidity support provided by major central banks were unprecedented. More liquidity was provided to more counterparties for longer periods against a wider range of collateral. Much of this support was through liquidity facilities open to all market participants, while some was provided as emergency liquidity assistance (ELA) to individual institutions. This occurred against the backdrop of accommodative monetary policy and quantitative easing.
Explicit liability guarantees were more selectively deployed than in past crises, when blanket guarantees covering a wide set of liabilities were more commonly used by authorities. During the GFC (with some notable exceptions), explicit liability guarantees typically applied only to specific institutions, new debt issuance, specific asset classes, or were capped (for example, a higher level of deposit insurance). However, implicit guarantees were widespread, as demonstrated by the extensive public solvency support provided to financial institutions and markets. Systemic financial institutions were rarely liquidated or resolved,1 and, of those that were, some proved destabilizing for the broader financial system. This trend reflected in part inadequate powers to resolve such firms in an orderly way.
Difficulties in achieving effective cross-border cooperation in resolution between authorities in different countries came to the fore, given the global footprint of some weak institutions. The lack of mechanisms to enforce resolution measures on a cross-border basis and cooperate more broadly led, in some cases, to the breakup of cross-border groups into national components.
More emphasis was placed on banks’ ability to manage nonperforming assets internally or through market disposals, with less reliance on centralized asset management companies (AMCs)—public agencies that purchase and manage nonperforming loans (NPLs). Protracted weak growth in some countries, the large scale of the problem, and gaps in legal frameworks also meant that progress in addressing distressed assets and deleveraging private sector balance sheets was slower in some countries than in previous crises.

Table 1. Lessons on the Design of the Financial Safety Net

What is Similar? | What is New?
• Escalating early intervention and enforcement measures | • More intrusive supervision and early intervention powers
• Special resolution regimes for banks | • A new international standard on resolution regimes for systemic financial institutions, requiring a range of resolution powers and tools
• Establishing deposit insurance (if prior conditions enable)1 with adequate ex ante funding, available to fund resolution on a least-cost basis | • An international standard on deposit insurance, requiring ex ante funding and no coinsurance; desirability of depositor preference
• Capacity to provide emergency liquidity to banks, at the discretion of the central bank | • Liquidity assistance frameworks with broader eligibility conditions, collateral, and safeguards

1 IMF staff does not recommend establishing a deposit insurance system in countries with weak banking supervision, ineffective resolution regimes, and identifiably weak banks. Doing so would expose a nascent scheme to significant risk (when it has yet to build adequate funding and operational capacity) and could undermine depositor confidence.
The GFC was a watershed. Policymakers were confronted with gaps and weaknesses in their legal and policy frameworks for addressing bank liquidity and solvency problems, in their understanding of systemic risk in institutions and markets, and in domestic and international cooperation. Under these constraints, the policy responses that were deployed put substantial public resources at risk. While the responses were ultimately successful in stabilizing financial systems and the macroeconomy, the fiscal and economic costs were high. The far-reaching impact of the GFC provided impetus for a major overhaul of financial sector oversight (Financial Stability Forum 2008; IMF 2018). The regulatory reform agenda agreed to by the Group of Twenty leaders in 2009 elevated the discussions to the highest policy level and kept international attention focused on establishing a stronger set of globally consistent rules. The new architecture aimed to (1) enhance capital buffers and reduce leverage and financial procyclicality; (2) contain funding mismatches and currency risk; (3) enhance the regulation and supervision of large and interconnected institutions, including by expanding the supervisory perimeter; (4) improve the supervision of a complex financial system; (5) align governance and compensation practices of banks with prudent risk taking; (6) overhaul resolution regimes of large financial institutions; and (7) introduce macroprudential policies. Through its multilateral and bilateral surveillance of its membership, including the Financial Sector Assessment Program (FSAP), Article IV missions, and its Global Financial Stability Reports, the IMF has contributed to implementing the regulatory reform agenda.
This paper summarizes the general principles, strategies, and techniques for preparing for and managing systemic banking crises, based on the views and experience of IMF staff, considering developments since the GFC. The paper does not summarize the causes of the GFC, its evolution, or the policy responses adopted; these have been well documented elsewhere.2 Moreover, it does not cover the full reform agenda since the crisis; rather, it covers only two parts—one on key elements of a legal and operational framework for crisis preparedness (the “financial safety net”) and the other on operational strategies and techniques to manage systemic crises if they occur. Each section summarizes relevant lessons learned during the GFC and other recent episodes of financial distress, merging them with preexisting advice to give a complete picture of the main elements of IMF staff advice to member countries on operational aspects of crisis preparedness and management. The advice builds on and is consistent with international financial standards, tailored to country-specific circumstances based on IMF staff crisis experience. It recognizes that every crisis is different and that managing systemic failures is exceptionally challenging, both operationally and politically. Nonetheless, better-prepared authorities are less likely to resort to bailing out bank shareholders and creditors when facing such circumstances.
Part I, on crisis preparedness, outlines the design and operational features of an effective financial safety net. It discusses how staff advice on these issues has evolved, drawing from the international standards and good practice that emerged in the aftermath of the GFC. Effective financial safety nets play an important role in minimizing the risk of systemwide financial distress—by increasing the likelihood that failing financial institutions can be resolved without triggering financial instability. However, they cannot eliminate that risk, particularly at times of severe stress.
Part II, on crisis management, discusses aspects of a policy response to a full-blown banking crisis. It details the evolution of IMF advice in light of what worked well—or less well—during the GFC, reflecting the experience of IMF staff in actual crisis situations. The narrative is organized around policies for dealing with three distinct aspects3 of a systemic banking crisis:

*  Containment—strategies and techniques to stem creditor runs and stabilize financial sector liquidity in the acute phase of panic and high uncertainty. This phase is typically short-lived, with an escalating policy response as needed to avoid the collapse of the financial system.
*  Restructuring and resolution—strategies and techniques to diagnose bank soundness and viability, and to recapitalize or resolve failing financial institutions, which are typically implemented over the following year or more, depending on the severity of the situation.
*  Dealing with distressed assets—strategies and techniques to clean up private sector balance sheets that first identify and then remove impediments to effective resolution of distressed assets, with implementation likely to stretch over several years.

IMF member countries have continued to cope with financial panics and widespread financial sector weakness. The IMF remains fully engaged on these issues, often in the context of IMF-supported programs, with a significant focus on managing systemic problems and financial sector reforms. Staff continue to provide support and advice on supervisory practice, resolution, deposit insurance, and emergency liquidity in IMF member countries, learning from experience and adapting policy advice to developments and country-specific circumstances.


Box 9. Dealing with Excessive Related-Party Exposures

Excessive related-party exposures present a major risk to financial stability. Related-party loans that go unreported conceal credit and concentration risk and may be on preferred terms, reducing bank profitability and solvency. Persistently high related-party exposures also hold down economic growth by tying up capital that could otherwise be used to provide lending to legitimate, creditworthy businesses on an arm's-length basis. Related-party exposures complicate bank resolution, as shareholders whose rights have been suspended have an incentive to default on their loans to the bank.

Opaque bank ownership greatly facilitates the hiding of related-party exposures and transactions. Opaque ownership is associated with poor governance, AML/CFT violations, and fraudulent activities. Banks without clear ultimate beneficial owners cannot count on shareholder support in times of crisis, and the quality of their capital cannot be verified. Moreover, unknown owners cannot be held accountable for criminal actions leading to a bank’s failure.
Resolving these problems requires a three-pillar approach. Legal reforms are needed to lay the foundation for targeted bank diagnostics and effective enforcement actions:

*  Legal reforms to introduce international standards for transparent disclosure and monitoring of bank owners and related parties—including prudent limits, strict conflict of interest rules on the processes and procedures for dealing with related parties, and escalating enforcement measures. Non-transparent ownership should be made a legal ground for license revocation or resolution, and the supervisor authorized to presume a related party under certain circumstances. This shifts from supervisors to banks the “burden of proof”—to demonstrate that a suspicious transaction is not with a related party.

*  Bank diagnostics are targeted at identifying ultimate beneficial owners and related-party exposures and transactions and assessing compliance with prudential lending limits for related-party and large exposures. The criteria for identification include control, economic dependency, and acting in concert. Identification of related-party transactions should also consider their risk-related features, such as the existence of preferential terms, the quality of documentation, and internal controls over the transactions.

*  Enforcement actions are taken to (1) remove unsuitable bank shareholders—that is, shareholders whose ultimate beneficial owner is not identified, or who are otherwise found to be unsuitable; and (2) unwind excessive related-party exposures through repayment or disposal of the exposure, or resolution of the relationship (change in ownership of the bank or the borrower).

The three-pillar approach is best implemented in the context of a comprehensive financial sector strategy. There may not be enough time to implement legal reforms during early intervention or the resolution of systemic banks. In such situations, suspected related-party exposures and liabilities must be swiftly identified and ringfenced. Once the system is stabilized, however, the three-pillar approach should be implemented for all banks (including those in liquidation).

Source: Karlsdóttir and others (forthcoming).

Those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans

Musical taste, in-group favoritism, and social identity theory: Re-testing the predictions of the self-esteem hypothesis. Adam J Lonsdale. Psychology of Music, February 10, 2020. https://doi.org/10.1177/0305735619899158

Abstract: Musical taste is thought to function as a social “badge” of group membership, contributing to an individual’s sense of social identity. Following from this, social identity theory predicts that individuals should perceive those who share their musical tastes more favorably than those who do not. Social identity theory also asserts that this in-group favoritism is motivated by the need to achieve, maintain, or enhance a positive social identity and self-esteem (i.e., the “self-esteem hypothesis”). The findings of the present study supported both of these predictions. Participants rated fans of their favorite musical style significantly more favorably than fans of their least favorite musical style. The present findings also offer, for the first time, evidence of significant positive correlations between an individual’s self-esteem and the in-group bias shown to those who share their musical tastes. However, significant relationships with in-group identification also indicate that self-esteem is unlikely to be the sole factor responsible for this apparent in-group bias. Together these findings suggest that those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans.

Keywords: in-group bias, in-group favoritism, musical taste, self-esteem, social identity


The higher the participants rated their own IQ, the higher their ratings of their own EQ (emotional intelligence), attractiveness, and health; men overestimated their IQ, attractiveness, and health more than women did, but not their EQ

Correlates of Self-Estimated Intelligence. Adrian Furnham and Simmy Grover. J. Intell. 2020, 8(1), 6; February 10 2020. https://www.mdpi.com/2079-3200/8/1/6

Abstract: This paper reports two studies examining correlates of self-estimated intelligence (SEI). In the first, 517 participants completed a measure of SEI as well as self-estimated emotional intelligence (SEEQ), physical attractiveness, health, and other ratings. Males rated their IQ higher (74.12 vs. 71.55) but EQ lower (68.22 vs. 71.81) than females but there were no differences in their ratings of physical health in Study 1. Correlations showed for all participants that the higher they rated their IQ, the higher their ratings of EQ, attractiveness, and health. A regression of self-estimated intelligence onto three demographic, three self-ratings and three beliefs factors accounted for 30% of the variance. Religious, educated males who did not believe in alternative medicine gave higher SEI scores. The second study partly replicated the first, with an N = 475. Again, males rated their IQ higher (106.88 vs. 100.71) than females, but no difference was found for EQ (103.16 vs. 103.74). Males rated both their attractiveness (54.79 vs. 49.81) and health (61.24 vs. 55.49) higher than females. An objective test-based cognitive ability and SEI were correlated r = 0.30. Correlations showed, as in Study 1, positive relationships between all self-ratings. A regression showed the strongest correlates of SEI were IQ, sex and positive self-ratings. Implications and limitations are noted.

Keywords: self-estimated; intelligence; sex differences; attitudes



Non-reproducible: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals

Conservatives and liberals have similar physiological responses to threats. Bert N. Bakker, Gijs Schumacher, Claire Gothreau & Kevin Arceneaux. Nature Human Behaviour, February 10 2020. https://www.nature.com/articles/s41562-020-0823-z

Abstract: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals. This work launched an approach aimed at uncovering the biological roots of ideology. Despite wide-ranging scientific and popular impact, independent laboratories have not replicated the study. We conducted a pre-registered direct replication (n = 202) and conceptual replications in the United States (n = 352) and the Netherlands (n = 81). Our analyses do not support the conclusions of the original study, nor do we find evidence for broader claims regarding the effect of disgust and the existence of a physiological trait. Rather than studying unconscious responses as the real predispositions, alignment between conscious and unconscious responses promises deeper insights into the emotional roots of ideology.

People rated their own faces as more attractive than others rated them, no matter if original or artificially rendered more masculine or feminine

Influence of sexual dimorphism on the attractiveness evaluation of one’s own face. Zhaoyi Li, Zhiguo Hu, Hongyan Liu. Vision Research, Volume 168, March 2020, Pages 1-8. https://doi.org/10.1016/j.visres.2020.01.005

Abstract: The present study aimed to explore the influence of sexual dimorphism on the evaluation of the attractiveness of one’s own face. In the experiment, a masculinized and a feminized version of the self-faces of the participants were obtained by transferring the original faces toward the average male or female face. The participants were required to rate the attractiveness of three types (original, masculine, feminine) of their own faces and the other participants’ faces in same-sex and opposite-sex contexts. The results revealed that the participants rated their own faces as more attractive than other participants rated them regardless of the sexually dimorphic type (original, masculine, feminine) or the evaluation context. More importantly, the male and female participants showed different preferences for the three types of self-faces. Specifically, in the same-sex context, the female participants rated their own original faces as significantly more attractive than the masculine and feminine faces, and the male participants rated their own masculine faces as significantly more attractive than the feminine faces; while in the opposite-sex context, no significant difference among the attractiveness scores of the three types of self-faces was found in both the male and female participants. The present study provides empirical evidence of the influence of sexual dimorphism on the evaluation of the attractiveness of self-faces.


We examined perceptions of the Dark Triad traits in 6 occupations; participants believed musicians & lawyers should be high in the Dark Triad, and teachers should be high in narcissism, but low in Machiavellianism & psychopathy

Insert a joke about lawyers: Evaluating preferences for the Dark Triad traits in six occupations. Cameron S. Kay, Gerard Saucier. Personality and Individual Differences, Volume 159, 1 June 2020, 109863. https://doi.org/10.1016/j.paid.2020.109863

Highlights
•    We examined perceptions of the Dark Triad traits in six occupations.
•    Participants believed musicians and lawyers should be high in the Dark Triad.
•    Participants believed teachers should be high in narcissism.
•    Overall, participants believed others should have the same dark traits they have.

Abstract: The current research examined how perceptions of the Dark Triad traits vary across occupations. Results from two studies (total N = 933) suggested that participants believe it is acceptable, if not advantageous, for lawyers and musicians to be high in the Dark Triad traits. Participants, likewise, indicated that teachers should be high in narcissism but low in Machiavellianism and psychopathy. Potentially, the performative aspects of narcissism are considered an asset for teachers, while Machiavellianism and psychopathy are considered a liability. The findings further indicated that, regardless of the occupation in question, people high in a specific Dark Triad trait believe others should also be high in that same trait. All results are considered in the context of the attraction-selection-attrition model.

Cultured meat safety: Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs; but with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells

The Myth of Cultured Meat: A Review. Sghaier Chriki and Jean-François Hocquette. Front. Nutr., February 7 2020. https://doi.org/10.3389/fnut.2020.00007

Abstract: To satisfy the increasing demand for food by the growing human population, cultured meat (also called in vitro, artificial or lab-grown meat) is presented by its advocates as a good alternative for consumers who want to be more responsible but do not wish to change their diet. This review aims to update the current knowledge on this subject by focusing on recent publications and issues not well described previously. The main conclusion is that no major advances were observed despite many new publications. Indeed, in terms of technical issues, research is still required to optimize cell culture methodology. It is also almost impossible to reproduce the diversity of meats derived from various species, breeds and cuts. Although these are not yet known, we speculated on the potential health benefits and drawbacks of cultured meat. Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs. On the other hand, with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells. Likewise, the control of its nutritional composition is still unclear, especially for micronutrients and iron. Regarding environmental issues, the potential advantages of cultured meat for greenhouse gas emissions are a matter of controversy, although less land will be used compared to livestock, ruminants in particular. However, more criteria need to be taken into account for a comparison with current meat production. Cultured meat will have to compete with other meat substitutes, especially plant-based alternatives. Consumer acceptance will be strongly influenced by many factors and consumers seem to dislike unnatural food. Ethically, cultured meat aims to use considerably fewer animals than conventional livestock farming. However, some animals will still have to be reared to harvest cells for the production of in vitro meat. 
Finally, we discussed in this review the nebulous status of cultured meat from a religious point of view. Indeed, religious authorities are still debating the question of whether in vitro meat is Kosher or Halal (e.g., compliant with Jewish or Islamic dietary laws).

---
Health and Safety

Advocates of in vitro meat claim that it is safer than conventional meat, based on the fact that lab-grown meat is produced in an environment fully controlled by researchers or producers, without any other organism, whereas conventional meat is part of an animal in contact with the external world, although each tissue (including muscles) is protected by the skin and/or by mucosa. Indeed, without any digestive organs nearby (despite the fact that conventional meat is generally protected from this), and therefore without any potential contamination at slaughter, cultured muscle cells do not have the same opportunity to encounter intestinal pathogens such as E. coli, Salmonella or Campylobacter (10), three pathogens that are responsible for millions of episodes of illness each year (19). However, we can argue that scientists or manufacturers are never in a position to control everything and any mistake or oversight may have dramatic consequences in the event of a health problem. This occurs frequently nowadays during industrial production of chopped meat.

Another positive aspect related to the safety of cultured meat is that it is not produced from animals raised in a confined space, so that the risk of an outbreak is eliminated and there is no need for costly vaccinations against diseases like influenza. On the other hand, we can argue that it is the cells, not the animals, which live in high numbers in incubators to produce cultured meat. Unfortunately, we do not know all the consequences of meat culture for public health, as in vitro meat is a new product. Some authors argue that the process of cell culture is never perfectly controlled and that some unexpected biological mechanisms may occur. For instance, given the great number of cell multiplications taking place, some dysregulation of cell lines is likely to occur as happens in cancer cells, although we can imagine that deregulated cell lines can be eliminated for production or consumption. This may have unknown potential effects on the muscle structure and possibly on human metabolism and health when in vitro meat is consumed (21).

Antibiotic resistance is known as one of the major problems facing livestock (7). In comparison, cultured meat is kept in a controlled environment and close monitoring can easily stop any sign of infection. Nevertheless, if antibiotics are added to prevent any contamination, even occasionally to stop early contamination and illness, this argument is less convincing.

Moreover, it has been suggested that the nutritional content of cultured meat can be controlled by adjusting fat composites used in the medium of production. Indeed, the ratio between saturated fatty acids and polyunsaturated fatty acids can be easily controlled. Saturated fats can be replaced by other types of fats, such as omega-3, but the risk of higher rancidity has to be controlled. However, new strategies have been developed to increase the content of omega-3 fatty acids in meat using current livestock farming systems (23). In addition, no strategy has been developed to endow cultured meat with certain micronutrients specific to animal products (such as vitamin B12 and iron) and which contribute to good health. Furthermore, the positive effect of any (micro)nutrient can be enhanced if it is introduced in an appropriate matrix. In the case of in vitro meat, it is not certain that the other biological compounds and the way they are organized in cultured cells could potentiate the positive effects of micronutrients on human health. Uptake of micronutrients (such as iron) by cultured cells has thus to be well understood. We cannot exclude a reduction in the health benefits of micronutrients due to the culture medium, depending on its composition. And adding chemicals to the medium makes cultured meat more “chemical” food with less of a clean label.

Monday, February 10, 2020

Mexican drug cartels: We see a positive connection between cartel presence & better socioeconomic outcomes at the municipality level; results help understand why drug lords have great support in the communities in which they operate

Following the poppy trail: Origins and consequences of Mexican drug cartels. Tommy E. Murphy, Martín A. Rossi. Journal of Development Economics, Volume 143, March 2020, 102433. https://doi.org/10.1016/j.jdeveco.2019.102433

Highlights
•    We study the origins, and economic and social consequences of Mexican drug cartels.
•    The location of current cartels is strongly linked to the location of Chinese migration at the beginning of the 20th century.
•    We report a positive connection between cartel presence and better socioeconomic outcomes at the municipality level.
•    Our results help to understand why drug lords have great support in the local communities in which they operate.

Abstract: This paper studies the origins, and economic and social consequences of some of the most prominent drug trafficking organizations in the world: the Mexican cartels. It first traces the current location of cartels to the places where Chinese migrated at the beginning of the 20th century, discussing and documenting how both events are strongly connected. Information on Chinese presence at the beginning of the 20th century is then used to instrument for cartel presence today, to identify the effect of cartels on society. Contrary to what seems to happen with other forms of organized crime, the IV estimates in this study indicate that at the local level there is a positive link between cartel presence and better socioeconomic outcomes (e.g. lower marginalization rates, lower illiteracy rates, higher salaries), better public services, and higher tax revenues, evidence that is consistent with the known stylized fact that drug lords tend to have great support in the local communities in which they operate.

JEL classification: N36, O15


Increasingly, evidence suggests aggressive video games have little impact on player behavior in the realm of aggression and violence, but most professional guild policy statements failed to reflect these data

Aggressive Video Games Research Emerges from its Replication Crisis (Sort of). Christopher J Ferguson. Current Opinion in Psychology, February 10 2020. https://doi.org/10.1016/j.copsyc.2020.01.002

Highlights
• Previous research on aggressive video games (AVGs) suffered from high false positive rates.
• New, preregistered studies suggest AVGs have little impact on player aggression.
• Prior meta-analyses overestimated the evidence for effects.
• Professional guild statements by the American Psychological Association and American Academy of Pediatrics are inaccurate.
• Consumers may not mimic behaviors seen in fictional media.

Abstract: The impact of aggressive video games (AVGs) on aggression and violent behavior among players, particularly youth, has been debated for decades. In recent years, evidence for publication bias, questionable researcher practices, citation bias and poor standardization of many measures and research designs has indicated that the false positive rate among studies of AVGs has been high. Several studies have undergone retraction. A small recent wave of preregistered studies has largely returned null results for outcomes related to youth violence as well as outcomes related to milder aggression. Increasingly, evidence suggests AVGs have little impact on player behavior in the realm of aggression and violence. Nonetheless, most professional guild policy statements (e.g. American Psychological Association) have failed to reflect these changes in the literature. Such policy statements should be retired or revised lest they misinform the public or do damage to the reputation of these organizations.


The Nuclear Family Was a Mistake: Loneliness, lack of support, fragility

The Nuclear Family Was a Mistake. David Brooks. The Atlantic. Mar 2020. https://www.theatlantic.com/magazine/archive/2020/03/the-nuclear-family-was-a-mistake/605536/

The family structure we’ve held up as the cultural ideal for the past half century has been a catastrophe for many. It’s time to figure out better ways to live together.

Excerpts:

This is the story of our times—the story of the family, once a dense cluster of many siblings and extended kin, fragmenting into ever smaller and more fragile forms. The initial result of that fragmentation, the nuclear family, didn’t seem so bad. But then, because the nuclear family is so brittle, the fragmentation continued. In many sectors of society, nuclear families fragmented into single-parent families, single-parent families into chaotic families or no families.

If you want to summarize the changes in family structure over the past century, the truest thing to say is this: We’ve made life freer for individuals and more unstable for families. We’ve made life better for adults but worse for children. We’ve moved from big, interconnected, and extended families, which helped protect the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families (a married couple and their children), which give the most privileged people in society room to maximize their talents and expand their options. The shift from bigger and interconnected extended families to smaller and detached nuclear families ultimately led to a familial system that liberates the rich and ravages the working-class and the poor.

...

Ever since I started working on this article, a chart has been haunting me [https://www.pewforum.org/2019/12/12/religion-and-living-arrangements-around-the-world/pf_12-12-19_religion-households-00-02/]. It plots the percentage of people living alone in a country against that nation’s GDP. There’s a strong correlation. Nations where a fifth of the people live alone, like Denmark and Finland, are a lot richer than nations where almost no one lives alone, like the ones in Latin America or Africa. Rich nations have smaller households than poor nations. The average German lives in a household with 2.7 people. The average Gambian lives in a household with 13.8 people.

That chart suggests two things, especially in the American context. First, the market wants us to live alone or with just a few people. That way we are mobile, unattached, and uncommitted, able to devote an enormous number of hours to our jobs. Second, when people who are raised in developed countries get money, they buy privacy.

For the privileged, this sort of works. The arrangement enables the affluent to dedicate more hours to work and email, unencumbered by family commitments. They can afford to hire people who will do the work that extended family used to do. But a lingering sadness lurks, an awareness that life is emotionally vacant when family and close friends aren’t physically present, when neighbors aren’t geographically or metaphorically close enough for you to lean on them, or for them to lean on you. Today’s crisis of connection flows from the impoverishment of family life.

I often ask African friends who have immigrated to America what most struck them when they arrived. Their answer is always a variation on a theme—the loneliness. It’s the empty suburban street in the middle of the day, maybe with a lone mother pushing a baby carriage on the sidewalk but nobody else around.

For those who are not privileged, the era of the isolated nuclear family has been a catastrophe. It’s led to broken families or no families; to merry-go-round families that leave children traumatized and isolated; to senior citizens dying alone in a room. All forms of inequality are cruel, but family inequality may be the cruelest. It damages the heart. Eventually family inequality even undermines the economy the nuclear family was meant to serve: Children who grow up in chaos have trouble becoming skilled, stable, and socially mobile employees later on.

Human populations vary substantially & unexpectedly in both the range and pattern of facial sexually dimorphic traits; European & South American populations display larger levels of facial sexual dimorphism than African populations

Kleisner, Karel, Petr Tureček, S. Craig Roberts, Jan Havlicek, Jaroslava V. Valentova, Robert M. Akoko, Juan David Leongómez, et al. 2020. “How and Why Patterns of Sexual Dimorphism in Human Faces Vary Across the World.” PsyArXiv. February 10. doi:10.31234/osf.io/7vdm

Abstract: Sexual selection, including mate choice and intrasexual competition, is responsible for the evolution of some of the most elaborated and sexually dimorphic traits in animals. Although there is clear sexual dimorphism in the shape of human faces, it is not clear whether this is similarly due to mate choice, or whether mate choice affects only part of the facial shape difference between men and women.  Here we explore these questions by investigating patterns of both facial shape and facial preference across a diverse set of human populations. We find evidence that human populations vary substantially and unexpectedly in both the range and pattern of facial sexually dimorphic traits. In particular, European and South American populations display larger levels of facial sexual dimorphism than African populations. Neither cross-cultural differences in facial shape variation, differences in body height between sexes, nor differing preferences for facial sex-typicality across countries, explain the observed patterns of facial dimorphism. Altogether, the association between morphological sex-typicality and attractiveness is moderate for women and weak (or absent) for men. Analysis that distinguishes between allometric and non-allometric components reveals that non-allometric sex-typicality is preferred in women’s faces but not in faces of men. This might be due to different regimes of ongoing sexual selection acting on men, such as stronger intersexual selection for body height and more intense intrasexual physical competition, compared with women.



Caffeine improved global processing, without effect on local information processing, alerting, spatial attention & executive or phonological functions; also was accompanied by faster text reading speed of meaningful sentences

Caffeine improves text reading and global perception. Sandro Franceschini et al. Journal of Psychopharmacology, October 3, 2019. https://doi.org/10.1177/0269881119878178

Abstract
Background: Reading is a unique human skill. Several brain networks involved in this complex skill mainly involve the left hemisphere language areas. Nevertheless, nonlinguistic networks found in the right hemisphere also seem to be involved in sentence and text reading. These areas do not deal with phonological information, but are involved in verbal and nonverbal pattern information processing. The right hemisphere is responsible for global processing of a scene, which is needed for developing reading skills.

Aims: Caffeine seems to affect global pattern processing specifically. Consequently, our aim was to discover if it could enhance text reading skill.

Methods: In two mechanistic studies (n=24 and n=53), we tested several reading skills, global and local perception, alerting, spatial attention and executive functions, as well as rapid automatised naming and phonological memory, using a double-blind, within-subjects, repeated-measures design in typical young adult readers.

Results: A single dose of 200 mg caffeine improved global processing, without any effect on local information processing, alerting, spatial attention and executive or phonological functions. This improvement in global processing was accompanied by faster text reading speed of meaningful sentences, whereas single word/pseudoword or pseudoword text reading abilities were not affected. These effects of caffeine on reading ability were enhanced by mild sleep deprivation.

Conclusions: These findings show that a small quantity of caffeine could improve global processing and text reading skills in adults.

Keywords: Visual perception, reading enhancement, parallel processing, psychostimulant, context processing


Check also Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. https://www.bipartisanalliance.com/2020/02/those-who-consumed-200-mg-of-caffeine.html

And Surprise: Consuming 1–5 cups of coffee/day was related to lower mortality among never smokers; they forgot to discount/adjust for pack-years of smoking, healthy & unhealthy foods, & added sugar
Dietary research on coffee: Improving adjustment for confounding. David R Thomas, Ian D Hodges. Current Developments in Nutrition, nzz142, December 26 2019. https://www.bipartisanalliance.com/2019/12/surprise-consuming-15-cups-of-coffeeday.html

And Inverse association between caffeine intake and depressive symptoms in US adults: data from National Health and Nutrition Examination Survey (NHANES) 2005–2006. Sohrab Iranpour, Siamak Sabour. Psychiatry Research, Nov 2018. https://doi.org/10.1016/j.psychres.2018.11.004

Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior, both for men and women

Motivations for Suicide: Converging Evidence from Clinical and Community Samples. Alexis M. May, Mikayla C. Pachkowski, E. David Klonsky. Journal of Psychiatric Research, February 10 2020. https://doi.org/10.1016/j.jpsychires.2020.02.010

Highlights
•    Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior.
•    Regardless of the time since attempt, pain and hopelessness were critical motivations.
•    Pain and hopelessness were the strongest attempt motivations for both men and women.
•    The Inventory of Motivations for Suicide Attempts (IMSA) quickly assesses individual motivations.

Abstract: Understanding what motivates suicidal behavior is critical to effective prevention and clinical intervention. The Inventory of Motivations for Suicide Attempts (IMSA) is a self-report measure developed to assess a wide variety of potential motivations for suicide. The purpose of this study is to examine the measure’s psychometric and descriptive properties in two distinct populations: 1) adult psychiatric inpatients (n = 59) with recent suicide attempts (median of 3 days prior) and 2) community participants assessed online (n = 222) who had attempted suicide a median of 5 years earlier. Findings were very similar across both samples and consistent with initial research on the IMSA in outpatients and undergraduates who had attempted suicide. First, the individual IMSA scales demonstrated good internal reliability and were well represented by a two-factor superordinate structure: 1) Internal Motivations and 2) Communication Motivations. Second, in both samples unbearable mental pain and hopelessness were the most common and strongly endorsed motivations, while interpersonal influence was the least endorsed. Finally, motivations were similar in men and women, a pattern that previous work was not in a position to examine. Taken together with previous work, findings suggest that the nature, structure, and clinical correlates of suicide attempt motivations remain consistent across diverse individuals and situations. The IMSA may serve as a useful tool in both research and clinical contexts to quickly assess individual suicide attempt motivations.



Minimal Relationship between Local Gyrification (wrinkles in the cerebral cortex) and General Cognitive Ability in Humans

Minimal Relationship between Local Gyrification and General Cognitive Ability in Humans. Samuel R Mathias et al. Cerebral Cortex, bhz319, February 9 2020. https://doi.org/10.1093/cercor/bhz319

Abstract: Previous studies suggest that gyrification is associated with superior cognitive abilities in humans, but the strength of this relationship remains unclear. Here, in two samples of related individuals (total N = 2882), we calculated an index of local gyrification (LGI) at thousands of cortical surface points using structural brain images and an index of general cognitive ability (g) using performance on cognitive tests. Replicating previous studies, we found that phenotypic and genetic LGI–g correlations were positive and statistically significant in many cortical regions. However, all LGI–g correlations in both samples were extremely weak, regardless of whether they were significant or nonsignificant. For example, the median phenotypic LGI–g correlation was 0.05 in one sample and 0.10 in the other. These correlations were even weaker after adjusting for confounding neuroanatomical variables (intracranial volume and local cortical surface area). Furthermore, when all LGIs were considered together, at least 89% of the phenotypic variance of g remained unaccounted for. We conclude that the association between LGI and g is too weak to have profound implications for our understanding of the neurobiology of intelligence. This study highlights potential issues when focusing heavily on statistical significance rather than effect sizes in large-scale observational neuroimaging studies.


Discussion

In the present study, we analyzed data from two samples of related individuals to examine the association between gyrification and general cognitive ability. We used a popular automatic method to calculate LGI across the cortex from MRI images (Schaer et al. 2008), and calculated g from performance on batteries of cognitive tests. We estimated the heritability of height, ICV, and g, as well as the heritability of LGI, area, and thickness at all vertices. We estimated phenotypic, genetic, and environmental LGI–g correlations, as well as partial LGI–g correlations with height, ICV, area (at the same vertex), and thickness (at the same vertex) as potential confounding variables. We estimated the amount of phenotypic variance of g explained by all LGIs together via ridge regression, and examined the across-sample consistency of neuroanatomical specificity in heritability of LGI, area, and thickness, as well as LGI–g correlations. Finally, we tested whether heritability estimates and LGI–g correlations were stronger in regions implicated by the P-FIT, a model of the neurological basis of human intelligence (Jung and Haier 2007).
A novel finding of the present study was that LGI was heritable across the cortex, extending a previous study that established the heritability of whole-brain GI (Docherty et al. 2015). This finding was not particularly surprising because many features of brain morphology are heritable. Nevertheless, it was necessary to establish the heritability of LGI before calculating genetic LGI–g correlations, which are only meaningful if both LGI and g are heritable traits. The previous study estimated the heritability of GI to be 0.71, which is much greater than most of the heritability estimates for LGI observed in GOBS or HCP. This result is also not surprising, because GI is likely to be contaminated by less measurement error than LGI. Heritabilities of all other traits were consistent with those published in previous studies.
The present study represents a replication of previous work and provides several important extensions to our understanding of the relationship between gyrification and cognition. First, we replicated previous work by finding positive and significant phenotypic LGI–g correlations (e.g., Gregory et al. 2016). Furthermore, we found that genetic LGI–g correlations were positive and significant (but only in HCP), suggesting that the relationship between gyrification and intelligence may be driven by pleiotropy. Since environmental LGI–g correlations were not significant, their net sign differed across GOBS and HCP, and their spatial patterns showed no consistency across samples, it is reasonable to conclude that they mostly reflected measurement error rather than meaningful shared environmental contributions to LGI and g.
In our view, the most important finding from the present study is that all LGI–g correlations, even the significant ones, were weak. Phenotypically, LGI at a typical vertex poorly predicted g. Even when the predictive ability of all LGIs was considered together via ridge regression, at least 89% of the variance of g remained unaccounted for. Phenotypic and genetic LGI–g correlations were weaker than ICV–g correlations in the same participants, and about the same as area–g correlations. Partialing out ICV or area further reduced LGI–g correlations.
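The "predictive ability of all LGIs considered together via ridge regression" can be illustrated with a minimal numpy-only sketch. All numbers here (sample size, vertex count, signal strength, the ridge penalty) are invented for illustration and are not the GOBS/HCP data; the point is only the mechanics of fitting many weak vertex-wise predictors jointly and reporting out-of-sample variance explained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 "participants", 200 vertex-wise LGI
# predictors, and a g score that depends weakly on a handful of them.
n, p = 500, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:10] = 0.2                        # weak signal at a few vertices
y = X @ beta_true + rng.standard_normal(n)  # g = weak signal + noise

# Train/test split gives an honest estimate of variance explained.
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Closed-form ridge solution: beta = (X'X + lambda*I)^-1 X'y
lam = 10.0
beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)

# Out-of-sample R^2: share of g's variance accounted for by all
# predictors jointly; the remainder (1 - R^2) is unexplained.
resid = y_te - X_te @ beta
r2 = 1.0 - resid.var() / y_te.var()
print(f"out-of-sample R^2 = {r2:.3f}")
```

The study's "at least 89% of the variance of g remained unaccounted for" corresponds to an out-of-sample R^2 of at most about 0.11 in this framing.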
The volume of cortical mantle is often computed as the product of its area and thickness, but at the resolution of meshes typically used to represent the cortex, the variability of area is higher than the variability of thickness such that surface area is the primary contributor to the variability of cortical volume (Winkler et al. 2010), and therefore of its relationship to other measurements; the same holds, more strongly even, for parcellations of the cortex in large anatomical or functional regions. This means that the association between overall brain volume and cognitive abilities reported by previous studies (e.g., Pietschnig et al. 2015) is probably primarily driven by area–g correlations (Vuoksimaa et al. 2015). LGI is strongly correlated with area (Gautam et al. 2015; Hogstrom et al. 2013), which explains why partialing out either ICV or area reduced phenotypic and genetic LGI–g correlations in the present study. Thus, we conclude, based on our results, that the association between gyrification and cognitive abilities to a large extent reflects the already well-established relationship between surface area and cognitive abilities, and that the particular association between the unique portion of gyrification and cognitive abilities is extremely small.
The above conclusion is consistent with that of a previous twin study (Docherty et al. 2015), which examined genetic associations between overall cortical surface area, whole-brain GI, and cognitive abilities. The authors concluded that the genetic GI–g correlation could be more or less fully explained by the area–g correlation. It has been argued previously that focusing on whole-brain GI may miss important neuroanatomical specificity; however, our findings suggest that Docherty et al.’s conclusion holds for both local and global gyrification.
The P-FIT is a popular hypothesis concerning which brain regions matter most for human cognition (Jung and Haier 2007). The P-FIT was initially proposed to explain activation patterns observed during functional MRI experiments, but has been extended to aspects of brain structure. Previous studies have suggested that the association between gyrification and cognitive abilities may be stronger in P-FIT regions than the rest of the brain (Green et al. 2018; Gregory et al. 2016). However, when we tested this hypothesis, we actually found evidence to the contrary. Since neuroanatomical patterns of phenotypic and genetic LGI–g correlations were consistent across GOBS and HCP, this unexpected finding was unlikely to have been caused by a lack of specificity, such as if LGI–g correlations were distributed randomly over the cortex. Instead, while LGI–g correlations exhibited a characteristic neuroanatomical pattern, this pattern did not match the P-FIT. A potential limitation of the present study in this regard is that there is no widely accepted method of matching Brodmann areas (used to define P-FIT regions) to surface-based ROIs (used to group vertices). Therefore, one could argue that our selection of P-FIT regions was incorrect. While our selection was based on that of a previous study (Green et al. 2018), we nevertheless reperformed our analysis several times with different selections of P-FIT regions, and the results remained the same. Importantly, although we argue that the P-FIT is not a good model for the association between gyrification—a purely structural aspect of cortical organization—and cognitive abilities, our results should not be used to criticize the P-FIT as a hypothesis of the brain’s functional organization, because function does not necessarily follow structure.
Most of our results were consistent across samples. However, estimates of heritability and genetic correlations were generally weaker in GOBS than HCP. Notably, some genetic LGI–g correlations were strong enough to surpass the FDR-corrected threshold for significance in HCP, but not GOBS. Such differences could be related to study design. One limitation of all family studies is that polygenic effects are susceptible to inflation due to shared environmental factors, which would cause overestimation of both heritability and genetic correlations. It could be argued that extended-pedigree studies, such as GOBS, are less susceptible to this kind of inflation than twin studies, such as HCP, because there are usually fewer shared environmental factors between distantly related individuals than twins (Almasy and Blangero 2010); this reduction in inflation comes at the expense of a reduction in power to detect polygenic effects, which could also explain the lack of significant genetic correlations in GOBS. It is unlikely that differences in results between samples were caused by differences in scanner or scanning protocol (Han et al. 2006). Furthermore, while GOBS and HCP participants completed different cognitive batteries, both were comprehensive in terms of measured cognitive abilities, ensuring that g indexed a similar construct in both samples.
With the recent emergence of large, open-access data sets and international consortia, neuroimaging and genetics studies have entered a new era characterized by samples comprising many thousands of participants. In such large studies, trivial effects may be labeled as statistically significant. This observation is not new (Berkson 1938) and numerous solutions have been proposed, such as adopting more stringent significance criteria (Benjamin et al. 2018), scaling criteria by sample size (Mudge et al. 2012), testing interval-null rather than point-null hypotheses (Morey and Rouder 2011), and, most radically, abandoning the notion of statistical significance altogether (McShane et al. 2019). One could argue that these solutions suffer from their own drawbacks and are unlikely to be adopted by the scientific mainstream in the near future. Therefore, in the meantime, we believe that it is imperative to judge, at least qualitatively, whether the sizes of statistically significant effects are large enough to justify one’s conclusions, particularly when these conclusions may have broad, overarching implications. This idea is not new either (Kelley and Preacher 2012) but deserves to be restated. Based on the results of the present study, we are inclined to believe that gyrification minimally explains variation in cognitive abilities and therefore has somewhat limited implications for our understanding of the neurobiology of human intelligence.

Those who consumed 200 mg of caffeine showed significantly enhanced problem-solving abilities; caffeine had no significant effects on creative generation or on working memory

Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. doi:10.31234/osf.io/6g9av

Abstract: Caffeine is the most widely consumed psychotropic drug in the world, with numerous studies documenting the effects of caffeine on people’s alertness, vigilance, mood, concentration, and attentional focus. The effects of caffeine on creative thinking, however, remain unknown. In a randomized placebo-controlled between-subject double-blind design the present study investigated the effect of moderate caffeine consumption on creative problem solving (i.e., convergent thinking) and creative idea generation (i.e., divergent thinking). We found that participants who consumed 200 mg of caffeine (approximately one 12 oz cup of coffee, n = 44), compared to those in the placebo condition (n = 44), showed significantly enhanced problem-solving abilities. Caffeine had no significant effects on creative generation or on working memory. The effects remained after controlling for participants’ caffeine expectancies, whether they believed they consumed caffeine or a placebo, or for changes in mood. Possible mechanisms and future directions are discussed.