Wednesday, November 6, 2019

Our results suggest that individuals in a more positive mood are less likely to cooperate, and play less efficiently in a repeated Prisoner’s Dilemma

Happiness, cooperation and language. Eugenio Proto, Daniel Sgroi, Mahnaz Nazneen. Journal of Economic Behavior & Organization, November 6 2019. https://doi.org/10.1016/j.jebo.2019.10.006

Abstract: According to existing research across several disciplines (management, psychology, economics and neuroscience), positive mood can have positive effects, engendering more altruistic, open and helpful behaviour, but can also work through a more negative channel by inducing inward-orientation, assertiveness, and reduced use of information. This leaves the impact on cooperation in interactive and strategic situations unclear. We find evidence from 490 participants in a laboratory experiment suggesting that participants in an induced positive mood cooperate less in a repeated Prisoner’s Dilemma than participants in a neutral setting. This is robust to the number of repetitions or the inclusion of pre-play communication. In order to understand why positive mood might damage the propensity to cooperate, we conduct a language analysis of the pre-play communication between players. This analysis indicates that subjects in a more positive mood use more inward-oriented and more negative language.

Keywords: Positive mood; Affect; Happiness; Mood induction procedures; Cooperation; Repeated Prisoner’s Dilemma; Social preferences; Social dilemmas; Cognitive skills; Productivity; Inward-orientation; Language analysis

JEL classification: C72 (Cooperative games); C91 (Laboratory experiments); D91 (Role and effects of psychological, emotional, social, and cognitive factors on decision making); J24 (Productivity); J28 (Life satisfaction)


---
In a previous version:

5 Concluding Remarks

Our results suggest that individuals in a more positive mood are less likely to cooperate, and play less efficiently, in a repeated Prisoner’s Dilemma. This supports what we described as the “negative channel” in the introduction, and suggests that this channel dominates the “positive channel” in situations involving repeated play and strategic interaction. This holds for the repeated Prisoner’s Dilemma with both known and unknown end dates, and for sessions both with and without pre-play communication. We also show that the result is not specific to a particular form of mood induction. The result holds right through to the final round of play, though it does not hold if we analyse only the very first round of each supergame.

A novel analysis of the text used in pre-play communication, to our knowledge the first of its kind in an economics laboratory experiment, suggests that those in a more positive mood use more negative language and display greater inward-orientation (through greater use of the “I” pronoun) than those in a neutral mood, which also supports the “negative channel”. We confirm that inward orientation is not specific to any one form of mood induction (it applies equally well to the use of movie clips or Velten statements and music). Our findings also support the concept of “mood maintenance”, which explains why those with a higher level of happiness might shy away from the risks involved in cooperation: they have more to lose and less to gain compared to those at lower levels of happiness. This is most apparent when looking at the choice to defect, where positive mood is associated with a 7.2 percentage point reduction in cooperation (p-value 0.0232). These findings are very different from results in the literature, which are typically obtained in one-shot games or games that do not involve strategic interaction. A simple explanation (supported by Proto et al. (2017)) is that repeated-interaction games involve more complex tasks where cognitive ability plays a crucial role.
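
The language analysis compares, by treatment group, how inward-oriented and how negative the pre-play chat messages are. As a rough illustration of the kind of per-message measure involved, here is a minimal Python sketch; the word lists are toy placeholders (an analysis like the paper’s would rely on a validated dictionary such as LIWC), and the function name is ours:

```python
import re

# Toy word lists for illustration only; a real analysis would use a
# validated lexicon (e.g., LIWC categories for "I" and negative emotion).
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
NEGATIVE_WORDS = {"bad", "hate", "worried", "angry", "sad", "afraid"}

def message_scores(message):
    """Return the share of inward-oriented and negative tokens in a message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    if not tokens:
        return {"inward": 0.0, "negative": 0.0}
    return {
        "inward": sum(t in FIRST_PERSON_SINGULAR for t in tokens) / len(tokens),
        "negative": sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens),
    }

print(message_scores("I think I should defect, I hate being the sucker"))
```

Averaging such scores across messages within the positive-mood and neutral-mood groups is the comparison on which the authors’ “inward-oriented and negative language” finding rests.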

Taken together with one of the key findings in the “negative channel” described earlier, that cognitive ability may be negatively related to positive mood, this might explain why subjects in a neutral mood are better equipped for more complex strategic settings. Finally, we should note that in our study we were specifically interested in the impact of general positive or neutral mood shocks, and so elected to have everyone within a session face the same shock. Randomization then occurred across sessions, not within sessions. This works well if we wish to consider a situation where everyone faces the same shock. Our work is not well-placed to study situations where individuals face different shocks, or to judge how these might interact, for instance if one player has recently become happier while another has not. This is a potential topic for future study.

The rise in political polarization in recent decades is not accounted for by the dramatic rise in internet use; claims that partisans inhabit wildly segregated echo chambers/filter bubbles are largely overstated

Deri, Sebastian. 2019. “Internet Use and Political Polarization: A Review.” PsyArXiv. November 6. doi:10.31234/osf.io/u3xyb

Abstract
In this paper, I attempt to provide a comprehensive review of the evidence regarding the relationship between political polarization in the US and internet use. In the first part, I examine whether there has indeed been a rise in political polarization in the US in the last several decades. The remaining second and third parts deal with the relationship between polarization and internet use. I begin, in the second part, by reviewing evidence pertaining to the question of whether internet use plays a causal role in bringing about polarization. I then move, in the third part, to exploring the possible means by which internet use might bring about polarization. By analogy to cigarettes and cancer, the second part examines whether cigarette smoking causes cancer, while the third part examines how cigarette smoking causes (or might cause) cancer. One focus, in the third section, is on the most often discussed mechanism of internet-caused polarization: segregated information exposure, which corresponds to claims that polarization is being driven by an internet ecosystem characterized by “echo chambers”, “filter bubbles”, and otherwise partisan information consumption and dissemination.

The brief summary of each of the three parts is this. First, there is evidence that polarization has been on the rise in the U.S. in recent decades, but it depends on what you measure. When comparing Republicans and Democrats, the strongest evidence is for increases in affective polarization and policy-based polarization. Second, most analyses weigh against a version of reality in which the rise in political polarization in recent decades is mostly accounted for by the dramatic rise in internet use over this same time period. However, one notable, well-conducted, large-scale randomized direct intervention study found that deactivating a social media account (Facebook) resulted in significant and non-trivially sized decreases in polarization, specifically related to political opinions and policy preferences (Allcott, Braghieri, Eichmeyer, & Gentzkow, 2019). Finally, the evidence is murkiest regarding how internet use might drive polarization. With regard to polarization via segregated information exposure, claims that partisans inhabit wildly segregated “echo chambers” or “filter bubbles” are largely overstated. Nevertheless, there are significant and meaningful differences in the political content that partisans of different political orientations consume online, comparable to the degree of segregation in national print newspaper readership. Causal evidence linking this differential exposure to political polarization is not as strong as the evidence that differential exposure exists. Evidence for other mechanisms of polarization is suggestive but awaits strong empirical confirmation.



Do exonerees face employment discrimination similar to actual offenders?

Do exonerees face employment discrimination similar to actual offenders? Jeff Kukucka, Heather K. Applegarth, Abby L. Mello. Legal and Criminological Psychology, November 6 2019. https://doi.org/10.1111/lcrp.12159

Abstract
Purpose: Given that criminal offenders face employment discrimination (Ahmed & Lang, 2017, IZA Journal of Labor Policy, 6) and wrongly convicted individuals are stereotyped similarly to offenders (Clow & Leach, 2015, Legal and Criminological Psychology, 20, 147), we tested the hypothesis that exonerees – despite their innocence – face employment discrimination comparable to actual offenders.

Methods: Experienced hiring professionals (N = 82) evaluated a job application that was identical apart from the applicant's criminal history (i.e., offender, exoneree, or none).

Results: As predicted, professionals formed more negative impressions of both the exoneree and offender – but unexpectedly, they stereotyped exonerees and offenders somewhat differently. Compared to the control applicant, professionals desired to contact more of the exoneree's references, and they offered the exoneree a lower wage.

Conclusions: Paradoxically, exonerees may be worse off than offenders to the extent that exonerees also face employment discrimination but have access to fewer resources. As the exoneree population continues to grow, research can and should inform policies and legislation in ways that will facilitate exonerees’ reintegration.


Discussion

Our findings suggest that exonerees, despite their innocence, may face hiring discrimination similar to actual offenders. Compared to an applicant with no criminal history, hiring professionals formed less favourable impressions of exoneree and offender applicants, desired to contact more of the exoneree’s references, and were more likely to offer the exoneree a low wage, all despite their applications being otherwise identical. Notably, the observed effects were consistent in magnitude with those seen in meta-analyses of race-based (Quillian, Pager, Hexel, & Midtbøen, 2017) and gender-based (Koch, D’Mello, & Sackett, 2015) hiring discrimination. For offenders, employment is an important predictor of post-release adjustment (Bahr, Harris, Fisher, & Armstrong, 2010; Uggen, Wakefield, & Western, 2005), including lower recidivism. Similarly for exonerees, studies have found a positive relationship between employment and mental health (Wildeman et al., 2011) and a negative relationship between financial compensation and post-release criminality (Mandery, Shlosberg, West, & Callaghan, 2013). Our findings thus carry potentially broad implications for exonerees’ post-release well-being.

Like Clow and Leach (2015), we found that hiring professionals negatively stereotyped both offenders and exonerees, but we also unexpectedly found some evidence that they were stereotyped differently. While both were seen as less trustworthy than the control applicant, exonerees were generally seen as intellectually deficient (i.e., less intelligent, competent, and articulate), whereas offenders were generally seen as motivationally deficient (i.e., less conscientious and responsible). If that is the case, then discrimination against these populations may depend on the requirements of the job in question. In our study, applicants sought a job that required both intellect and leadership, which may have made exonerees and offenders equally undesirable candidates. Still, this finding is rather tentative; future research should more carefully explore the possibility that these populations are stereotyped differently and therefore face discrimination under different circumstances.

The tendency to stereotype exonerees as unintelligent suggests that professionals may have attributed the exoneree’s conviction to dispositional rather than situational factors (Gilbert & Malone, 1995; Ross, 1977). Just world theory, which posits that people have a fundamental need to view the world as fair (Hafer & Bègue, 2005; Lerner & Miller, 1978), may shed light on why exonerees would be blamed for their own plight: When faced with injustice, people preserve their belief in a just world by blaming the victim. In turn, people are less helpful to those who appear responsible for their own plight (Farwell & Weiner, 2000; Weiner, Perry, & Magnusson, 1988), and indeed, recent work has found that blaming exonerees for their own conviction predicted lower support for post-exoneration services (Kukucka & Evelo, 2019; Scherr, Normile, & Putney, 2018). This literature may thus explain why professionals stereotyped exonerees as unintelligent and why they more often offered exonerees a low wage. Perhaps educating employers about the systemic causes of wrongful conviction would reduce discrimination against exonerees. Consistent with this possibility, Ricciardelli and Clow (2012) found that students’ attitudes towards exonerees became more positive after hearing a lecture on the causes of wrongful conviction.

Our professionals also wanted to contact more of the exoneree’s references, and they were equally likely to cite criminal history as a negative quality of the exoneree and offender. These findings may indicate that professionals doubted the exoneree’s innocence. Qualitative studies abound with examples of exonerees whom the public presumed guilty even after their exoneration (Scott, 2010; Westervelt & Cook, 2010), and other findings suggest that laypeople are often unconvinced of exonerees’ innocence (Scherr, Normile, & Sarmiento, 2018). If our professionals felt similarly, then it is unsurprising that they were equally apprehensive about the exoneree’s and offender’s criminal history. Alternatively, professionals may have accepted the exoneree’s innocence but feared that incarceration had tainted them. This possibility is consistent with research on stigma by association as well as the ‘magical law of contagion’, that is, the belief that people take on the properties of others with whom they have contact (e.g., Rozin & Royzman, 2001). In other words, people may believe that exonerees take on the same traits as the offenders with whom they cohabitated in prison (Clow et al., 2012). Future research should explore whether exonerees are stigmatized because they are mistakenly thought to be offenders or because they are known to have cohabitated with offenders.

Amazing how much we may hate the others — The Harmful Side of Thanks: Thankful Responses to High-Power Group Help Undermine Low-Power Groups’ Protest, Pacifying Them

Amazing how much we may hate the others — The Harmful Side of Thanks: Thankful Responses to High-Power Group Help Undermine Low-Power Groups’ Protest. Inna Ksenofontov, Julia C. Becker. Personality and Social Psychology Bulletin, October 9, 2019. https://doi.org/10.1177/0146167219879125

Abstract: Giving thanks has multiple psychological benefits. However, within intergroup contexts, thankful responses from low-power to high-power group members could solidify the power hierarchy. The other-oriented nature of grateful expressions could mask power differences and discourage low-power group members from advocating for their ingroup interests. In five studies (N = 825), we examine the novel idea of a potentially harmful side of “thanks,” using correlational and experimental designs and a follow-up. Across different contexts, expressing thanks to a high-power group member who transgressed and then helped undermined low-power group members’ protest intentions and actual protest. Thus, the expression of thanks can pacify members of low-power groups. We offer insights into the underlying process by showing that forgiveness of the high-power benefactor and system justification mediate this effect. Our findings provide evidence for a problematic side of gratitude within intergroup relations. We discuss social implications.

Keywords: expressions of thanks, protest, intergroup helping, system justification, forgiveness

---
How can you thank a man for giving you what’s already yours? How then can you thank him for giving you only part of what’s already yours?
             —Malcolm X, “The Ballot or the Bullet,” 1964

Mild alcohol use is shown to improve bargaining efficiency in labs; the effect does not arise from changes in mood, altruism, or risk aversion; may be caused by impairment in information processing ability, diminishing self-interest

From 2016... Deal or no deal? The effect of alcohol drinking on bargaining. Pak Hung Au, Jipeng Zhang. Journal of Economic Behavior & Organization, Volume 127, July 2016, Pages 70-86. https://doi.org/10.1016/j.jebo.2016.04.011

Highlights
•    Mild alcohol use is shown to improve bargaining efficiency in experiments.
•    The effect does not arise from changes in mood, altruism, or risk aversion.
•    The effect can be caused by impairment in information processing ability.

Abstract: Alcohol drinking during business negotiation is a very common practice, particularly in some East Asian countries. Does alcohol consumption affect negotiator's strategy and consequently the outcome of the negotiation? If so, what is the mechanism through which alcohol impacts negotiator's behavior? We investigate the effect of a moderate amount of alcohol on negotiation using controlled experiments. Subjects are randomly matched into pairs to play a bargaining game with adverse selection. In the game, each subject is given a private endowment. The total endowment is scaled up and shared equally between the pair provided that they agree to collaborate. It is found that a moderate amount of alcohol consumption increases subjects’ willingness to collaborate, thus improving their average payoff. We find that alcohol consumption increases neither subjects’ preference for risk nor altruism. A possible explanation for the increase in the likelihood of collaboration is that subjects under the influence of alcohol are more “cursed” in the sense of Eyster and Rabin (2005), which is supported by the estimation results of a structural model of quantal response equilibrium.


---
5. Concluding remarks

Given that the harmful effects of excessive alcohol consumption on health are well known, it is unclear, and therefore interesting to investigate, why aggressive business drinking has become routine, and even an accepted culture, in many countries. In this study, we make the first attempt to study the effect of a mild amount of alcohol consumption on bargaining under incomplete information. We find a positive effect of alcohol consumption on the efficiency of bargaining in a specific experimental setting. Our finding suggests that consuming a mild to moderate amount of alcoholic drink in business meetings can potentially help smooth the negotiation process.

Out of concern for health risks, the alcohol consumption of subjects in our experiment is mild relative to business drinking in the real world. Our results can still shed useful light on the effects of business drinking. First, the intoxication effects of alcohol, especially on information processing and working memory, have been shown to be present even at a mild dose similar to that used in our experiment (Dry et al., 2012). Moreover, the intoxication effect is increasing in BAC up to a moderate level. We thus conjecture that a slight increase in dosage would strengthen our results. Second, the medical literature has well documented that chronic alcohol consumption makes the drinker develop tolerance to some of alcohol’s effects. Consequently, the amount of alcohol needed to achieve a certain level of intoxication for a graduate student (who typically does not drink much) can be much smaller than the amount for a businessman (who drinks more heavily and frequently).

Despite the aforementioned positive effect for a mild dose of alcohol, caution must be exercised in extrapolating the results too far. It is well known that an excessive dose of alcohol can lead to a range of harmful effects, including aggressive and violent behaviors (Dougherty et al., 1999), as well as impairment in problem solving ability (Streufert et al., 1993). Therefore, it is almost certain that excessive drinking would hamper efficiency in bargaining.

What are the channels through which alcohol use affects bargaining strategies and outcomes in our setting? It is commonly accepted that alcohol use lowers one’s ability to reason appropriately and draw inferences from available information. Therefore, in settings in which skepticism can lead to a breakdown in negotiation, alcohol consumption can make people drop their guard about each other’s actions, thus facilitating agreement. Our QRE estimation of a cursed equilibrium provides some support for this channel.
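
For readers unfamiliar with Eyster and Rabin’s (2005) notion, a schematic statement may help; this is our notation and gloss, not the paper’s own exposition. A χ-cursed player best responds as if, with probability χ, opponents’ actions were independent of their private types:

```latex
EU_i^{\chi}(a_i \mid t_i)
  = \sum_{t_{-i}} p(t_{-i} \mid t_i) \sum_{a_{-i}}
      \Big[ \chi\, \bar{\sigma}_{-i}(a_{-i})
          + (1-\chi)\, \sigma_{-i}(a_{-i} \mid t_{-i}) \Big]\,
      u_i(a_i, a_{-i};\, t_i, t_{-i}),
\qquad
\bar{\sigma}_{-i}(a_{-i}) = \sum_{t_{-i}} p(t_{-i} \mid t_i)\,
      \sigma_{-i}(a_{-i} \mid t_{-i}).
```

Here the type-averaged strategy enters with weight χ. At χ = 1 the player entirely ignores the link between the counterpart’s private endowment and behavior, so the adverse-selection skepticism that can block agreement is muted; the paper’s structural QRE estimates find drinkers to be more cursed in this sense.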

Other conceivable channels can be ruled out as follows. First, in line with the existing literature on the effects of alcohol use, we find that a mild dose of alcohol has little (if not zero) effect on our subjects’ risk aversion and altruism. Therefore, the increase in willingness to collaborate does not arise from a decrease in risk aversion and/or an increase in altruism.

Second, the positive effect of alcohol in social settings has often been attributed to its creating a more comforting and relaxing atmosphere. Our experiment is conducted in a laboratory, and each subject consumed the given beer individually. As such, the socializing effect of alcohol is clearly absent in our setting. Third, alcohol consumption has been suggested to have a signaling value that one is trustworthy and ready to commit to a relationship (see, for example, Haucap and Herr (2014) and Schweitzer and Kerr (2000)). In our study, treatments are randomized and enforced by the experimenters: subjects do not get to choose whether and what type of drink to consume, so they cannot signal their private information. Whereas our experimental design abstracts away from the second and third channels discussed above, future research can consider alcohol’s effects on relieving tension and building trust in a social setting.

Thank God for My Successes (Not My Failures): Feeling God’s Presence Explains a God Attribution Bias

Thank God for My Successes (Not My Failures): Feeling God’s Presence Explains a God Attribution Bias. Amber DeBono, Dennis Poepsel, Natarshia Corley. Psychological Reports, November 4, 2019. https://doi.org/10.1177/0033294119885842

Abstract: Little research has investigated attributional biases to God for positive and negative personal events. Consistent with past work, we predicted that people who believe in God will attribute successes more to God than failures, particularly for highly religious people. We also predicted that believing that God is a part of the self would increase how much people felt God’s presence which would result in giving God more credit for successes. Our study (N = 133) was a two-factor, between-subject experimental design in which participants either won or lost a game and were asked to attribute the cause of this outcome to themselves, God, or other factors. Furthermore, participants either completed the game before or after responding to questions about their religious beliefs. Overall, there was support for our predictions. Our results have important implications for attribution research and the practical psychological experiences for religious people making attributions for their successes and failures.

Keywords: Religion, God, attribution, self

---
Discussion

The results of this study provided substantial evidence for our two primary goals. First, we demonstrated that people who believe in God attributed successes more to God than their failures. Furthermore, we showed that this effect was stronger for people who identified as more religious. We therefore conceptually replicated previous research (Spilka & Schmidt, 1983), demonstrating that this God attribution style is a reliable effect, not limited to hypothetical scenarios.

Moreover, the percentage attributed to God for a win appeared to be best predicted by believing God is a part of them. This relationship was best explained by simply feeling God’s presence during the experimental task. These findings are consistent with previous research that showed the importance of the overlap between God and self in addition to feeling God’s presence (e.g., Hodges et al., 2013; Sharp et al., 2017). In contrast with Spilka and Schmidt’s (1983) findings, our results indicated that the overlap between God and the self may provide a better explanation than religious commitment for how people attribute successes to God, by feeling God’s presence. Our review of the literature suggests this may be the first study to investigate these concepts as explanations for differing attribution styles for failures and successes.

Strengths and implications

Until now, little research has investigated why and how God-believers attribute their successes to God more than their failures. We replicated the results of a set of over 30-year-old studies (Spilka & Schmidt, 1983). Contrary to these original studies, our research did not use hypothetical events; our participants experienced real-life successes and failures. Despite this seemingly stable effect, little research has explained why people who believe in God experienced a God attribution bias instead of a self-serving bias. We again showed that a God attribution bias may be a result of religious commitment. In addition to conceptually replicating these over 30-year-old findings (which is important in and of itself), we also found evidence for some possible mechanisms to explain this God attribution bias. That is, believing God is a part of them, a variable potentially more important than religiosity, appeared to increase feeling God’s presence, which resulted in greater attributions to God for successes. This is the first set of studies to show these beliefs may play an important role in the God attribution bias.

These results also indicate that a more nuanced approach is needed to understand why people attribute successes more to God than failures and how this impacts people’s thinking and behavior. Although vignettes are better than simple survey measures (Alexander & Becker, 1978), they are problematic, as people might believe they would make attributions one way when in reality they would make them another (Barter & Renold, 1999). Our study is the first to examine God attributional styles for actual experiences of failures and successes by the participants. Nevertheless, the results of our study were consistent with the vignette research: while religious individuals were more likely to use this God-serving attributional style, we saw that people generally tended to give God more credit for successes than failures. We also found support for the idea that feeling God is a part of the self, which resulted in feeling God’s presence, also predicted giving God greater credit, but only for successes. Religious commitment did not explain this effect as well as feeling God is a part of the self.

Although the Battleship game held little consequence for participants (whether they won or lost resulted in no benefit or penalty), even with this inconsequential task we saw that people will attribute successes more to God and failures to themselves. Yet the successes and failures in life often result in real consequences. Our study showed that even inconsequential failures and successes can lead to the God attributional biases seen in previous research. Thus, we would predict a similar God attributional pattern between both consequential and inconsequential tasks, inside and outside of the laboratory.

Our results also suggest that people who are especially religious may be more likely to attribute their successes more to God than their failures. People who use this attributional style should be more mindful of these attribution tendencies, as giving God credit for successes and taking credit for failures could result in depression. Potentially, this could explain slumps we see in highly religious athletes. If athletes give credit to God for successes on the field, this may appear as humility to some, but this type of thinking could quickly lead to the same downward spiral thinking that we see in people suffering from depression (Alloy & Abramson, 1988). It would be prudent for all of us, especially for people who believe in God, to be aware of how much credit we are taking for our successes and failures. As such, Sports Psychologists may consider heeding this line of thinking in their religious athletes, so that they can break out of their “slumps.”

Our findings also further our understanding of SIT, by showing that religious identity may be less important for explaining attributions to God for successes than experiencing God as part of the self. Although religiosity, an aspect of one’s collective identity, moderated the effect of wins on attributions to God, experiencing God as part of the self predicted feeling God’s presence, which then predicted attributing the win to God. Religious commitment did not explain this effect as well. Future research should continue to examine these two variables, religiosity and experiencing God as part of the self, when attempting to explain attributional styles.

On the Mathematics of the Fraternal Birth Order Effect and the Genetics of Homosexuality


On the Mathematics of the Fraternal Birth Order Effect and the Genetics of Homosexuality. Tanya Khovanova. Archives of Sexual Behavior, November 5 2019. https://link.springer.com/article/10.1007/s10508-019-01573-1

Abstract: Mathematicians have always been attracted to the field of genetics. The mathematical aspects of research on homosexuality are especially interesting. Certain studies show that male homosexuality may have a genetic component that is correlated with female fertility. Other studies show the existence of the fraternal birth order effect, that is, the correlation of homosexuality with the number of older brothers. This article is devoted to the mathematical aspects of how these two phenomena are interconnected. In particular, we show that the fraternal birth order effect implies a correlation between homosexuality and maternal fecundity. Vice versa, we show that a correlation between homosexuality and female fecundity implies an increase in the probability of younger brothers being homosexual.

Keywords: Fraternal birth order effect; Male homosexuality; Fecundity; Genetics; Sexual orientation

---
Introduction

According to the study by Blanchard and Bogaert (1996): “[E]ach additional older brother increased the odds of [male] homosexuality by 34%” (see also Blanchard [2004], Bogaert [2006], Bogaert et al. [2018], and a recent survey by Blanchard [2018]). The current explanation is that carrying a boy to term changes the mother’s uterine environment. Male fetuses produce H–Y antigens which may be responsible for this environmental change for future fetuses.

The research into a genetic component of male gayness shows that there might be some genes in the X chromosome that influence male homosexuality. It also shows that the same genes might be responsible for increased fertility in females (see Ciani, Cermelli, & Zanzotto [2008] and Iemmola & Ciani [2009]).

In this article, we compare two mathematical models. In these mathematical models, we disregard girls for the sake of clarity and simplicity.

The first mathematical model of the Fraternal Birth Order Effect (FBOE), which we denote FBOE-model, assumes that each next-born son becomes homosexual with increased probability. This probability is independent of any other factor.

The second mathematical model of Female Fecundity (FF), which we denote FF-model, assumes that a son becomes homosexual with probability depending on the total number of children and nothing else.

We show mathematically how the FBOE-model implies a correlation with family size and how the FF-model implies a correlation with birth order. That means these two models are mathematically intertwined. We also propose the Brother Effect. Brothers share a lot of the same genes, so it is not surprising that brothers are more likely to share traits. With respect to homosexuality, we call the observation that homosexual men are more likely than non-homosexual men to have a homosexual brother the Brother Effect. The existence of genes that increase predisposition to homosexuality implies the Brother Effect. The connection between the FBOE-model and the Brother Effect is more complicated.

We also discuss how to separate FBOE and FF in the data.

The “Extreme Examples” section contains extreme mathematical examples that amplify the results of this article. The “FBOE-model and the family size” section shows how the FBOE-model implies the correlation with family size. The “FF-model implies birth order correlation” section shows how the FF-model implies the correlation with birth order. In the “Brothers” section, we discuss the connection between the FBOE-model and the Brother Effect. In the “Separating Birth Order and Female Fecundity” section, we discuss how to separate the birth order from the family size.
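
To make the first implication concrete (a pure birth-order mechanism producing a family-size correlation), here is a toy Monte Carlo sketch. It is our illustration, not the paper’s: the 2% baseline rate and the Poisson family-size distribution are arbitrary assumptions; only the 34% odds increment per older brother is taken from Blanchard and Bogaert (1996).

```python
import numpy as np

rng = np.random.default_rng(7)

BASE_ODDS = 0.02 / 0.98   # assumed odds for a first-born son (~2% probability)
ODDS_STEP = 1.34          # each older brother multiplies the odds by 1.34

def simulate(n_families=200_000, mean_sons=2.0):
    """FBOE-model: P(homosexual) depends only on birth order among sons.
    Returns the mean number of brothers of homosexual vs. heterosexual sons."""
    brothers_hom, brothers_het = [], []
    for sons in rng.poisson(mean_sons, n_families):
        for k in range(1, sons + 1):            # k = birth order among sons
            odds = BASE_ODDS * ODDS_STEP ** (k - 1)
            p = odds / (1.0 + odds)
            (brothers_hom if rng.random() < p else brothers_het).append(sons - 1)
    return np.mean(brothers_hom), np.mean(brothers_het)

hom, het = simulate()
print(f"mean brothers: homosexual sons {hom:.3f}, heterosexual sons {het:.3f}")
```

Later-born sons face higher odds and exist only in larger families, so homosexual sons end up with more brothers on average even though family size never enters the model directly; this is the FBOE-to-fecundity direction of the article’s argument.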

Tuesday, November 5, 2019

Moderate drinking: Enhanced cognition and lower dementia risk, substantive reductions in risk for cardiovascular and diabetes events are reported (but robust conclusions remain elusive)

Clarifying the neurobehavioral sequelae of moderate drinking lifestyles and acute alcohol effects with aging. Sara Jo Nixon, Ben Lewis. International Review of Neurobiology, November 5 2019. https://doi.org/10.1016/bs.irn.2019.10.016

Abstract: Epidemiological estimates indicate not only an increase in the proportion of older adults, but also an increase in those who continue moderate alcohol consumption. Substantial literatures have attempted to characterize health benefits/risks of moderate drinking lifestyles. Not uncommonly, reports address a single outcome domain, such as cardiovascular function or cognitive decline, rather than providing a broader overview of systems. In this narrative review, retaining focus on neurobiological considerations, we summarize key findings regarding moderate drinking and three health domains, cardiovascular health, Type 2 diabetes (T2D), and cognition. Interestingly, few investigators have studied bouts of low/moderate doses of alcohol consumption, a pattern consistent with moderate drinking lifestyles. Here, we address both moderate drinking as a lifestyle and as an acute event.
Review of health-related correlates illustrates continuing inconsistencies. Although substantive reductions in risk for cardiovascular and T2D events are reported, robust conclusions remain elusive. Similarly, whereas moderate drinking is often associated with enhanced cognition and lower dementia risk, few benefits are noted in rates of decline or alterations in brain structure. The effect of sex/gender varies across health domains and by consumption levels. For example, women appear to differentially benefit from alcohol use in terms of T2D, but experience greater risk when considering aspects of cardiovascular function. Finally, we observe that socially relevant alcohol doses do not consistently impair performance in older adults. Rather, older drinkers demonstrate divergent, but not necessarily detrimental, patterns in neural activation and some behavioral measures relative to younger drinkers. Taken together, the epidemiological and laboratory studies reinforce the need for greater attention to key individual differences and for the conduct of systematic studies sensitive to age-related shifts in neurobiological systems.

Keywords: Alcohol; Moderate drinking; Health; Cognition; Behavior; Aging; Older adults; Acute administration; Neurophysiology

In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly

The Eighty Five Percent Rule for optimal learning. Robert C. Wilson, Amitai Shenhav, Mark Straccia & Jonathan D. Cohen. Nature Communications, volume 10, Article number: 4646 (2019). November 5 2019. https://www.nature.com/articles/s41467-019-12552-4

Abstract: Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.


Discussion

In this article we considered the effect of training accuracy on learning in the case of binary classification tasks and stochastic gradient-descent-based learning rules. We found that the rate of learning is maximized when the difficulty of training is adjusted to keep the training accuracy at around 85%. We showed that training at the optimal accuracy proceeds exponentially faster than training at a fixed difficulty. Finally we demonstrated the efficacy of the Eighty Five Percent Rule in the case of artificial and biologically plausible neural networks.
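
(Numerically, the optimal error rate of 15.87% coincides with Φ(−1) ≈ 0.1587, the standard normal CDF at −1, consistent with the Gaussian-noise case.) The Python sketch below is our own illustration, not the authors’ code: a logistic learner trained by SGD on a binary task whose difficulty is held near a chosen accuracy by a weighted (Kaernbach-style) staircase; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(target_acc, n_trials=20_000, dim=20, lr=0.05):
    """Logistic learner; per-trial signal strength `delta` follows a weighted
    staircase whose fixed point is `target_acc` (up/down steps in the ratio
    target : 1 - target). Returns cosine alignment of the learned weights
    with the true direction, a crude proxy for acquired skill."""
    w_true = rng.standard_normal(dim)
    w_true /= np.linalg.norm(w_true)
    w = np.zeros(dim)
    delta = 2.0
    for _ in range(n_trials):
        y = rng.choice([-1.0, 1.0])                          # true category
        x = y * delta * w_true + rng.standard_normal(dim)
        p = 1.0 / (1.0 + np.exp(-np.clip(w @ x, -30, 30)))   # P(category = +1)
        w += lr * ((y + 1) / 2 - p) * x                      # SGD on log loss
        if (p > 0.5) == (y > 0):                             # correct: harder
            delta = max(0.05, delta - 0.02 * (1 - target_acc))
        else:                                                # error: easier
            delta += 0.02 * target_acc
    return (w @ w_true) / np.linalg.norm(w)

for acc in (0.65, 0.85, 0.99):
    print(f"target accuracy {acc:.0%}: weight alignment {train(acc):.3f}")
```

If the rule carries over to this toy setting, the intermediate target should yield the fastest growth in alignment; exact numbers will vary with the arbitrary parameters.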
Our results have implications for a number of fields. Perhaps most directly, our findings move towards a theory for identifying the optimal environmental settings in order to maximize the rate of gradient-based learning. Thus the Eighty Five Percent Rule should hold for a wide range of machine learning algorithms including multilayered feedforward and recurrent neural networks (e.g. including ‘deep learning’ networks using backpropagation [9], reservoir computing networks [21,22], as well as Perceptrons). Of course, in these more complex situations, our assumptions may not always be met. For example, as shown in the Methods, relaxing the assumption that the noise is Gaussian leads to changes in the optimal training accuracy: from 85% for Gaussian, to 82% for Laplacian noise, to 75% for Cauchy noise (Eq. (31) in the “Methods”).
More generally, extensions to this work should consider how batch-based training changes the optimal accuracy, and how the Eighty Five Percent Rule changes when there are more than two categories. In batch learning, the optimal difficulty to select for the examples in each batch will likely depend on the rate of learning relative to the size of the batch. If learning is slow, then selecting examples in a batch that satisfy the 85% rule may work, but if learning is fast, then mixing in more difficult examples may be best. For multiple categories, it is likely possible to perform similar analyses, although the mapping between decision variable and categories will be more complex as will be the error rates which could be category specific (e.g., misclassifying category 1 as category 2 instead of category 3).
In Psychology and Cognitive Science, the Eighty Five Percent Rule accords with the informal intuition of many experimentalists that participant engagement is often maximized when tasks are neither too easy nor too hard. Indeed it is notable that staircasing procedures (that aim to titrate task difficulty so that error rate is fixed during learning) are commonly designed to produce about 80–85% accuracy [17]. Similarly, when given a free choice about the difficulty of task they can perform, participants will spontaneously choose tasks of intermediate difficulty levels as they learn [23]. Despite the prevalence of this intuition, to the best of our knowledge no formal theoretical work has addressed the effect of training accuracy on learning, a test of which is an important direction for future work.
More generally, our work closely relates to the Region of Proximal Learning and Desirable Difficulty frameworks in education [24,25,26] and Curriculum Learning and Self-Paced Learning [7,8] in computer science. These related, but distinct, frameworks propose that people and machines should learn best when training tasks involve just the right amount of difficulty. In the Desirable Difficulties framework, the difficulty in the task must be of a ‘desirable’ kind, such as spacing practice over time, that promotes learning as opposed to an undesirable kind that does not. In the Region of Proximal Learning framework, which builds on early work by Piaget [27] and Vygotsky [28], this optimal difficulty is in a region of difficulty just beyond the person’s current ability. Curriculum and Self-Paced Learning in computer science build on similar intuitions, that machines should learn best when training examples are presented in order from easy to hard. In practice, the optimal difficulty in all of these domains is determined empirically and is often dependent on many factors [29]. In this context, our work offers a way of deriving the desired difficulty and the region of proximal learning in the special case of binary classification tasks for which stochastic gradient-descent learning rules apply. As such our work represents the first step towards a more mathematical instantiation of these theories, although it remains to be generalized to a broader class of circumstances, such as multi-choice tasks and different learning algorithms.

[...] our work points to a mathematical theory of the state of ‘Flow’ [34]. This state, ‘in which an individual is completely immersed in an activity without reflective self-consciousness but with a deep sense of control’ [ref. 35, p. 1], is thought to occur most often when the demands of the task are well matched to the skills of the participant. This idea of balance between skill and challenge was captured originally with a simple conceptual diagram (Fig. 5) with two other states: ‘anxiety’ when challenge exceeds skill and ‘boredom’ when skill exceeds challenge. These three qualitatively different regions (flow, anxiety, and boredom) arise naturally in our model. Identifying the precision, β, with the level of skill and the level of challenge with the inverse of the true decision variable, 1/Δ, we see that when challenge equals skill, flow is associated with a high learning rate and accuracy, anxiety with low learning rate and accuracy, and boredom with high accuracy but low learning rate (Fig. 5b, c). Intriguingly, recent work by Vuorre and Metcalfe has found that subjective feelings of Flow peak on tasks that are subjectively rated as being of intermediate difficulty [36]. In addition, work on learning to control brain-computer interfaces finds that subjective, self-reported measures of ‘optimal difficulty’ peak at a difficulty associated with maximal learning, and not at a difficulty associated with optimal decoding of neural activity [37]. Going forward, it will be interesting to test whether these subjective measures of engagement peak at the point of maximal learning gradient, which for binary classification tasks is 85%.

What We Know And Don't Know About Stressful Life Events and Disease Risk

Ten Surprising Facts About Stressful Life Events and Disease Risk. Sheldon Cohen, Michael L.M. Murphy, and Aric A. Prather. Annual Review of Psychology, Vol. 70:577-597. https://doi.org/10.1146/annurev-psych-010418-102857

Abstract: After over 70 years of research on the association between stressful life events and health, it is generally accepted that we have a good understanding of the role of stressors in disease risk. In this review, we highlight that knowledge but also emphasize misunderstandings and weaknesses in this literature with the hope of triggering further theoretical and empirical development. We organize this review in a somewhat provocative manner, with each section focusing on an important issue in the literature where we feel that there has been some misunderstanding of the evidence and its implications. Issues that we address include the definition of a stressful event, characteristics of diseases that are impacted by events, differences in the effects of chronic and acute events, the cumulative effects of events, differences in events across the life course, differences in events for men and women, resilience to events, and methodological challenges in the literature.

Keywords: stressors, life events, health, disease

---
REFLECTIONS AND CONCLUSIONS

What We Know About Stressful Life Events and Disease Risk

What we can be sure of is that stressful life events predict increases in severity and progression of multiple diseases, including depression, cardiovascular diseases, HIV/AIDS, asthma, and autoimmune diseases. Although there is also evidence for stressful events predicting disease onset, challenges in obtaining sensitive assessments of premorbid states at baseline (for example, in cancer and heart disease) make interpretation of much of these data as evidence for onset less compelling. In general, stressful life events are thought to influence disease risk through their effects on affect, behavior, and physiology. These effects include affective dysregulation such as increases in anxiety, fear, and depression. Additionally, behavioral changes occurring as adaptations or coping responses to stressors, such as increased smoking, decreased exercise and sleep, poorer diets, and poorer adherence to medical regimens, provide important pathways through which stressors can influence disease risk. Two endocrine response systems, the hypothalamic-pituitary-adrenocortical (HPA) axis and the sympathetic-adrenal-medullary (SAM) system, are particularly reactive to psychological stress and are also thought to play a major role in linking stressor exposure to disease. Prolonged or repeated activation of the HPA axis and SAM system can interfere with their control of other physiological systems (e.g., cardiovascular, metabolic, immune), resulting in increased risk for physical and psychiatric disorders (Cohen et al. 1995b, McEwen 1998).

Chronic stressor exposure is considered to be the most toxic form of stressor exposure because chronic events are the most likely to result in long-term or permanent changes in the emotional, physiological, and behavioral responses that influence susceptibility to and course of disease. These exposures include those to stressful events that persist over an extended duration (e.g., caring for a spouse with dementia) and to brief focal events that continue to be experienced as overwhelming long after they have ended (e.g., experiencing a sexual assault). Even so, acute stressors seem to play a special role in triggering disease events among those with underlying pathology (whether premorbid or morbid), such as asthma and heart attacks.

One of the most provocative aspects of the evidence linking stressful events to disease is the broad range of diseases that are presumed to be affected. As discussed above, the range of effects may be attributable to the fact that many behavioral and physiological responses to stressors are risk factors for a wide range of diseases. The more of these responses to stressful events are associated with risk for a specific disease, the greater is the chance that stressful events will increase the risk for the onset and progression of that disease. For example, risk factors for CVD include many of the behavioral effects of stressors (poor diet, smoking, inadequate physical activity). In addition, stressor effects on CVD (Kaplan et al. 1987, Skantze et al. 1998) and HIV (Capitanio et al. 1998, Cole et al. 2003) are mediated by physiological effects of stressors (e.g., sympathetic activation, glucocorticoid regulation, and inflammation).

It is unlikely that all diseases are modulated by stressful life event exposure. Rare conditions, such as those that are genetic and of high penetrance, leave little room for stressful life events to play a role in disease onset. For example, Tay-Sachs disease is an autosomal recessive disorder expressed in infancy that results in destruction of neurons in both the spinal cord and brain. This disease is fully penetrant, meaning that, if an individual carries two copies of the mutation in the HEXA gene, then they will be affected. Other inherited disorders, such as Huntington’s disease, show high penetrance but are not fully penetrant, leaving room for environmental exposures, behavioral processes, and interactions among these factors to influence disease onset. Note that, upon disease onset, it is unlikely that any disease is immune to the impact of stressor exposure if pathways elicited by the stressor are implicated in the pathogenesis or symptom course of the disease.


What We Do Not Know About Stressful Life Events and Disease Risk

There are still a number of key issues in understanding how stressful events might alter disease pathogenesis where the data are still insufficient to provide clear answers. These include the lack of a clear conceptual definition of what constitutes a stressful event. Alternative approaches (adaptation, threat, goal interruption, demand versus control) overlap in their predictions, providing little leverage for empirically establishing the unique nature of major stressful events. The lack of understanding of the primary nature of stressful events also obscures the reasons for certain events (e.g., interpersonal, economic) being more potent.

Two other important questions for which we lack consistent evidence are whether the stress load accumulates with each additional stressor and whether previous or ongoing chronic stressors moderate responses to current ones. The nature of the cumulative effects of stressors is key to obtaining sensitive assessments of the effects of stressful events on disease and for planning environmental (stressor-reduction) interventions to reduce the impact of events on our health. Evidence that single events may be sufficient to trigger risk for disease has raised two important questions. First, are some types of events more potent than others? We address this question above (in the section titled Fact 6: Certain Types of Stressful Events Are Particularly Potent) using the existing evidence, but it is important to emphasize the relative lack of studies comparing the impact of different stressors on the same outcomes (for some exceptions, see Cohen et al. 1998, Kendler et al. 2003, Murphy et al. 2015). Second, are specific types of events linked to specific diseases? This question derives from scattered evidence of stressors that are potent predictors of specific diseases [e.g., social loss for depression (Kendler et al. 2003), work stress for CHD (Kivimäki et al. 2006)] and of specific stress biomarkers [e.g., threats to social status leading to cortisol responses (Denson et al. 2009, Dickerson & Kemeny 2004)]. While it is provocative, there are no direct tests of the stressor-disease specificity hypothesis. A proper examination of this theory would require studies that not only conduct broad assessments of different types of stressful life events, but also measure multiple unique diseases to draw comparisons. Such studies may not be feasible due to the high costs of properly assessing multiple disease outcomes and the need for large numbers of participants to obtain sufficient numbers of persons developing (incidence) or initially having each disease so as to measure progression. Comparisons of limited numbers of diseases proposed to have different predictors (e.g., cancer and heart disease) are more efficient and may be a good initial approach to this issue.

Another area of weakness is the lack of understanding of the types of stressful events that are most salient at different points in development. For example, although traumatic events are the type of events studied most often in children, the relative lack of focus on more normative events leaves us with an incomplete understanding of how different events influence the current and later health of young people. Overall, the relative lack of comparisons of the impact of the same events (or equivalents) across the life course further muddies our understanding of event salience as we age.

It is noteworthy that the newest generation of instruments designed to assess major stressful life events has the potential to provide some of the fine-grained information required to address many of the issues raised in this review (for a review, see Anderson et al. 2010; see also Epel et al. 2018). For example, the Life Events Assessment Profile (LEAP) (Anderson et al. 2010) is a computer-assisted, interviewer-administered measure designed to mimic the LEDS. Like the LEDS, the LEAP assesses events occurring within the past 6–12 months, uses probing questions to better define events, assesses exposure duration, and assigns objective levels of contextual threat based on LEDS dictionaries. Another instrument, the Stress and Adversity Inventory (STRAIN) (Slavich & Shields 2018), is a participant-completed computer assessment of lifetime cumulative exposure to stressors. The STRAIN assesses a range of event domains and timing of events (e.g., early life, distant, recent) and uses probing follow-up questions. Both the LEAP and the STRAIN are less expensive and time consuming than the LEDS and other interview techniques and are thus more amenable to use in large-scale studies.

The fundamental question of whether stressful events cause disease can only be rigorously evaluated by experimental studies. Ethical considerations prohibit conducting experimental studies in humans of the effects of enduring stressful events on the pathogenesis of serious disease. A major limitation of the correlational studies is insufficient evidence of (and control for) selection in who gets exposed to events, resulting in the possibility that selection factors such as environments, personalities, or genetics are the real causal agents. The concern is that the social and psychological characteristics that shape what types of stressful events people are exposed to may be directly responsible for modulating disease risk. Because it is not possible to randomly assign people to stressful life events, being able to infer that exposure to stressful events causally modulates disease will require the inclusion of covariates representing obvious individual and environmental confounders, as well as controls for stressor dependency—the extent to which individuals are responsible for generating the stressful events that they report.

Even with these methodological limitations, there is evidence from natural experiments that capitalize on real-life stressors occurring outside of a person’s control, such as natural disasters, economic downsizing, or bereavement (Cohen et al. 2007). There have also been attempts to reduce progression and recurrence of disease using experimental studies of psychosocial interventions. However, clinical trials in this area tend to be small, methodologically weak, and not specifically focused on determining whether stress reduction accounts for intervention-induced reduction in disease risk. Moreover, trials that do assess stress reduction as a mediator generally focus on the reduction of nonspecific perceptions of stress and negative affect instead of on the elimination or reduction of the stressful event itself. In contrast, evidence from prospective cohort studies and natural experiments is informative. These studies typically control for a set of accepted potentially confounding demographic and environmental factors such as age, sex, race or ethnicity, and SES. It is also informative that the results of these studies are consistent with those of laboratory experiments showing that stress modifies disease-relevant biological processes in humans and with those of animal studies that investigate stressors as causative factors in disease onset and progression (Cohen et al. 2007).

Despite many years of investigation, our understanding of resilience to stressful life events is incomplete and even seemingly contradictory (e.g., Brody et al. 2013). Resilience generally refers to the ability of an individual to maintain healthy psychological and physical functioning in the face of exposure to adverse experiences (Bonanno 2004). This definition suggests that when a healthy individual is exposed to a stressful event but does not get sick and continues to be able to function relatively normally, this person has shown resilience. What is less clear is whether there are certain types of stressful events for which people tend to show greater resilience than for others. It seems likely that factors that increase stressor severity, such as imminence of harm, uncontrollability, and unpredictability, also decrease an event’s potential to be met with resilience. Additionally, it may be possible that stressful events that are more commonly experienced are easier to adapt to due to shared cultural experiences that provide individuals with expectations for how to manage events. Conversely, less common events (e.g., combat exposure) or experiences that carry significant sociocultural stigma (e.g., rape) might be less likely to elicit resilience. As efforts to test interventions to promote resilience continue to be carried out, careful characterizations of stress exposures, including the complexities discussed in this review, will be critical to understanding the heterogeneity in physical and mental health outcomes associated with stressful life events.


Check also, from 2018... Salutogenic effects of adversity and the role of adversity for successful aging. Jan Höltge. Fall 2018, University of Zurich, Faculty of Arts, PhD Thesis. https://www.bipartisanalliance.com/2019/10/from-2018salutogenic-effects-of.html

And: Research has predominantly focused on the negative effects of adversity on health and well-being, but under certain circumstances adversity may have the potential for positive outcomes, such as increased resilience and thriving (the steeling effect):
A Salutogenic Perspective on Adverse Experiences. The Curvilinear Relationship of Adversity and Well-Being. Jan Höltge et al. European Journal of Health Psychology (2018), 25, pp. 53-69. https://www.bipartisanalliance.com/2018/08/research-has-predominantly-focused-on.html

Monday, November 4, 2019

Are Daylight Saving Time Changes Bad for the Brain?

Are Daylight Saving Time Changes Bad for the Brain? Beth A. Malow et al. JAMA Neurol., November 4, 2019. doi:10.1001/jamaneurol.2019.3780

Excerpts (full text, map, references, etc., in the DOI above):

Daylight saving time (DST) begins on the second Sunday in March and ends on the first Sunday in November. During this period, clocks in most parts of the United States are set 1 hour ahead of standard time. First introduced in the United States in 1918 to mimic policies already being used in several European countries during World War I, DST was unpopular and abolished as a federal policy shortly after World War I ended.1 It was reinstated in 1942 during World War II but covered the entire year and was called “war time.” After World War II ended, it became a local policy. Varying DST policies across cities and states led to the Uniform Time Act of 1966, which mandated DST starting on the last Sunday in April until the last Sunday in October. States were allowed to exempt themselves from observing DST (including parts of the state that were within a different time zone [eg, Michigan and Indiana]).

The US Department of Transportation (DOT) is responsible for enforcing and evaluating DST. In 1974, DOT reported that the potential benefits to energy conservation, traffic safety, and reductions in violent crime were minimal.2 In 2008, the US Department of Energy assessed the potential effects of an extended DST on national energy consumption and found a reduction in total primary energy consumption of 0.02%. The DOT is currently reviewing the literature associated with DST in response to a request from the US House Committee on Energy and Commerce.

Since 2015, multiple states have proposed legislation to change their observance of DST (Figure).1,2 These efforts include proposals to exempt a state from DST observance, which is allowable under existing law, and proposals that would establish permanent DST, which would require Congress to amend the Uniform Time Act of 1966.

[Figure]

State Legislation Related to Daylight Saving Time

Map of the United States depicting current practices or legislation pending as of August 2019.1,2 Note the exception of the Navajo Nation in Arizona, which participates in the daylight saving time (DST) transition. Most states have either adopted permanent DST or standard time (ST) or have legislation being considered. While Indiana does not have DST legislation being considered, it is considering legislation in which the entire state would be located within the central time zone.

This Viewpoint reviews data associated with the DST transition. The effects of permanent DST have received less attention and are beyond the scope of this review.

Clinical Implications

The transition to DST has been associated with health consequences, including effects on cerebrovascular and cardiovascular function. The rate of ischemic stroke was significantly higher during the first 2 days after the DST transition, with women, older individuals, and patients with malignancy showing increased susceptibility.3 A meta-analysis of several studies including more than 100 000 participants documented a modest (about 5%) increased risk of acute myocardial infarction in the week after the DST spring transition.4 This increased risk may be associated with the effects of acute partial sleep deprivation, changes in sympathetic activity with increased heart rate and blood pressure, and the release of proinflammatory cytokines.
Mixed-effect models found the DST transition to be negatively associated with self-reported life satisfaction scores, that is, with individual well-being.5 The effect was more pronounced for men and for those in full-time employment. In a survey of sleep patterns in 55 000 participants, adjustment to the autumn transition was easier, whereas adjustment in the spring was more difficult; participants reported lower quality of sleep for up to 2 weeks after the spring transition.6
Time-use data (eg, how individuals spent their time) from the week before and after the transition to DST showed that the transition resulted in an average of 15 to 20 fewer minutes of sleep.1 High school students studied during the DST transition showed reduced weeknight sleep duration (approximately 30 minutes), as measured by actigraphy.7 The average sleep duration was 7 hours 51 minutes on pre-DST transition weeknights and 7 hours 19 minutes on post-DST weeknights. In addition, longer reaction times, increased lapses in vigilance, and increased daytime sleepiness were documented. This study involved only 40 students and was limited to the week following the DST transition; for context, an American Academy of Sleep Medicine consensus statement has recommended 8 to 10 hours of sleep for adolescents on a regular basis.8 These recommendations were based on a detailed literature review that documented adverse effects of chronic sleep loss on attention, behavior, learning problems, depression, and self-harm. Additional studies will be needed to document whether transitions to DST have longer-term associations with adolescent sleep and contribute to adverse effects.

Genetics of Circadian Disruption

The negative health outcomes associated with the DST transition may be associated with disruptions in the underlying genetic mechanisms that contribute to the expression of the circadian clock and its behavioral manifestations in neurology (ie, chronotype).9 It is well established that genetic factors help to regulate the sleep-wake cycle in humans by encoding the circadian clock, which is an autoregulatory feedback loop. When sleep time shifts there is global disruption in peripheral gene expression, and even the short-term sleep deprivation that occurs following the transition to DST may alter the epigenetic and transcriptional profile of core circadian clock genes.10 While it is unclear how disruptive a 1-hour time change is to otherwise healthy individuals, it is possible that individuals with extreme manifestations of chronotype or circadian rhythm sleep-wake disorders, neurological disorders, or children and adolescents whose brains are still developing are more susceptible to the adverse health effects that occur following the DST transition.

Conclusions

Transitions to DST have documented detrimental associations with the brain, most notably ischemic stroke; the risk of myocardial infarction and well-being are also affected. Lower quality of sleep, shorter sleep duration, and decreased psychomotor vigilance have also been reported. Additional studies are needed to understand the causes of these detrimental effects and the roles of sleep deprivation and circadian disruption. Based on these data, we advocate for the elimination of transitions to DST.

From 2018... A Simple Combinatorial Model of Technological Change that Explains the Industrial Revolution

From 2018... A Simple Combinatorial Model of World Economic History. Roger Koppl, Abigail Devereaux, Jim Herriot, Stuart Kauffman. Nov 2018. arXiv:1811.04502

Abstract: We use a simple combinatorial model of technological change to explain the Industrial Revolution. The Industrial Revolution was a sudden large improvement in technology, which resulted in significant increases in human wealth and life spans. In our model, technological change is combining or modifying earlier goods to produce new goods. The underlying process, which has been the same for at least 200,000 years, was sure to produce a very long period of relatively slow change followed with probability one by a combinatorial explosion and sudden takeoff. Thus, in our model, after many millennia of relative quiescence in wealth and technology, a combinatorial explosion created the sudden takeoff of the Industrial Revolution.
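The mechanism is easy to see in a toy simulation. The sketch below is not the authors' model, just a minimal Python illustration of the qualitative claim, with made-up parameters: if each period a small fraction of the possible pairwise combinations of existing goods yields a new good, growth is imperceptible for a long stretch and then takes off.

# Toy sketch of combinatorial takeoff (parameters are invented):
# each period, expected new goods = p * (number of pairs of existing goods).
p = 1e-3          # hypothetical success rate per pairwise combination
goods = 20.0      # hypothetical initial stock of goods
history = []
for t in range(100):
    history.append(goods)
    goods += p * goods * (goods - 1) / 2   # expected successful combinations

for t in (0, 25, 50, 75, 90, 99):
    print(f"period {t:3d}: {history[t]:12.1f} goods")

The long quiescence followed by a sudden, accelerating takeoff mirrors the "combinatorial explosion" the abstract describes.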

Men speak more abstractly than women; gender differences were larger for older adults than for teenagers, suggesting that gender differences in communicative abstraction may be reinforced by one’s experiences

Gender differences in communicative abstraction. Joshi, Priyanka D., Wakslak, Cheryl J., Appel, Gil, Huang, Laura. Journal of Personality and Social Psychology, Oct 14, 2019. https://psycnet.apa.org/doiLanding?doi=10.1037%2Fpspa0000177

Abstract: Drawing on construal level theory, which suggests that experiencing a communicative audience as proximal rather than distal leads speakers to frame messages more concretely, we examine gender differences in linguistic abstraction. In a meta-analysis of prior studies examining the effects of distance on communication, we find that women communicate more concretely than men when an audience is described as being psychologically close. These gender differences in linguistic abstraction are eliminated when speakers consider an audience whose distance has been made salient (Study 1). In studies that follow, we examine gender differences in linguistic abstraction in contexts where the nature of the audience is not specified. Across a written experimental context (Study 2), a large corpus of online blog posts (Study 3), and the real-world speech of congressmen and congresswomen (Study 4), we find that men speak more abstractly than women. These gender differences in speech abstraction continue to emerge when subjective feelings of power are experimentally manipulated (Study 5). We believe that gender differences in linguistic abstraction are the result of several interrelated processes—including but not limited to social network size and homogeneity, communication motives involving seeking proximity or distance, perceptions of audience homogeneity and distance, as well as experience of power. In Study 6, we find preliminary support for mediation of gender differences in linguistic abstraction by women’s tendency to interact in small social networks. We discuss implication of these gender differences in communicative abstraction for existing theory and provide suggestions for future research.

General Discussion

Across a series of six studies, we find that men communicate more abstractly than women. We find that gender differences in communicative abstraction persist across experimental (Studies 1, 2, 5, and 6) and field (Studies 3 and 4) contexts. The effects conceptually replicate across various measures of abstraction, including emphasizing desirability vs. feasibility (Study 2), using more concrete words (Studies 3 and 4), and adopting higher levels of action identification (Studies 5 and 6).

Effects ranged from small to moderate across studies, with larger effect sizes in our controlled laboratory-style studies than our archival data ones. The former studies allowed for greater control, constraining the topic of communication and potential ways in which a participant was able to communicate. They largely also used paradigms that have been specifically designed in prior work to capture variation in communicative abstraction. On the other hand, our archival data studies gave us an opportunity to examine gender differences in contexts that were much less constrained in terms of numerous factors such as the topic of conversation, number of words, intended audience, length of speech, and overall purpose of conversation. Given the size of the corpora involved, they also relied on an automated method for coding abstraction, which gives up some amount of precision compared to hand coding (see Johnson et al., 2019, for a related discussion). Thus, it is not especially surprising to find smaller effects in these contexts.
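For readers curious what automated abstraction coding looks like in practice, here is a toy Python sketch in the spirit of lexicon-based approaches (it is not the authors' pipeline, and the mini-lexicon and its ratings are invented): each word is looked up in a concreteness lexicon, and the text is scored by the mean rating of matched words, with lower scores read as more abstract.

# Toy sketch (not the authors' pipeline): score a text's concreteness as
# the mean rating of its matched words, using a tiny made-up lexicon in
# the spirit of published concreteness norms (1 = abstract, 5 = concrete).
import re

CONCRETENESS = {  # hypothetical ratings, for illustration only
    "table": 4.9, "run": 4.0, "justice": 1.5, "freedom": 1.6,
    "coffee": 4.8, "idea": 1.8, "build": 3.9, "policy": 1.9,
}

def concreteness_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    rated = [CONCRETENESS[w] for w in words if w in CONCRETENESS]
    return sum(rated) / len(rated) if rated else float("nan")

print(concreteness_score("We drank coffee at the table"))      # high: concrete
print(concreteness_score("Justice and freedom guide policy"))  # low: abstract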

What Drives these Effects?

We suspect that the gender difference in communicative abstraction we identify emerges from a set of converging reasons, including women’s social interactions in closely knit small groups, their historical occupation of lower status roles compared to men, their desire to establish close interpersonal relationships, their caution in signaling power and judgmentalness, and their desire to establish their competence. While we do not argue for any one particular process, several of our specific findings across studies may speak to the various (potentially interrelated) processes that might underlie this effect.

For example, in Study 1, we find that when an audience’s distance is made salient, gender differences in linguistic abstraction are eliminated. The moderating role of distance is consistent with this factor playing a role in explaining gender differences in communicative abstraction. That is, women may be relatively more inclined to create proximity with others (indeed, we found this gender pattern in Study 6, although it did not negatively correlate with the use of abstraction) or to conceptualize others as proximal; emphasizing that an audience is distant may block the proximating tendency of women and minimize gender effects in communication. Also important to consider in the context of Study 1’s findings are our findings in Study 3, which showed differences in the communicative abstraction of male and female bloggers. On the one hand, this may be surprising given Study 1’s findings (given that bloggers communicate with a sizeable audience). At the same time, unlike the experimental studies on audience size which made salient the size and/or heterogeneity of the audience and likely reduced variation in perceptions of it, the blogging context preserves the opportunity for variation in perception of one’s audience. For example, female bloggers may differ from male bloggers in terms of their perception of their audience’s homogeneity, size, and similarity to themselves; although speculative, such variation may support the emergence of gender differences in communicative abstraction within the blogging context.

We also considered the role of power in explaining gender effects on communicative abstraction. Across samples, we find effects both when respondents have relatively low levels of power (e.g., students, Mturk respondents) and when they have higher levels of power (members of Congress). Indeed, even within our Congress dataset (Study 4) we find no variation in the gender effect based on relative amount of power (House of Representative members vs. Senators). This is consistent as well with results of Study 5, which experimentally manipulated power and found that this did not interact with gender. These findings, however, do not preclude a role of the subjective experience of power. That is, even when in similar positions, men and women may differ in how powerful they feel. Study 5, which found that women reported lower subjective experience of power than men when power was experimentally primed, and that this subjective experience of power mediated the effects of gender on communicative abstraction, is consistent with a role of subjective power in explaining gender differences in speech. In Study 6, however, we did not find any evidence for gender differences in subjective experience of power, and subjective experience of power did not mediate the effects of gender on communicative abstraction. This suggests at a broad level that while subjective power may play a role in some contexts (e.g., most likely ones in which subjective power is salient, as in Study 5), the routine experience of power is unlikely to be the main driver of these gender effects across variable contexts.

Indeed, in Study 6 we considered a broader set of mediators of a gender effect on abstraction. Gender differences on the measures we collected supported many of our earlier arguments based on the gender literature: women reported greater motivation than men to seek closeness in communication contexts, greater likelihood of interacting in small and homogeneous networks, and greater concerns about establishing their competence. Further, we found that the tendency of women to establish and interact in small groups mediated their tendency to use concrete speech. As mentioned earlier, we certainly don’t see these results as ruling out alternative explanations, but they do suggest the plausibility of communication audience size playing an important role.

Also thought-provoking are the findings from Study 3. In a dataset that allowed us to capture writings of adolescents as well as adults, we found that gender differences were larger for older adults than for teenagers, suggesting that gender differences in communicative abstraction may be reinforced by one’s experiences. This is broadly consistent with our argument that women and men are acculturated in a variety of ways over time that are consistent with the development of different communicative abstraction tendencies. We call for future work to continue to explore this divergence in women and men’s speech, and how these are shaped through one’s interpersonal experiences.

Lando's data — the overall positive experience of his hospitalization — didn’t match David Rosenhan’s thesis that institutions are uncaring, ineffective and even harmful places, and so they were discarded

Stanford professor who changed America with just one study was also a liar. Susannah Cahalan. NY Post, Nov 2 2019. https://nypost.com/2019/11/02/stanford-professor-who-changed-america-with-just-one-study-was-also-a-liar/

About Stanford psychology and law professor David Rosenhan and his work

[...]

His research work was also groundbreaking. In 1973, Rosenhan published the paper “On Being Sane in Insane Places” in the prestigious journal Science, and it was a sensation. The study, in which eight healthy volunteers went undercover as “pseudopatients” in 12 psychiatric hospitals across the country, discovered harrowing conditions that led to national outrage. His findings helped expedite the widespread closure of psychiatric institutions across the country, changing mental-health care in the US forever.

Fifty years later, I tried to find out how Rosenhan had convinced his subjects to go undercover as psychiatric patients and discovered a whole lot more. Yes, Rosenhan had charm. He had charisma. He had chutzpah to spare. And, as I eventually uncovered, he was also not what he appeared to be.

I stumbled across Rosenhan and his study six years ago while on a book tour for my memoir “Brain on Fire,” which chronicled my experiences with a dangerous misdiagnosis, when doctors believed that my autoimmune disorder was a serious mental illness. After my talk, a psychologist and researcher suggested that I could be considered a “modern-day pseudopatient” from Rosenhan’s famous study.

Rosenhan’s eight healthy pseudopatients allegedly each followed the same script to gain admittance to psychiatric hospitals around the country. They each told doctors that they heard voices that said, “Thud, empty, hollow.” Based on this one symptom alone, the study claimed, all of the pseudopatients were diagnosed with a mental illness — mostly schizophrenia.

And once they were labeled with a mental illness, it became impossible to prove otherwise. All eight were kept hospitalized for an average of 19 days — with the longest staying an unimaginable 52 days. They each left “against medical advice,” meaning the doctors believed that they were too sick to leave. A total of 2,100 pills — serious psychiatric drugs — were reportedly prescribed to these otherwise healthy individuals.

At the time, the collective American imagination was deeply suspicious of psychiatry and its institutions. It was the era of Ken Kesey’s “One Flew Over the Cuckoo’s Nest” and movies like “Shock Corridor” and “The Snake Pit.” Rosenhan — who was both an insider who studied abnormal psychology, and an outsider who was a psychologist rather than a psychiatrist — was the perfect person to pull back the curtain on psychiatry’s secrets.

[...]

“It all started out as a dare,” Rosenhan told a local newspaper. “I was teaching psychology at Swarthmore College, and my students were saying that the course was too conceptual and abstract. So I said, ‘OK, if you really want to know what mental patients are like, become mental patients.’ ”

Soon after that, Rosenhan went undercover for nine days at Haverford State Hospital in Haverford, Pa., in February 1969. His diary and book describe a host of indignities: soiled bathrooms without doors, inedible food, sheer boredom and ennui, rank disregard by the staff and doctors. Rosenhan even witnessed an attendant sexually assault one of the more disturbed patients. The only time when Rosenhan was truly “seen” as a human by the staff was when an attendant mistook him for a doctor.

The experience was harrowing. After nine days he pushed for a release and made sure that his undergraduate students — who were planning to follow him as undercover patients into the hospital — would not be allowed to go. Colleagues described a shaken, changed man after his experience.

I dug deeper. If his own students were forbidden from pursuing the experiment after this dismaying event, who were the others who had willingly followed in Rosenhan’s footsteps? Why did they put their mental health — even their lives — on the line for this experiment?

The further I explored, the greater my concerns. With the exception of one paper defending “On Being Sane in Insane Places,” Rosenhan never again published any studies on psychiatric hospitalization, even though this subject made him an international success.

He had also landed a lucrative book deal and had even written eight chapters, well over a hundred pages. But then Rosenhan suddenly refused to turn over the manuscript. Seven years later his publisher sued him to return his advance. Why would he have given up on the subject that made him famous?

I also started to uncover serious inconsistencies between the documents I had found and the paper Rosenhan published in Science. For example, Rosenhan’s medical record from his undercover stay at Haverford showed that he had not, as he had written in his published paper, exhibited only one symptom of “thud, empty, hollow.” Instead, he had told doctors that he put a “copper pot” up to his ears to drown out the noises and that he had been suicidal. This was a far more severe — and legitimately concerning — description of his illness than he had portrayed in his paper.

Meanwhile, I looked for the seven other pseudopatients and spent the next months of my life chasing ghosts. I hunted down rumors, pursuing one dead end after the next. I even hired a private detective, who got no further than I had.

After years of searching, I found only one pseudopatient who participated in the study and whose experience matched that of Rosenhan: Bill Underwood, who’d been a Stanford graduate student at the time.

The only other participant I discovered, Harry Lando, had a vastly different take. Lando had summed up his 19-day hospitalization at the US Public Health Service Hospital in San Francisco in one word: “positive.”

Even though he too was misdiagnosed with schizophrenia, Lando felt it was a healing environment that helped people get better.

“The hospital seemed to have a calming effect. Someone might come in agitated and then fairly quickly they would tend to calm down. It was a benign environment,” Lando, now a psychology professor at the University of Minnesota, recalled in an interview.

But instead of incorporating Lando into the study, Rosenhan dropped him from it.

Lando felt it was pretty obvious what had happened, and I agree: His data — the overall positive experience of his hospitalization — didn’t match Rosenhan’s thesis that institutions are uncaring, ineffective and even harmful places, and so they were discarded.

“Rosenhan was interested in diagnosis, and that’s fine, but you’ve got to respect and accept the data, even if the data are not supportive of your preconceptions,” Lando told me.

Rosenhan, I began to realize, may have been the ultimate unreliable narrator. And I believe it’s possible some of the other pseudopatients he mentioned in his study never existed at all.

As a result, I am now seriously questioning a study I had once admired and had originally planned to celebrate. In my new book “The Great Pretender” (Grand Central Publishing), out this week, I paint the picture of a brilliant but flawed psychologist who is likely also a fabulist.

[...]

Psychologist Peter Gray told me that he sees the work of researchers such as Zimbardo and Rosenhan as prime examples of studies that “fit our biases … There is a kind of desire to expose the problems of society but in the process cut corners or even make up data.”

This may explain Rosenhan. He saw real problems in society: The country was warehousing very sick people in horror houses pretending to be hospitals, our diagnostic systems were flawed and psychiatrists in many ways had too much power — and very little substance. He saw how psychiatric labels degraded people and how doctors see patients through the prism of their mental illness. All of this was true. In many ways, it is still true.

[...]


---

Check also Pseudopatient or pseudoscience: a reviewer's perspective. Mark Zimmerman. Journal of Nervous & Mental Disease 193(11):740-2, December 2005. https://www.researchgate.net/publication/7506090

Pattern of conditional party loyalty... While partisan loyalty is strong, it is finite: the average voter is more likely than not to vote for the co-partisan candidate until that candidate takes dissonant stances on four or more salient issues

The Limits of Partisan Loyalty. Jonathan Mummolo, Erik Peterson, Sean Westwood. Political Behavior, November 4 2019. https://link.springer.com/article/10.1007/s11109-019-09576-3

Abstract: While partisan cues tend to dominate political choice, prior work shows that competing information can rival the effects of partisanship if it relates to salient political issues. But what are the limits of partisan loyalty? How much electoral leeway do co-partisan candidates have to deviate from the party line on important issues? We answer this question using conjoint survey experiments that characterize the role of partisanship relative to issues. We demonstrate a pattern of conditional party loyalty. Partisanship dominates electoral choice when elections center on low-salience issues. But while partisan loyalty is strong, it is finite: the average voter is more likely than not to vote for the co-partisan candidate until that candidate takes dissonant stances on four or more salient issues. These findings illuminate when and why partisanship fails to dominate political choice. They also suggest that, on many issues, public opinion minimally constrains politicians.

Keywords: Party cues Public opinion Voting
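To illustrate the arithmetic behind "finite" loyalty, here is a purely hypothetical Python sketch (the baseline probability and per-issue penalty are invented, not the paper's estimates): with a high baseline probability of voting for the co-partisan and a constant penalty per dissonant salient stance, support falls below even odds at around four issues.

# Purely illustrative: with a hypothetical baseline co-partisan vote
# probability and a constant penalty per dissonant salient stance, find
# where support drops below 50% (the paper reports roughly four issues).
baseline = 0.82   # hypothetical P(vote co-partisan), no dissonant stances
penalty = 0.09    # hypothetical drop per dissonant salient issue

for k in range(7):
    p = baseline - penalty * k
    marker = "<- below even odds" if p < 0.5 else ""
    print(f"{k} dissonant stances: P = {p:.2f} {marker}")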

Making the Right First Impression: Sexual Priming Encourages Attitude Change and Self-Presentation Lies during Encounters with Potential Partners

Making the Right First Impression: Sexual Priming Encourages Attitude Change and Self-Presentation Lies during Encounters with Potential Partners. Gurit E Birnbaum, Mor Iluz, Harry Reis. Journal of Experimental Social Psychology 86 (2020) 103904. DOI: 10.1016/j.jesp.2019.103904

Abstract: Recent studies have shown that activation of the sexual system encourages enactment of relationship-initiating behaviors (Birnbaum et al., 2017). In four studies, we expand on this work to explore whether people are more inclined to lie to impress a potential partner following sexual priming. In all studies, participants were exposed to sexual stimuli (versus non-sexual stimuli) and then interacted with an opposite-sex stranger. In Study 1, unacquainted participants resolved a dilemma while each represented opposing positions. In Study 2, participants rated their preferences, and after viewing a confederate's preferences, re-rated them in a profile shown to the confederate. In Studies 3 and 4, participants reported their number of lifetime sexual partners in anonymous questionnaires and during a chat (Study 3) or while completing an online profile (Study 4). Results indicated that following sexual priming, participants were more likely to conform to the stranger's views (Studies 1 and 2) and reported fewer sexual partners during actual and potential online interactions than in the questionnaires (Study 3). Although the results of Study 4 did not replicate the findings of Study 3, they were directionally consistent with them. Overall, the findings suggest that sexual priming motivates impression management even when it involves lying.

6. General discussion

In a world of seemingly abundant mating opportunities, competing for a partner has become a challenging endeavor. In this climate, people do the best they can to attract desirable partners. Investing effort in making the right impression plays a key role in the success of their pursuit. When considering the impression they wish to leave on prospective partners, people are simultaneously motivated by the competing desires of being wanted for whom they truly are and of putting forward their best face (Ellison et al., 2006; Toma et al., 2008). The present research set out to examine when the latter motivation would prevail, pushing people to impress a potential partner even at the cost of engaging in deceptive self-presentation.

In four studies, we showed that following activation of the sexual system, people were more likely to enhance their efforts to create a favorable impression and deceptively change their self-presentation in an attempt to appear more desirable to prospective mates. Study 1 indicated that compared to participants in the control condition, participants in the sexual activation condition were more likely to outwardly express agreement with a contrary opinion advocated by an opposite-sex participant. Study 2 showed that subliminal sexual priming led participants to conform to a potential partner's preferences in various life domains. Studies 3 and 4 revealed that following sexual priming, participants were more likely to lie to a potential mate about their number of lifetime sexual partners.

Past studies have already shown that activation of the sexual system motivates people to use strategies that help them initiate a relationship with potential mates, such as disclosing personal information or providing help (Birnbaum et al., 2017; Birnbaum et al., 2019). The present research extends these studies by demonstrating the dark side of relationship initiation (or a human foible that reveals itself during this process) and pointing out when it is likely to surface. Scholars and lay persons alike have long recognized that relationship-initiating strategies include enactment of behaviors that aim not only to genuinely support long-term bonding (e.g., provision of responsiveness and help; Birnbaum, Ein-Dor, Reis, & Segal, 2014; Birnbaum et al., 2019) but also to deliberately mislead prospective partners (e.g., Rowatt et al., 1998; Toma et al., 2008). Our research speaks to one important psychological circumstance under which certain self-presentational goals may become more pronounced. Specifically, exposure to sexual cues, which activates the sexual system and induces sexual arousal, may render people more determined in their pursuit of desirable mates, encouraging them to present a shiny façade, even though presenting a distorted view of self may eventually thwart their goal of establishing a trustworthy intimate relationship.

As our research indicates, this principle appears to hold true for both men and women. Previous studies have found that men are more likely than women to take the lead in sexual and relationship initiation (Birnbaum & Laser-Brandt, 2002; Diamond, 2013; O'Sullivan & Byers, 1992). The present findings suggest that activation of the sexual system motivates human beings to connect, regardless of gender. It does so by inspiring interest in potential partners and motivating men and women to impress prospective partners. To be sure, missing desirable mating opportunities is costly for men and women alike, in the sense that when such opportunities arise, both genders, and not only men, tend to use deceptive self-presentational strategies (Lo et al., 2013; Rowatt et al., 1999).

It remains unclear, however, whether deceptive self-presentation in this context is motivated by short-term relationship goals or by self-presentational pressures per se, as both may be more salient under sexual activation. Indeed, when sexually aroused, some people may present themselves deceitfully to potential mates in order to obtain casual sexual favors (Ariely & Loewenstein, 2006). Others, in contrast, may wish to build a meaningful relationship but are induced by their insecurities or perceived competition in the dating scene to present a false façade. Indeed, people themselves may not be able to distinguish short-term (sexual) and long-term (relationship initiation) goals in the initial phases of attraction (Eastwick, Keneski, Morgan, McDonald, & Huang, 2018).

Another limitation of the present research is that the evidence for the proposed effect of sexual priming on explicit lying about previous sexual partners is not strong, given that we found a non-significant interaction in Study 4, which was better-powered than Study 3. It is possible that conforming to a potential partner's views while being sexually aroused (as indicated in Studies 1 and 2) is morally less problematic than explicitly lying to a potential partner, especially in the case of violating one's own earlier statements (Batson, Thompson, Seuferling, Whitney, & Strongman, 1999). It is also possible that Study 4 yielded weaker results than Study 3 due to less experimental realism – recall that Study 4 was conducted entirely online, whereas participants in Study 3 were present in a lab session. We also did not determine whether participants were aware of their deception or whether this was an unconscious move to be closer to a potential partner, and we did not assess the motives behind engaging in deceptive self-presentation. Future studies should investigate how these distinctive motives affect the unfolding over time of interactions that are based on deceptive communication and whether they may evolve into meaningful relationships. Further research should explore whether people also tend to see what they want to see while being sexually aroused and thus are less likely to detect inauthenticity in potential partners.

In conclusion, activation of the sexual system may initiate a process of endeavoring to become more attractive to a stranger, a process that may eventually build an emotionally and sexually satisfying connection between previously unacquainted people (Birnbaum, 2018; Birnbaum & Finkel, 2015; Birnbaum et al., 2019; Birnbaum & Reis, 2019). In everyday life, the attractiveness of a potential partner or the sexy ambience of a first date may lead people to disclose personal information about themselves in order to initiate a potential relationship with a desired mate (Birnbaum et al., 2017). Our research suggests that the content of this disclosure is less likely to reflect the true self following sexual activation, as sexual arousal may make people more focused on saying what needs to be said to create a positive impression while being less cognizant of the potential long-term costs of this tendency. Our research also underscores the dual potential of the sexual system for eliciting behavior that may facilitate relationship initiation while at the same time undermining its authenticity. Whether sexual arousal heightens perceptions of a prospective partner's mate value, creates a state of urgency, or instills a sense of partner scarcity is a question for future research.

An RT/fMRI study showed a single cluster of activity that pertains to logical negation, distinct from clusters that were activated by numerical comparison and from the traditional language regions

Logical negation mapped onto the brain. Yosef Grodzinsky et al. Brain Structure and Function, November 4 2019. https://link.springer.com/article/10.1007/s00429-019-01975-w

Abstract: High-level cognitive capacities that serve communication, reasoning, and calculation are essential for finding our way in the world. But whether and to what extent these complex behaviors share the same neuronal substrate are still unresolved questions. The present study separated the aspects of logic from language and numerosity—mental faculties whose distinctness has been debated for centuries—and identified a new cytoarchitectonic area as correlate for an operation involving logical negation. A novel experimental paradigm that was implemented here in an RT/fMRI study showed a single cluster of activity that pertains to logical negation. It was distinct from clusters that were activated by numerical comparison and from the traditional language regions. The localization of this cluster was described by a newly identified cytoarchitectonic area in the left anterior insula, ventro-medial to Broca’s region. We provide evidence for the congruence between the histologically and functionally defined regions on multiple measures. Its position in the left anterior insula suggests that it functions as a mediator between language and reasoning areas.

Keywords: Language Logic Numerosity Functional neuroanatomy Functional neuroimaging Cytoarchitecture Brain mapping Negation Sentence verification Left anterior insula Modularity

Discussion 
Taken together, the anatomical and functional clusters exhibit bi-uniqueness: area Id7 is cytoarchitectonically distinct from its neighbors, and represents a new, independent cortical area of the anterior insula (Fig. 7, Movie). The functional NetNegInt coincides largely with Id7, overlaps with no other cortical region, and NetNegInt intensity correlates with RT at the individual participant level. 

At a minimum, these results allow us to conclude that there is a single, anatomically and functionally cohesive core area involved in negation—Id7/NetNegInt. It is distinct from areas 44 and 45, long believed to support syntax, and from areas supporting core compositional semantic processes in the left temporal pole (Del Prato and Pylkkänen 2014). This distinctness and cohesiveness illustrates how relatively small elements of cognition can be neurally individuated and correlated with cytoarchitectonically defined areas. It also supports a modular view of cognitive functioning (Fodor 1983) and moreover seems to provide an answer, albeit partial, to the perennial debate about language and logic. If evidence from neuroscience bears on the debate, then Frege, Russell, and their followers were right: language and at least some aspects of logic are distinct. Finally, our results suggest that the border between the insula and Broca’s region is where language stops and logic begins.

We are not in a position to establish a connection between our results and other roles attributed to the anterior insula, such as interoception. Yet there is a difference in pattern: typically, the anterior insula is activated bilaterally (Zaccarella and Friederici 2015) and tends to co-activate with the anterior cingulate (Craig 2009; Engstrom et al. 2014), to which the left and right insulae appear to be massively connected (Mesulam and Mufson 1982) and with which they have a similar histologic makeup (Ghaziri et al. 2017). Our study documented no bilateral co-activation. Recent lesion data, moreover, relate interoceptive deficits to regions that seem to exclude the left Id7 as defined here (Salomon et al. 2018).

So what can we conclude, and where do we go from here? Our experiment demonstrates that the processing of one logical connective, ¬, has a distinct neurocognitive signature, supported by a histologically coherent piece of neural tissue, the left Id7, which lies outside the traditional language regions, between them and decision-making areas. While we believe that this set of findings provides the basis for an important argument for a language–logic dissociation, we are aware that it rests on a single set of results, one that needs to be further enriched in the same spirit. Convergent results from related explorations of other logical connectives will no doubt help to bolster our claims. For example, if experiments can be designed to successfully isolate disjunction, conjunction, and the like, and their results converge, solid foundations for a new perspective on language–logic relations would be constructed.
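As a concrete illustration of what "isolating" a connective means at the task level, here is a minimal Python sketch of the logic of a sentence-verification trial (the timing parameters are invented; this is not the study's design): negation composes with a base proposition by flipping its truth value, and the paradigm attributes the added processing cost to that single operation.

# Toy model of a verification trial with an optional negation operator.
# The RT numbers are hypothetical; only the logic (truth = not truth) is
# the operation whose neural correlate the study seeks to isolate.
def verify(base_true: bool, negated: bool) -> tuple[bool, float]:
    rt = 600.0                 # notional base verification time (ms)
    truth = base_true
    if negated:
        truth = not truth      # logical negation: p -> not p
        rt += 150.0            # notional extra cost of applying negation
    return truth, rt

print(verify(base_true=True, negated=False))   # (True, 600.0)
print(verify(base_true=True, negated=True))    # (False, 750.0)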

With this qualification, can we conclude that the philosophers were right? Gottlob Frege, in his Begriffsschrift, famously asserted that linguistic rules relate to logic as the eye compares to a microscope (van Heijenoort 1967): language is flexible, but logic is more rigid, mediating between linguistic expressions and objects suitable to reasoning. While Frege and Russell had no cognitive perspective, let alone a neurological one, we feel free to add one and to give Frege’s assertion an anatomical construal with regard to the spatial position of the left Id7: like a microscope, this area may “translate” linguistic objects into logical forms. A mediating role has already been proposed for the posteriorly adjacent middle left insula, claimed to mediate between motor planning and speech (Dronkers 1996). In a similar vein, we propose that the left Id7 mediates between the language regions and prefrontal areas engaged in reasoning (Baggio et al. 2016; Monti et al. 2007). By doing so, it seems to play a crucial role in what could be a core neural network that underlies our humanity.