Friday, September 10, 2021

Higher nut intake was associated with reductions in body weight and body fat; current evidence demonstrates the concern that nut consumption contributes to increased adiposity appears unwarranted

Are fatty nuts a weighty concern? A systematic review and meta-analysis and dose–response meta-regression of prospective cohorts and randomized controlled trials. Stephanie K. Nishi, Effie Viguiliouk, Sonia Blanco Mejia, Cyril W. C. Kendall, Richard P. Bazinet, Anthony J. Hanley, Elena M. Comelli, Jordi Salas Salvadó, David J. A. Jenkins, John L. Sievenpiper. Obesity Reviews, September 8 2021. https://doi.org/10.1111/obr.13330

Summary: Nuts are recommended for cardiovascular health, yet concerns remain that nuts may contribute to weight gain due to their high energy density. A systematic review and meta-analysis of prospective cohorts and randomized controlled trials (RCTs) was conducted to update the evidence, provide a dose–response analysis, and assess differences in nut type, comparator and more in subgroup analyses. MEDLINE, EMBASE, and Cochrane were searched, along with manual searches. Data from eligible studies were pooled using meta-analysis methods. Interstudy heterogeneity was assessed (Cochran Q statistic) and quantified (I2 statistic). Certainty of the evidence was assessed by Grading of Recommendations Assessment, Development, and Evaluation (GRADE). Six prospective cohort studies (7 unique cohorts, n = 569,910) and 86 RCTs (114 comparisons, n = 5873) met eligibility criteria. Nuts were associated with lower incidence of overweight/obesity (RR 0.93 [95% CI 0.88 to 0.98] P < 0.001, “moderate” certainty of evidence) in prospective cohorts. RCTs presented no adverse effect of nuts on body weight (MD 0.09 kg, [95% CI −0.09 to 0.27 kg] P < 0.001, “high” certainty of evidence). Meta-regression showed that higher nut intake was associated with reductions in body weight and body fat. Current evidence demonstrates the concern that nut consumption contributes to increased adiposity appears unwarranted.
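For readers unfamiliar with the pooling machinery named in the methods (inverse-variance meta-analysis, Cochran's Q, the I2 statistic), here is a minimal DerSimonian–Laird sketch. It is illustrative only: the effect estimates and variances below are invented, not the trial data from the paper.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling with Cochran's Q
# and the I2 statistic, as named in the abstract's methods. Illustrative only:
# the mean differences (kg) and variances below are invented, not trial data.
import numpy as np

y = np.array([0.12, -0.05, 0.30, 0.08, -0.15])   # hypothetical per-trial mean differences
v = np.array([0.02, 0.04, 0.05, 0.01, 0.03])     # their variances (squared standard errors)

w = 1.0 / v                                       # inverse-variance (fixed-effect) weights
mu_fe = np.sum(w * y) / np.sum(w)                 # fixed-effect pooled estimate
Q = np.sum(w * (y - mu_fe) ** 2)                  # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100                 # % of variability beyond chance

C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                     # between-trial variance

w_re = 1.0 / (v + tau2)                           # random-effects weights
mu_re = np.sum(w_re * y) / np.sum(w_re)           # pooled mean difference
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"MD = {mu_re:.2f} kg, 95% CI {mu_re - 1.96*se_re:.2f} to {mu_re + 1.96*se_re:.2f}, I2 = {I2:.0f}%")
```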

4 DISCUSSION

The present systematic review and meta-analysis of nut consumption and adiposity, involving six prospective cohort studies and 86 RCTs (114 trial comparisons), did not show an increased risk of overweight/obesity or an increase in the other measures of adiposity studied in adults.

Based on the long-term findings from the prospective cohort studies, a significant inverse association was observed across the outcomes assessed. These findings align with those of the systematic review of prospective studies by Eslami and colleagues,35 suggesting that nut consumption may have a protective effect against adiposity accumulation. This is further supported by the results of the present aggregate analyses of the RCTs, which showed no causal effect of nut consumption on the reported measures of adiposity. Previous systematic reviews and meta-analyses of trials used different inclusion and exclusion criteria yet reported similar findings regarding the lack of effect of nut consumption on body weight, BMI, or waist circumference.36,37 The lack of effect of nut consumption on waist circumference is further supported by Blanco Mejia and colleagues in their systematic review and meta-analysis assessing nuts and metabolic syndrome.3

Significant heterogeneity did exist in the current analysis. While this heterogeneity could not be adequately assessed categorically for the cohorts, as there were too few cohort studies, subgroup analyses and meta-regression of the trials identified potential sources. For the trials, as in previous publications,36,37 energy balance was identified as a potential source of heterogeneity. However, in the current analysis, incorporating nuts into a dietary pattern with an overall negative energy balance, compared with a negative energy balance without nuts, favoured nuts with respect to not increasing body weight, BMI, or waist-to-hip ratio. Inclusion of nuts as part of a dietary pattern without concern for increased body weight or adiposity measures is further supported by findings from the PREDIMED trial, in which inclusion of nuts as part of a Mediterranean dietary pattern slightly reduced body weight and adiposity measures, with no significant differences compared with the Mediterranean dietary pattern with olive oil or the low-fat dietary pattern.144 A sensitivity analysis including the PREDIMED trial did not significantly affect the magnitude or direction of the current findings. In addition to energy balance, nut dose was detected as a potential effect modifier for body weight and body fat, with greater reductions observed at higher nut doses. In categorical analyses, nut doses ≥45.5 g/day indicated lower adiposity measures compared with lower doses. As nut doses of 1 to 1.5 ounces (~28 to 42.5 g) per day are often noted in dietary guidelines, as well as in the FDA qualified health claim for coronary heart disease risk reduction, this suggests that the proviso that often accompanies nut recommendations, and that appears at the end of the applicable qualified health claims ("see nutrition information for fat [and calorie] content"), with its implied message that foods high in fat and calories lead to increased adiposity, may be unwarranted.17-19 Likewise, continuous linear meta-regression identified dose-dependent relationships between nut consumption and both body weight and body fat, with nut dose inversely correlated with body weight and body fat. However, significant departures from linearity were observed for BMI, waist circumference, and waist-to-hip ratio, where the maximum protective dose appeared to be around 50 g/day based on waist-to-hip ratio, although the waist-to-hip ratio may have been confounded by the nonsignificant positive correlation observed between waist circumference and nut consumption. This positive association between nut consumption and waist circumference differs from findings in the literature, where nut and seed consumption has been associated with significantly decreased pericardial fat and trends toward decreased visceral fat,145 and monounsaturated fat intake, which is prevalent in nuts, has been shown to prevent central fat redistribution compared with carbohydrate intake.146
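The continuous dose-response meta-regression referred to above can be sketched under the simplifying assumption of a weighted linear model (the paper's models, including the tests for non-linearity, are more elaborate). The doses and effects below are invented for illustration.

```python
# Rough sketch of a linear dose-response meta-regression: trial-level
# body-weight effects regressed on nut dose, weighted by inverse variance.
# All numbers are invented; a negative slope would correspond to the reported
# inverse dose-weight relationship.
import numpy as np
import statsmodels.api as sm

dose = np.array([20.0, 30.0, 42.5, 50.0, 60.0, 75.0])     # g/day (hypothetical)
md = np.array([0.15, 0.05, 0.00, -0.10, -0.20, -0.30])    # mean differences in kg
var = np.array([0.03, 0.02, 0.04, 0.02, 0.05, 0.03])      # variances of the MDs

fit = sm.WLS(md, sm.add_constant(dose), weights=1.0 / var).fit()
print(fit.params)    # [intercept, slope]; slope < 0 -> lower body weight at higher dose
```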

4.1 Strengths and limitations

Strengths of the present systematic review and meta-analysis include its comprehensive design, comprising both prospective cohort studies and RCTs, using the GRADE approach to evaluate the certainty of evidence. The prospective cohort studies provide assessment of nut consumption over the long term in a large sample of participants in free-living conditions in relation to adiposity. The design of RCTs provides the best protection against bias; there were also a substantial number of trials identified (106 trial comparisons) for the primary outcome of body weight; the median follow-up period was 8 weeks, which allows for the assessment of a moderate duration of intervention. In addition, the meta-regression and subgroup analyses provide further insight as to various factors that have previously been hypothesized to influence the impact of nut consumption on adiposity.

These analyses are not without limitations. For the prospective cohort studies, we downgraded the certainty of the evidence for serious inconsistency in the estimates across studies for body weight change, as there was evidence of unexplained heterogeneity (I2 = 92%). The inconsistency may have been related to measurement error: there was no repeated measurement of nut intake, the food frequency questionnaires used were not specifically validated for nut intake, and adiposity measures were mainly self-reported by participants. Risk of bias was also observed for body weight change, as participants were primarily well-educated individuals, many of whom were health professionals, including university graduates from SUN and health professionals recruited for NHS, NHS II, and HPFS, and thus the findings may not be generalizable to other populations.

For the RCTs, we downgraded the certainty of evidence for serious inconsistency in the estimates, owing to unexplained heterogeneity in all outcomes assessed except BMI. Subgroup analyses indicated potential sources of heterogeneity; however, these were often observed when the covariate was unevenly distributed, and the differences in treatment effects between subgroups are unlikely to otherwise alter clinical decisions.

Weighing these strengths and limitations using GRADE, the certainty of evidence ranged from “very low” to “high.” One reason for the “very low” certainty of evidence is that the GRADE approach starts observational studies at “low” certainty. Overall, the prospective cohort studies showed mostly “moderate” certainty of evidence, and the RCTs showed “high” and “moderate” certainty in equal measure.

4.2 Potential mechanisms of action

There are several biological mechanisms that may explain the association, or more specifically the lack of association, observed between nut consumption and overweight/obesity risk and other measures of adiposity: (1) unsaturated fatty acid content, (2) satiating effect, and (3) physical structure, each related in some way to the bioavailability of nuts when consumed. Nuts are rich in unsaturated fatty acids (monounsaturated fatty acids [MUFAs] and polyunsaturated fatty acids [PUFAs]), which are suggested to be more readily oxidized147 and to have a greater thermogenic effect148 than saturated fatty acids, leading to less fat accumulation. Nuts are also rich in protein and fiber, dietary components associated with increased satiety.149-151 In addition to the protein and dietary fiber content of nuts, their physical structure may also contribute to the satiating effect, since the mastication required to mechanically reduce nuts to a particle size small enough to swallow activates signaling systems that may modify appetite sensations.152 The physical structure of nuts may also contribute to fat malabsorption, because the fat in nuts is contained within walled cellular structures that are incompletely masticated and/or digested.153-156 Thus, owing to these biological mechanisms, which may be associated with decreased bioavailability, the Atwater factors assigned to nuts (a system for determining the energy value of foods devised over a century ago) may overestimate the calories obtained by the body from nut consumption by approximately 16% to 25%, depending on the nut type and form.157-159 This may potentially explain the present findings of a protective effect of nut consumption on measures of adiposity.
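To make the Atwater overestimate concrete, a back-of-the-envelope calculation; the label value is a typical round number for a 1 oz serving and is not taken from the paper.

```python
# Illustration of what a 16-25% overestimate of metabolizable energy means
# for a nut serving. The 170 kcal label value is a generic example.
label_kcal = 170                      # labeled energy for ~28 g (1 oz) of nuts
for overestimate in (0.16, 0.25):
    absorbed = label_kcal * (1 - overestimate)
    print(f"{overestimate:.0%} overestimate -> ~{absorbed:.0f} kcal actually obtained")
# roughly 128-143 kcal obtained versus the labeled 170 kcal
```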

4.3 Practical implications

Current clinical practice guidelines already suggest the incorporation of nuts for the improvement of glycemic control and cardiovascular risk factors; however, there are often qualifiers regarding their fat content and energy density.14-16 With overweight and obesity respectively affecting 39% and 13% of adults globally, and increased adiposity being a modifiable risk factor for diabetes and cardiovascular diseases, body weight management is an important consideration in dietary and lifestyle recommendations.160 Evidence from this systematic review and meta-analysis suggests that nuts may continue to be highlighted as a nutrient-dense component of dietary patterns for their cardiometabolic benefits, without concerns of an adverse effect on weight control. Nuts are currently recommended as part of the Mediterranean, Portfolio, and DASH dietary patterns, yet despite tree nut and peanut intake increasing over the past 10 years, intake worldwide remains low at an estimated 16.7 g/day, with about 15.2 g contributed by peanuts.20 This is far below current recommendations of 1 to 1.5 ounces per day (approximately 28.3 to 42.5 g/day).6,17-19 Based on the median nut intake in the trials of the current analyses and FDA qualified health claims, a dose of 42.5 g/day of nuts could easily be integrated into a daily dietary pattern by incorporating them into meals and/or consuming them as snacks. Except for individuals with nut allergies, no increase in side effects compared with control groups was reported in any of the cohort studies or trials, suggesting that dietary patterns which incorporate nuts as a regularly consumed component are safe. Future research may further assess the impact of different varieties of nuts, the formats in which they may be consumed, and how they are incorporated into the diet.

Thursday, September 9, 2021

Average generation time is 26.9 years across the past 250,000 years, with fathers consistently older (30.7 y) than mothers (23.2 y), a disproportionate increase in female generation times over the past several thousand years

Human generation times across the past 250,000 years. Richard J. Wang et al. bioRxiv Sep 7 2021. https://doi.org/10.1101/2021.09.07.459333

Abstract: The generation times of our recent ancestors can tell us about both the biology and social organization of prehistoric humans, placing human evolution on an absolute timescale. We present a method for predicting historic male and female generation times based on changes in the mutation spectrum. Our analyses of whole-genome data reveal an average generation time of 26.9 years across the past 250,000 years, with fathers consistently older (30.7 years) than mothers (23.2 years). Shifts in sex-averaged generation times have been driven primarily by changes to the age of paternity rather than maternity, though we report a disproportionate increase in female generation times over the past several thousand years. We also find a large difference in generation times among populations, with samples from current African populations showing longer ancestral generation times than non-Africans for over a hundred thousand years, reaching back to a time when all humans occupied Africa.



Babbling in bat pups is characterized by the same eight features as babbling in human infants, including the conspicuous features reduplication and rhythmicity

Babbling in a vocal learning bat resembles human infant babbling. Ahana A. Fernandez, Lara S. Burchardt, Martina Nagy, Mirjam Knörnschild. Science, Aug 20 2021, Vol 373, Issue 6557, pp. 923-926. https://www.science.org/lookup/doi/10.1126/science.abf9279

Abstract: Babbling is a production milestone in infant speech development. Evidence for babbling in nonhuman mammals is scarce, which has prevented cross-species comparisons. In this study, we investigated the conspicuous babbling behavior of Saccopteryx bilineata, a bat capable of vocal production learning. We analyzed the babbling of 20 bat pups in the field during their 3-month ontogeny and compared its features to those that characterize babbling in human infants. Our findings demonstrate that babbling in bat pups is characterized by the same eight features as babbling in human infants, including the conspicuous features reduplication and rhythmicity. These parallels in vocal ontogeny between two mammalian species offer future possibilities for comparison of cognitive and neuromolecular mechanisms and adaptive functions of babbling in bats and humans.


The ultimatum and dictator games were developed to help identify the fundamental motivators of human behavior, typically by asking participants to share windfall endowments with other persons

If you've earned it, you deserve it: ultimatums, with Lego. Adam Oliver. Behavioural Public Policy, September 9 2021. https://www.cambridge.org/core/journals/behavioural-public-policy/article/if-youve-earned-it-you-deserve-it-ultimatums-with-lego/EB5907A941220FB244234AC8C355DBA5

Abstract: The ultimatum and dictator games were developed to help identify the fundamental motivators of human behavior, typically by asking participants to share windfall endowments with other persons. In the ultimatum game, a common observation is that proposers offer, and responders refuse to accept, a much larger share of the endowment than is predicted by rational choice theory. However, in the real world, windfalls are rare: money is usually earned. I report here a small study aimed at testing how participants react to an ultimatum game after they have earned their endowments by either building a Lego model or spending some time sorting out screws by their length. I find that the shares that proposers offer and responders accept are significantly lower than that typically observed with windfall money, an observation that is intensified when the task undertaken to earn the endowment is generally less enjoyable and thus perhaps more effortful (i.e., screw sorting compared to Lego building). I suggest, therefore, that considerations of effort-based desert are often important drivers behind individual decision-making, and that laboratory experiments, if intended to inform public policy design and implementation, ought to mirror the broad characteristics of the realities that people face.

The policy relevance

My small study of course has many limitations, several of which have already been acknowledged. The participants, for example, were chosen for their convenience, and are hardly representative of the general population. Moreover, to reiterate, some of the questions were not financially incentivized – sometimes, it is argued, after considering the merits and demerits of different methods, but nonetheless the potential problems with the approach adopted are fully appreciated.

Limitations aside, I contend that the results suggest that effort-based desert matters to people, and that if, rather than receiving windfalls, they have to earn their endowments, then, if asked, they will be willing to share, and be expected to share, a lower proportion of their endowments with others. This general conclusion applies not only to windfall versus earned endowments but also across different earnings-related tasks. For example, a task (or indeed a job) that is perceived to be generally more effortful (or less enjoyable) may provoke lower levels of generosity and less punishment for an apparent lack of generosity than those that generally require less effort. Or at least this will be the observation at face value, for if the different levels of effort are controlled for, we may find that generosity and punishment remain quite stable.

The recognition of the importance of effort-based desert leads me to propose that rewarding people for their effort sustains their effort. This was reflected in Akerlof's (1982) contention that a wage higher than the minimum necessary is met by employee effort that is higher than egoism dictates, because employees now think that employers deserve a fair return. In real work scenarios, there is a general acceptance of desert-based rewards that results in unequal distributions (Starmans et al., 2017), but, as noted above, the voluminous literature on the dictator and ultimatum games that uses windfall endowments fails to acknowledge the importance of desert. That being the case, this body of research lacks real-world policy relevance in relation to people's propensities to share their resources with others or, in the case of the ultimatum game, propensities to punish others for perceived insufficiencies in sharing, at least beyond the limited circumstances where one might experience windfalls. At most, this research offers only very general conclusions that might be relevant to policy design, principally that people often appear to be strategically self-interested when they are aware that they may be punished for blatant acts of selfishness, but, at the same time, many people like to see an element of distributional fairness over final outcomes if no party can claim property rights over an endowment.

In short, the research using windfall endowments decontextualises decision-making too much, which is a little ironic if one is interested in real-world implications, given that the essence of behavioral public policy is that context matters. Of course, the research that uses earned outcomes also in many ways departs from the circumstances that people actually face – in terms of the small study reported in this article, for instance, there are very few people who earn an income from constructing Lego models. (NB. Sorting screws might be different – quite a few participants asked me if I was paying them to tidy up my garage.) But by requiring participants to at least do something to earn their endowments the study – like those principally focussed on the dictator game summarized in Table 1 – took them one step closer to reality. The policy lesson emerging from this body of work is that people respect property rights and that there is broad recognition and acceptance of effort-based desert. Consequently, when considering an endowment that one party to an exchange has earned, the willingness of that party to share, and the tendency for other parties to punish a perceived lack of generosity by that person, are much closer to the predictions of rational choice theory than the evidence using windfall endowments, where close to no effort is expended by participants, typically implies.

More generally, for laboratory studies of human motivations to hold relevance for policy design and implementation the context of the study ought to match, as far as possible, the circumstances that people actually face. I fear that insufficient attention is sometimes paid to this basic premise. For instance, in the real world, some people suffer extreme shortages, others face moderate scarcity, and still others enjoy abundance, and different motivational forces will come to the fore to facilitate flourishing, or even survival, in these different circumstances. Behavioral experiments ought to aim to reflect these (and other) circumstances to enable their results to offer better insights into what drives people as they navigate their way through life.

Our analyses do not establish causality; the small effect sizes suggest that increased screen time is unlikely to be directly harmful (mental health, behavioral problems, academic performance, peer relationships) to 9 & 10-yo children

Paulich KN, Ross JM, Lessem JM, Hewitt JK (2021) Screen time and early adolescent mental health, academic, and social outcomes in 9- and 10- year old children: Utilizing the Adolescent Brain Cognitive Development ℠ (ABCD) Study. PLoS ONE 16(9): e0256591, Sep 8 2021. https://doi.org/10.1371/journal.pone.0256591

Abstract: In a technology-driven society, screens are being used more than ever. The high rate of electronic media use among children and adolescents begs the question: is screen time harming our youth? The current study draws from a nationwide sample of 11,875 participants in the United States, aged 9 to 10 years, from the Adolescent Brain Cognitive Development Study (ABCD Study®). We investigate relationships between screen time and mental health, behavioral problems, academic performance, sleep habits, and peer relationships by conducting a series of correlation and regression analyses, controlling for SES and race/ethnicity. We find that more screen time is moderately associated with worse mental health, increased behavioral problems, decreased academic performance, and poorer sleep, but heightened quality of peer relationships. However, effect sizes associated with screen time and the various outcomes were modest; SES was more strongly associated with each outcome measure. Our analyses do not establish causality and the small effect sizes observed suggest that increased screen time is unlikely to be directly harmful to 9-and-10-year-old children.

Discussion

These results have important implications. The lack of consistently significant interactions between screen time and sex—but often significant main effects for both screen time and sex—demonstrates that, generally, both screen time and sex predict the outcome variables, but that the effect of screen time on an outcome usually does not depend on sex, and vice versa. For the outcome measures with non-significant interaction terms but significant main effects of screen time and/or sex, screen time and sex appear to be independent predictors: the effect of either on the outcome did not depend on the other. A potential reason for this finding could be sex differences in how screens are being used. The only outcome measure showing a significant interaction, in both Part 1 and Part 2, is the number of close friends who are male. It is possible that, because males in this study tend to use screen time for video gaming—often a social activity—more than females do (refer to Table 1), screen time and sex interact such that the effect of screen time (e.g., time spent video gaming) on the number of close male friends depends on the sex of the participant, with male participants who spend more screen time gaming having more male friends.
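For concreteness, a hedged sketch of the kind of moderation model described above: an outcome regressed on screen time, sex, and their interaction, controlling for SES and race/ethnicity. The file and column names are placeholders, not the ABCD variable names.

```python
# Sketch of a screen time x sex moderation model with SES and race/ethnicity
# as covariates. Placeholder data file and column names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abcd_baseline.csv")    # hypothetical analysis file

model = smf.ols("outcome ~ screen_time * C(sex) + ses + C(race_ethnicity)",
                data=df).fit()
print(model.summary())
# A significant screen_time:C(sex) term means the effect of screen time depends
# on sex; significant main effects with a non-significant interaction match the
# pattern reported for most outcomes.
```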

Screen time—above and beyond both SES and race/ethnicity—is a significant predictor of some internalizing symptoms, behavioral problems, academic performance, sleep quality and quantity, and the strength of peer relationships for 9- to 10-year-old children, in both boys and girls. However, the effect of screen time was small (<2% of the variance explained) for all outcomes, with SES—which was a significant predictor for nearly all outcome variables of interest—accounting for much more of the variance (~5%), perhaps because parent SES contributes to nearly every facet of children's physical and mental health outcomes [28]. Taken together, our results imply that too much time spent on screens is associated with poorer mental health, behavioral health, and academic outcomes in 9- and 10-year-old children, but that the negative impact is likely not clinically harmful at this age.

The significant association between screen time and externalizing disorder symptoms was in line with previous research [13]. However, this association is not necessarily causal; for example, it has been suggested that parents/guardians of children who display externalizing disorder symptoms, along with oppositional defiance disorder and conduct disorder, are more likely to place their child in front of a screen as a distraction [29], so it is possible that externalizing disorder symptoms feed into additional screen time rather than the reverse.

The negative association between screen time and academic performance may be of some concern to parents; another group of researchers reported a similar trend in a sample of Chinese adolescents [30]. We speculate that more time dedicated to recreational screen use detracts from time spent on schoolwork and studying for exams, though this proposed explanation should be examined further. In data collection for the ABCD Study, academic screen time (e.g., using a computer to complete an academic paper) was not recorded; it is possible that academic screen time could be positively associated with academic performance, suggesting, as previous studies [22,23] point out, that the type of screen time use is more important to consider than screen time itself.

The negative association between screen time and amount of sleep has been demonstrated previously [17] and, as in the case of academic performance, it is possible that time on screens takes away from time asleep. The positive association between sleep disorder score and screen time is of interest, though how that relationship is mediated is a topic of future research. It could be that when children and adolescents struggle with sleep, they turn to electronic media as a way to distract themselves or in an attempt to lull themselves back to sleep, or that screen use contributes to delayed bedtime, as has been suggested in previous literature [17].

The lack of significant relationships between screen time and internalizing disorder symptoms (i.e., depression and anxiety) was surprising and does not align with prior findings by researchers who also used the ABCD Study to examine screen time as a predictor variable. To examine the discrepancy, we conducted a replication of their study [11], using the early-release data of 4,528 participants, less than half the sample size used in the current study. We replicated their findings closely, which suggests that the discrepancy in our results primarily arises from differences in the sample as it doubled in size. Overall, both the current study and the previous one [11] find only weak associations of screen time with internalizing problems in the baseline ABCD sample. It is possible that because internalizing disorders typically develop throughout childhood and adolescence [31,32], 9- and 10-year-old children are simply not displaying immediately noticeable internalizing symptoms.

The finding that more screen time is associated with a greater number of close friends, both male and female, is in line with previous research [21] and suggests that when on screens, adolescents are communicating with their friends via texting, social media, or video chat, and the social nature of such screen time use strengthens relationships between peers and allows them to stay connected even when apart.

The current study is not without limitations. Because participants are 9 and 10 years old, they simply are not using screens as much as their older peers; means for screen time use are low, especially for texting and social media, two aspects of screen time that may have the most impact on peer relationships and mental health outcomes [21]. The frequencies of mature gaming and viewing of R-rated movies are also low. Similarly, owing to the age of the sample, the majority of participants do not display signs of mental ill health. Follow-up interview studies conducted as the sample ages would likely be better powered, as adolescents increase their screen use and evidence more mental health issues at older ages. Beneficially, however, the longitudinal nature of the ABCD Study will allow continued study of these potential associations over the course of the participants' adolescence. Next, the measures used by the ABCD Study at baseline have some limitations. By restricting the maximum screen time label to “4+ hours” for all subsets of screen time apart from total screen time, it was not possible to examine extremes in screen time (e.g., the present data do not differentiate between four hours of texting and 15 hours). Additionally, the majority of outcome measures were evaluated through parent report rather than child self-report, and it is possible that parent evaluations are inaccurate, especially for more subtle symptoms such as internalizing problems. However, for the majority of outcome variables, parents responded to the Child Behavior Checklist, which demonstrates strong psychometric validity [33]. Additionally, parent report is preferred for assessing some outcome measures of interest; for externalizing problems and attention problems specifically, the positive illusory bias skews youth self-report toward overly positive reports of their performance in comparison to criteria that reflect actual performance [34,35].

Increasing interaction with others enhanced well-being as expected, up to some point, after which the effect of interaction quantity was reduced or became nearly negligible (but did not turn negative)

Ren, Dongning, Olga Stavrova, and Wen Wei Loh. 2021. “Nonlinear Effect of Social Interaction Quantity on Psychological Well-being: Diminishing Returns or Inverted U?.” PsyArXiv. September 8. doi:10.31234/osf.io/nm2ds

Abstract: Social contact is an important ingredient of a happy and satisfying life. But is more social contact necessarily better? While it is well-established that increasing the quantity of social interactions on the low end of its spectrum promotes psychological well-being, the effect of interaction quantity on the high end remains largely unexplored. We propose that the effect of interaction quantity is nonlinear; specifically, at high levels of interaction quantity, its positive effects may be reduced (Diminishing Returns Hypothesis) or even reversed (Inverted U Hypothesis). To test these two competing hypotheses, we conducted a series of six studies involving a total of 161,836 participants using experimental (Study 1), cross-sectional (Studies 2 & 3), daily diary (Study 4), experience sampling (Study 5), and longitudinal survey designs (Study 6). Consistent evidence emerged across the studies supporting the Diminishing Returns Hypothesis. On the low end of the interaction quantity spectrum, increasing interaction quantity enhanced well-being as expected; whereas on the high end of the spectrum, the effect of interaction quantity was reduced or became nearly negligible, but did not turn negative. Taken together, the present research provides compelling evidence that the well-being benefits of social interactions are nearly negligible after moderate quantities of interactions are achieved.
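One simple way to adjudicate the two competing hypotheses is sketched below, assuming a quadratic specification (the six studies use richer designs and models): fit a squared term and check whether any implied downturn falls within the observed range of interaction quantity. Variable and file names are placeholders.

```python
# Diminishing returns vs. inverted U: fit a quadratic and check whether the
# turning point (vertex) lies inside the observed range. Placeholder names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interactions.csv")     # hypothetical data: quantity, wellbeing

fit = smf.ols("wellbeing ~ quantity + I(quantity ** 2)", data=df).fit()
b1, b2 = fit.params["quantity"], fit.params["I(quantity ** 2)"]

if b2 < 0:
    vertex = -b1 / (2 * b2)              # quantity at which the curve would turn down
    if df["quantity"].min() <= vertex <= df["quantity"].max():
        print("Inverted U: well-being declines past", round(vertex, 1))
    else:
        print("Diminishing returns: concave, but no downturn within the observed range")
else:
    print("No concavity detected")
```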


Incels, who struggle with a lack of sexual & romantic intimacy, negative body image, shyness, & poor social skills, have a view that celibacy is a permanent state and that life is hopeless (ideology known as ‘’blackpill’’)

Stijelja, Stefan. 2021. “The Psychological Profile of Involuntary Celibates (incels): A Literature Review.” PsyArXiv. September 8. doi:10.31234/osf.io/9mutg

Abstract: This narrative review provides a qualitative synthesis of more than 40 years of research on involuntary celibacy, late sexual onset, and adult virginity. Studies suggest that Incels struggle with a lack of sexual and romantic intimacy, and that their negative body image, shyness, and poor social skills, compounded by inexperience with sexual and romantic relationships, further restrict their opportunities to build rapport with potential romantic or sexual partners. In line with life course theory, many feel as though they have missed an important developmental milestone and, consequently, feel ‘’off time’’ relative to their peers with regard to sexuality. This can lead to a view that celibacy is a permanent state and that life is hopeless, a feeling encapsulated in an ideology known as ‘’blackpill’’. Stereotypical standards of masculinity and masculine sexual scripts may further increase the sense of embarrassment and stigma among reluctant virgins. While it is important for future studies to ascertain whether these various mental health issues were present prior to or after their ‘’Inceldom’’, current results nonetheless describe a community characterized by a high prevalence of mental health problems.



Wednesday, September 8, 2021

Both parents and non-parents lie and do so to a similar extent; however, when parents are reminded of their children prior to the task, they lie less compared to a treatment without a reminder

Kajackaite, Agne, and Pawel Niszczota. 2021. “Lying (non-)parents: Being a Parent Does Not Reduce Dishonesty.” PsyArXiv. September 8. doi:10.31234/osf.io/zry9k

Abstract: Many studies point to how parenthood can affect behavior. Here, we provide a large-sample (N = 2,008) analysis of whether people with children are less likely to cheat in a private die-rolling task. Our findings suggest that both parents and non-parents lie and do so to a similar extent. However, when parents are reminded of their children prior to the task, they lie less compared to a treatment without a reminder.


Cultural similarity among coreligionists within and between countries: There are a pervasive cultural signature of religion & a role of world religions in sustaining superordinate identities that transcend geographical boundaries

Cultural similarity among coreligionists within and between countries. Cindel J. M. White, Michael Muthukrishna, and Ara Norenzayan. Proceedings of the National Academy of Sciences, September 14, 2021 118 (37) e2109650118; https://doi.org/10.1073/pnas.2109650118

Significance: Do people who affiliate with the same religious tradition share cultural traits even if they live in different countries? We found unique patterns of cultural traits across religious groups and found that members of world religions (Christianity, Islam, Judaism, Hinduism, and Buddhism) show cultural similarity among coreligionists living in different countries. People who share a particular religious tradition and level of commitment to religion were more culturally similar, both within and across countries, than those that do not, even after excluding overtly religious values. Despite their heterogeneity, religious denominations reflect superordinate cultural identities, and shared traits persist across geographic and political boundaries. These findings inform cultural evolutionary theories about the place of religion and secularity in the world’s cultural diversity.

Abstract: Cultural evolutionary theories suggest that world religions have consolidated beliefs, values, and practices within a superethnic cultural identity. It follows that affiliation with religious traditions would be reliably associated with global variation in cultural traits. To test this hypothesis, we measured cultural distance between religious groups within and between countries, using the Cultural Fixation Index (CFST) applied to the World Values Survey (88 countries, n = 243,118). Individuals who shared a religious tradition and level of commitment to religion were more culturally similar, both within and across countries, than those with different affiliations and levels of religiosity, even after excluding overtly religious values. Moreover, distances between denominations within a world religion echoed shared historical descent. Nonreligious individuals across countries also shared cultural values, offering evidence for the cultural evolution of secularization. While nation-states were a stronger predictor of cultural traits than religious traditions, the cultural similarity of coreligionists remained robust, controlling for demographic characteristics, geographic and linguistic distances between groups, and government restriction on religion. Together, results reveal the pervasive cultural signature of religion and support the role of world religions in sustaining superordinate identities that transcend geographical boundaries.

Keywords: religion, culture, cultural evolution
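The Cultural Fixation Index (CFST) used in the paper aggregates many survey items and handles multi-category responses; the toy function below shows only the core between- versus within-group variance logic for a single binary item, with invented endorsement rates.

```python
# Toy fixation-index-style distance for one binary survey item and two groups,
# in the spirit of (but much simpler than) the paper's CFST.
def fst_binary(p1: float, p2: float) -> float:
    """Two-group FST analogue for a binary trait with endorsement rates p1, p2."""
    p_bar = (p1 + p2) / 2
    h_total = 2 * p_bar * (1 - p_bar)                       # pooled "heterozygosity"
    h_within = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-group value
    return 0.0 if h_total == 0 else (h_total - h_within) / h_total

print(fst_binary(0.60, 0.65))   # culturally similar groups -> value near 0
print(fst_binary(0.20, 0.80))   # dissimilar groups -> substantially larger value
```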


In a large and diverse international sample of older adults, they found that abstinence from alcohol is associated with an increased risk for all-cause dementia

Mewton, Louise, Rachel Visontay, Nicholas Hoy, Darren Lipnicki, John D. Crawford, Ben Chun Pan Lam, Tim Slade, et al. 2021. “The Relationship Between Alcohol Use and Dementia: A Combined Analysis of Prospective, Individual-participant Data from 15 International Studies.” PsyArXiv. September 8. doi:10.31234/osf.io/7835k

Abstract

Objective: To synthesise international findings on the alcohol-dementia relationship and provide a cross-national comparison of the alcohol-dementia relationship with critical evidence for the relationship between alcohol use and dementia in under-studied populations.

Design and setting: Individual participant data meta-analysis of 15 prospective epidemiological cohort studies from countries situated in five continents. Cox regression investigated the dementia risk associated with alcohol use. Sensitivity analyses compared lifetime abstainers with former drinkers, adjusted extensively for demographic and clinical characteristics, and assessed the competing risk of death.

Participants: 24,472 community-dwelling individuals without a history of dementia at baseline and at least one follow-up dementia assessment.

Main outcome measure: All-cause dementia as determined by clinical interview.

Results: During 151,574 person-years of follow-up, there were 2,137 incident cases of dementia (14.1 per 1,000 person-years). In the combined sample, when compared with occasional drinkers (<1.3 g/day), the risk for dementia was higher for current abstainers (HR: 1.29; 95% CI: 1.13, 1.48) and lower for moderate drinkers (25 g/day to 44.9 g/day; HR: 0.79; 95% CI: 0.64, 0.98). When the combined sample was stratified by sex and gross domestic product, current abstainers had a greater risk of incident dementia when compared with light-to-moderate drinkers in both sexes and in the higher income countries. When comparing lifetime abstainers and former drinkers there were no consistent differences in dementia risk. Among current drinkers, there was no consistent evidence to suggest that the amount of alcohol consumed in later life was significantly associated with dementia risk. Adjusting for additional demographic and clinical covariates, and accounting for competing risk of death, did not substantially affect results. When analysed at the cohort level, there was considerable heterogeneity in the alcohol-dementia relationship.

Conclusions: In a large and diverse international sample of older adults, the current study found that abstinence from alcohol is associated with an increased risk for all-cause dementia. Among current drinkers, there was no consistent evidence to suggest that the amount of alcohol consumed in later life was significantly associated with dementia risk.
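A minimal sketch of the kind of Cox model described in the design section, assuming the lifelines package and placeholder column names (these are not the study's variables or covariate set):

```python
# Cox proportional hazards sketch: dementia hazard by drinking category, with
# occasional drinkers as the implicit reference group. Placeholder columns.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pooled_cohorts.csv")    # hypothetical person-level file

# Dummy-code drinking categories; omit the 'occasional' dummy so it serves as reference
df = pd.get_dummies(df, columns=["drinking_cat"], dtype=float)
covars = ["drinking_cat_abstainer", "drinking_cat_moderate", "drinking_cat_heavy",
          "age", "sex_male"]

cph = CoxPHFitter()
cph.fit(df[covars + ["followup_years", "dementia"]],
        duration_col="followup_years", event_col="dementia")
cph.print_summary()   # exp(coef) gives hazard ratios, e.g. HR > 1 for abstainers
```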


Sex differences in genetic architecture in the UK Biobank: Our results suggest the need to consider sex-aware analyses for future studies to shed light onto possible sex-specific molecular mechanisms

Sex differences in genetic architecture in the UK Biobank. Elena Bernabeu, Oriol Canela-Xandri, Konrad Rawlik, Andrea Talenti, James Prendergast & Albert Tenesa. Nature Genetics, volume 53, pages 1283–1289, Sep 7 2021. https://www.nature.com/articles/s41588-021-00912-0

Abstract: Males and females present differences in complex traits and in the risk of a wide array of diseases. Genotype by sex (GxS) interactions are thought to account for some of these differences. However, the extent and basis of GxS are poorly understood. In the present study, we provide insights into both the scope and the mechanism of GxS across the genome of about 450,000 individuals of European ancestry and 530 complex traits in the UK Biobank. We found small yet widespread differences in genetic architecture across traits. We also found that, in some cases, sex-agnostic analyses may be missing trait-associated loci and looked into possible improvements in the prediction of high-level phenotypes. Finally, we studied the potential functional role of the differences observed through sex-biased gene expression and gene-level analyses. Our results suggest the need to consider sex-aware analyses for future studies to shed light onto possible sex-specific molecular mechanisms.
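A single-variant sketch of the genotype-by-sex (GxS) interaction test that the paper runs genome-wide, with placeholder file, variable names, and covariates:

```python
# GxS interaction test for one variant: trait regressed on genotype dosage,
# sex, and their product. Placeholder file, columns, and covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pheno_geno.csv")   # hypothetical: trait, dosage (0-2), sex, age, batch

fit = smf.ols("trait ~ dosage * C(sex) + age + C(batch)", data=df).fit()
print(fit.pvalues.filter(like="dosage:"))   # p-value(s) for the interaction term
```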


Systematic Bias in the Progress of Research: The authors analyze the extent to which citing practices may be driven by strategic considerations

Systematic Bias in the Progress of Research. Amir Rubin and Eran Rubin. Journal of Political Economy, Volume 129, Number 9, September 2021. https://www.journals.uchicago.edu/doi/10.1086/715021

Abstract: We analyze the extent to which citing practices may be driven by strategic considerations. The discontinuation of the Journal of Business (JB) in 2006 for extraneous reasons serves as an exogenous shock for analyzing strategic citing behavior. Using a difference-in-differences analysis, we find that articles published in JB before 2006 experienced a relative reduction in citations of approximately 20% after 2006. Since the discontinuation of JB is unrelated to the scientific contributions of its articles, the results imply that the referencing of articles is systematically affected by strategic considerations, which hinders scientific progress.

Alex Tabarrok comments (Strategic Citing - Marginal REVOLUTION): Rubin and Rubin have a unique test of this behavior. For administrative reasons, the Journal of Business, a top journal in finance, stopped publication in 2006. Thus, after 2006, there were fewer strategic reasons to cite JOB papers even though the scientific reasons to cite these papers remained constant. The authors test this by matching articles in the JOB with articles in similar journals published in the same year and having the same number of citations in the two years following publication–thus they match on similar articles with a similar citation trajectory. What they find is that post-2006 the citation count of the JOB articles falls substantially off the expected trajectory. [graph]
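The difference-in-differences design described above can be sketched roughly as follows, assuming a hypothetical article-by-year citation panel (file, column names, and the illustrative coefficient are invented):

```python
# Difference-in-differences sketch: (JB article) x (post-2006) interaction on
# log citations, with publication-year fixed effects. All names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("article_year_citations.csv")   # hypothetical article x year panel

df["log_cites"] = np.log1p(df["citations"])
did = smf.ols("log_cites ~ is_jb * post_2006 + C(pub_year)", data=df).fit()
print(did.params["is_jb:post_2006"])   # about -0.22 would imply roughly a 20% relative drop
```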

The finding is robust to controlling for self-citations, own-journal citations, and a variety of other possibilities. The authors also show that deceased authors get fewer citations than matched living authors. For example, living Nobel prize winners get more citations than dead ones even when they were awarded the prize jointly.

[...]

---

Discussion: Additional implications of the results

In this section, we discuss implications driven by the parallels that exist between academic research and firm innovation. First, citations of patents may also be subject to strategic citations (of different sorts), which requires caution in inferences made in innovation studies. Second, we suggest that if authors of academic studies were to include more information on references cited (as done in patent applications), it could potentially benefit academic research and help reduce adverse citing practices. The finance literature has recently seen a growth in studies devoted to innovation (Lerner and Seru, 2017). Most researchers use two types of proxies to measure the innovation output of a company: the number of patents it is granted (e.g., in a given year) and the number of citations its granted patents receive following their approval.32 The disadvantage of the former proxy is that not all patents are of similar quality, so the latter is widely considered the better proxy for the scientific contribution of the firm.33 In the literature, patent citation counts are most often considered an (exogenous) outcome determined by the innovation of the firm or its CEO. However, citation counts of patents may be affected by strategic considerations of the firms citing them. Consider, for example, the relation between the decision to go public and the firm’s future innovation (Acharya and Xu, 2017; Bernstein, 2015). Once a firm becomes public, it is more visible, has more resources, and is likely to be serviced by more competent attorneys. It is possible that these facts may lead its competitors to cite the public firm’s patents more often (compared to its pre-IPO period), because after its IPO, the company is more likely to be capable of suing others for violating its intellectual property rights. Hence, if the researcher observes a higher level of citation counts in the post-IPO period, it may be due to not only a higher level of innovation in the post-IPO period but also to a change in the citing behavior of its competitors. Similarly, citing practices may change after a merger not only because of synergies (Bena and Li, 2014) but also because former rivals become cooperators, which may alter the strategic citing behavior. There is also evidence that patents of firms with overconfident CEOs obtain more citations (Hirshleifer, Low, and Teoh, 2012). It would be interesting to learn the extent to which the citations differ due to these CEOs’ preference to engage in risky innovations and the extent to which competing firms change their citing behavior because they are more wary of overconfident CEOs’ aggressiveness, which may lead to prosecution in courts. The strategic citing behavior that we uncover seems to be facilitated by the difficulty associated with monitoring it, as more trivial, easy to monitor, agency related citations, as in the case of citing editors' papers, do not seem to be pervasive in the data (see the appendix B analysis). As such, adverse citing practices of top-tier publications can benefit by borrowing from the higher level of resolution in information that currently exists in patent applications. References of patents are classified as either provided by the inventor (firm) or by the examiner of the patent. If one wants to follow the knowledge trail of the innovation process, only the inventors’ citations matter, because the examiners’ citations are added only ex-post, after the patent was actually filed (Alcacer and Gittelman, 2006). 
In academic research, the situation is similar in that cited references are not equally important for a given study. Some of the cited papers are building blocks for arguments, some yield similar conclusions, and some provide opposing interpretations. Most importantly, some papers overturn a previous result because of a possible mistake or an overlooked fact stated in that previously published paper. Similar to patent citation categorization, it could be helpful if academic authors were required to classify their references in terms of the way they were used in their research. A recent paper by Catalini, Lacetera, and Oettl (2015) suggests that even a simple characterization of references in terms of whether they are cited based on their contributions or flaws can increase the field's understanding of the merits of research articles. It is possible that if authors were to indicate their perception of their references' categories, the relevance of the cited work would become clearer, and consequently, the academic research process would improve. A reference categorization process should reduce the tendency of authors to engage in agency citations, and monitoring of the classification may become one of the important tasks of referees. Related to this, it may be worthwhile to provide some descriptive information about the references, such as the fraction of top-tier articles in the list (a high fraction may be indicative of adverse citing practices) and the number of cases in which a reference is a sole contributor to a particular point (possible evidence of negligence of others). Finally, based on our finding of increased agency citations as the number of authors increases, it may be beneficial to require the identification of the author who is responsible for the integrity of the reference list so that it relates to the appropriate previous work. For example, it may be stated that the corresponding author is the responsible entity for this issue.


 32 Kogan et al. (2017) provide evidence that a measure of market reaction to patents is able to better explain economic growth stemming from the patent than citation counts (e.g., Moser, Ohmstedt, and Rhode, 2018; Abrams, Akcigit, and Popadak, 2013). One possibility for this is that strategic citations distort the citation count measure from reflecting a patent’s scientific value.

33 Note that in academic research, the number of publications (analogous to the number of patents) is often perceived as a poor measure of an author’s contribution, and measures such as h-10 (Google Scholar) ignore publications with no citations. This raises the question of whether the benefits of having two measures for robustness, as commonly done in the innovation literature, outweigh the costs of a noisy measure that can yield different results. In fact, one could use the differences between the two measures for a better identification of the strategic aspects of the innovation process. For example, it is known that firms may issue a patent not to open a new field (which tends to lead to future citations) but rather as a boundary of scope to prevent others from pursuing inventions in a certain area. The difference between the two measures could potentially proxy for such a tendency.


Neurodualism... People Assume that the Brain Affects the Mind more than the Mind Affects the Brain, & distort claims about the brain from the wider culture to fit their dualist belief that minds and brains are distinct, interacting entities

Neurodualism: People Assume that the Brain Affects the Mind more than the Mind Affects the Brain. Jussi Valtonen, Woo-kyoung Ahn, Andrei Cimpian. Cognitive Science, September 7 2021. https://doi.org/10.1111/cogs.13034

Abstract: People commonly think of the mind and the brain as distinct entities that interact, a view known as dualism. At the same time, the public widely acknowledges that science attributes all mental phenomena to the workings of a material brain, a view at odds with dualism. How do people reconcile these conflicting perspectives? We propose that people distort claims about the brain from the wider culture to fit their dualist belief that minds and brains are distinct, interacting entities: Exposure to cultural discourse about the brain as the physical basis for the mind prompts people to posit that mind–brain interactions are asymmetric, such that the brain is able to affect the mind more than vice versa. We term this hybrid intuitive theory neurodualism. Five studies involving both thought experiments and naturalistic scenarios provided evidence of neurodualism among laypeople and, to some extent, even practicing psychotherapists. For example, lay participants reported that “a change in a person's brain” is accompanied by “a change in the person's mind” more often than vice versa. Similarly, when asked to imagine that “future scientists were able to alter exactly 25% of a person's brain,” participants reported larger corresponding changes in the person's mind than in the opposite direction. Participants also showed a similarly asymmetric pattern favoring the brain over the mind in naturalistic scenarios. By uncovering people's intuitive theories of the mind–brain relation, the results provide insights into societal phenomena such as the allure of neuroscience and common misperceptions of mental health treatments.

7 General discussion

We investigated intuitive theories of minds and brains in five studies with both lay participants and professional psychotherapists. We hypothesized that when reasoning about minds and brains, people rely on neurodualism—a hybrid intuitive theory that assimilates aspects of physicalist beliefs into pre-existing dualist intuitions, attributing more causal power to the brain over the mind than vice versa.

In all experiments and across several different tasks involving both thought experiments and naturalistic scenarios, untrained participants believed that interventions acting on the brain would affect the mind more than interventions acting on the mind would affect the brain, supporting our proposal. This causal asymmetry was strong and replicated reliably with untrained participants. Moreover, the extent to which participants endorsed popular dualism was only weakly correlated with their endorsement of neurodualism, supporting our proposal that a more complex set of beliefs is involved. In the last study, professional psychotherapists also showed evidence of endorsing neurodualism—albeit to a weaker degree—despite their scientific training and stronger reluctance, relative to lay participants, to believe that psychiatric medications affect the mind.

Our results both corroborate and extend prior findings regarding intuitive reasoning about minds and brains. Our results corroborate prior findings by showing, once again, that both laypeople and trained mental health professionals commonly hold dualistic beliefs. If their reasoning had been based on (folk versions of) a physicalist model, such as identity theory or supervenience, participants should not have expected mental events to occur in the absence of neural events. However, both lay participants and professional psychotherapists did consistently report that mental changes can occur (at least sometimes) even in situations in which no neural changes occur.

Our findings also extend prior findings by demonstrating that intuitive theories of minds and brains are considerably more complex than has previously been acknowledged. While it is widely agreed that dualistic beliefs are common (Ahn et al., 2017; Bloom, 2004; Forstmann & Burgmer, 2015; Miresco & Kirmayer, 2006; Mudrik & Maoz, 2014; Stanovich, 1989), how exactly people reason about the mind and brain in relation to each other has remained unclear. Our findings show that the fuller picture of intuitive theories is more nuanced than a mere belief that the mind and the brain are separate interacting entities. That intuitive theories can contain aspects of both popular-dualist and physicalist beliefs helps to explain why people's beliefs often seem internally inconsistent: While people often agree with the statement that the mind is not separable from the brain, they also endorse the view that the mind is not fundamentally physical (Demertzi et al., 2009). Similarly, even professional neuroscientists—who presumably endorse physicalist views—commonly discuss the brain in terms that conflict with physicalism (Greene, 2011; Mudrik & Maoz, 2014). Inconsistencies such as these are to be expected if people intuitively think of the mind as neither purely physical nor entirely independent of the brain, but rather embrace aspects of both of those views simultaneously. In fact, it is not uncommon for intuitive theories to take the form of hybrids that incorporate novel beliefs into existing theories whose original core is not lost even as the theories become increasingly complex (e.g., Hussak & Cimpian, 2019; Shtulman & Lombrozo, 2016).

Moreover, the current study sheds light on what this hybrid theory looks like. The results suggest that even when people are dualists, they perceive the brain neither as causally irrelevant to the mind nor as unresponsive to mental changes, but rather see it as a more commanding and robust causal agent than the mind. Future research will hopefully capture further subtleties in intuitive theories of minds and brains. It seems likely that if researchers look beyond dichotomous dualist/antidualist positions, increasingly fine-grained aspects of lay intuitions will become visible.

7.1 Broader implications for theory and practice

7.1.1 Relation to the popular allure of neuroscience

Our findings may help to explain the intense fascination that the general public and mass media show for neuroscience research (Beck, 2010; O'Connor, Rees, & Joffe, 2012). If the general public is reluctant to believe that changes in the mind always correspond to changes in the brain, then neuroscience findings showing that what happens in our minds also happens in our brains contradict this belief and may therefore be particularly intriguing. Neurodualism may also help explain why people find brain-related statements informative in the context of psychological explanations even when the statements are irrelevant (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008; Fernandez-Duque, Evans, Christian, & Hodges, 2015). Conceivably, the intuitive tendency to privilege causal patterns in the brain-to-mind direction (i.e., neurodualism) may bias people to perceive causal brain-to-mind connections even when none exist, which may in turn make the addition of neuroscience evidence to a psychological explanation seem informative. Also consistent with this argument, Fernandez-Duque et al. (2015) found that (popular) dualistic beliefs alone did not predict their participants' reasoning in these contexts. In future research, it would be useful to test whether endorsement of neurodualism does predict the tendency to view information about the brain as particularly explanatory even in cases where it is not.

On a different note, some authors have suggested that the allure of neuroscience explanations is not specific to beliefs about minds and brains but reflects a more general preference for reductive information. Hopkins, Weisberg, and Taylor (2016) found that across different scientific disciplines, people generally preferred explanations that referred to processes perceived as more fundamental, even when these processes were logically irrelevant to the explanation. According to this view, information about the brain may be seen as particularly informative because it is perceived as operating at the next level of analysis below psychological phenomena (Fernandez-Duque, 2017). It seems unlikely, however, that the neurodualist intuitive theory identified in the present research reduces to this general preference. Importantly, neurodualism itself is not a reductionist theory: For instance, people report that changes in mental states are only sometimes accompanied by changes in brain states (see Studies 3–5). Beliefs such as these are not easily interpreted as evidence that people are treating the terms “mind” and “brain” as referring to the same phenomenon at different levels of analysis. A more plausible account is that a neurodualist intuitive theory and the preference for reductive explanations are two independent factors contributing to the public appeal of neuroscience.

7.1.2 Implications for reasoning about mental illness and health

The current results may help to make sense of common beliefs regarding treatment efficacy in mental health. When people think that the source of a mental health issue such as depression is in the brain, they perceive psychological interventions as less likely to be helpful (Ahn et al., 2017; Deacon & Baird, 2009; Kemp et al., 2014). The belief that a psychological treatment cannot be effective if the problem is reflected in brain processes is at odds with both a physicalist view of the mind and the empirical evidence (e.g., Linden, 2006; Lozano, 2011; Deacon, 2013). These beliefs are unfortunate from a practical viewpoint as well because prognostic beliefs often predict treatment outcomes (Rutherford, Wager, & Roose, 2010). That is, pessimistic expectancies can become self-fulfilling prophecies: Neurobiological causal attributions are associated with both lower treatment expectations and poorer psychosocial treatment outcomes in depression (Schroder et al., 2020). Our findings suggest that part of the reason for these effects may lie in the intuitive theories people use for reasoning about the mind and brain. Biological causal explanations may foster pessimism about the efficacy of psychotherapy partly because of an underlying intuitive theory ascribing relatively little power to the mind over the brain.

Fortunately, targeted education about the malleability of neurobiological factors in depression can help reduce prognostic pessimism and strengthen patients’ beliefs about their own ability to regulate their moods (Lebowitz & Ahn, 2015), suggesting that these intuitions are not immutable. In future work, it would be worthwhile to investigate whether interventions that target people's intuitive theories of the relation between the mind and brain could also help mitigate the negative consequences of biological attributions for disorders such as depression.

While participants in our studies were reluctant to believe that acting on the mind can result in changes in the brain, they were more willing to accept that acting on the brain can result in changes in the mind. This may help, in part, to explain why Western societies have so enthusiastically come to favor neurobiologically centered approaches to mental illness despite people's dualistic intuitions. Pharmacological treatments have become the predominant societal response to mental health conditions over the past decades. Although it is widely agreed that an adequate response to mental distress needs to address several nonreducible levels, Western cultures have allowed “the biopsychosocial model to become the bio-bio-bio model,” in the words of a former president of the American Psychiatric Association (Sharfstein, 2005). Arguably, neither the enthusiasm nor the scale at which this approach has been implemented is easy to explain from a purely evidence-based perspective (Deacon, 2013; Whitaker & Cosgrove, 2015; UN Human Rights Council, 2017; Lacasse & Leo, 2005; Healy, 2015; Moncrieff & Cohen, 2006), and its success has been controversial at best (Danborg & Gøtzsche, 2019; Gøtzsche, Young, & Crace, 2015; Haslam & Kvaale, 2015; Hengartner, 2020; Ioannidis, 2019; Jakobsen et al., 2017; Munkholm, Paludan-Müller, & Boesen, 2019; Sohler et al., 2015). Why, then, do we continue to operate “on faith that neuroscience will eventually revolutionize mental health practice,” if “[d]ecades of extraordinary investment in biomedical research have not been rewarded with improved clinical tools or outcomes” (Deacon, 2013, p. 858)? While numerous societal and institutional factors undoubtedly affect the situation in all its complexity (e.g., Moncrieff, 2006; Whitaker & Cosgrove, 2015), from a strictly cognitive perspective it is conceivable that our intuitive theories—in particular, our willingness to believe in the brain as an asymmetrically powerful causal agent that can influence the mind—may have contributed by making the public prone to believe overstated neuroscientific claims. In a self-reinforcing cycle, the wide-scale implementation of neurobiologically centered practices likely also loops back and shapes people's intuitive theories in ways that further increase the appeal of these practices.

7.1.3 Relation to the broader historical context

The intuitive theories documented here are undoubtedly a product of the current historical context: Several authors have suggested that many cultures are undergoing a transition toward understanding mind–brain relations in more materialistic terms (e.g., Mudrik & Maoz, 2014). As scientific inquiry has progressed, we as a culture have increasingly come to believe that it is the brain that controls faculties formerly associated with the soul, such as memory, language, and emotion. If the suggestion is correct that we are in the process of intuitively ceding the mind's and/or soul's functions to material brains (Greene, 2011), it is interesting to consider what “the soul's last stands” might be—the most immaterial of our nonphysical capacities, the ones not yet outsourced to the brain.

7.2 Conclusion

It is important to keep in mind that, philosophically, the mind–body problem remains unresolved. Although materialist and physicalist views have been the working assumption of contemporary psychologists and neuroscientists, and also the prevailing position in philosophy over the past decades, this does not mean that the original mind–body problem itself has been resolved. It remains, to this day, extremely difficult to see how, if the mind is a nonphysical thing and the body is a physical thing, one could simply be the other (or how they could interact, if we are dualists). It is helpful to remember that not only the general public but also (at least some) contemporary philosophers find the claim that the mind simply is a physical thing inherently implausible (e.g., Westphal, 2016). What people think the mind is, however, and how exactly they think it is related to the brain, seem worth investigating further, for both theoretical and practical reasons.