Tuesday, May 15, 2018

Unexamined assumptions and unintended consequences of routine screening for depression

Unexamined assumptions and unintended consequences of routine screening for depression. Lisa Cosgrove et al. Journal of Psychosomatic Research, Volume 109, June 2018, Pages 9-11. https://doi.org/10.1016/j.jpsychores.2018.03.007

1. Assumption 1: The condition has a detectable early asymptomatic stage, but is progressive and, without early treatment, there will be worse health outcomes

2. Assumption 2: In the absence of screening, patients will not be identified and treated

3. Assumption 3: Depression treatments are effective for patients who screen positive but have not reported symptoms

4. Unintended consequence 1: overdiagnosis and overtreatment

5. Unintended consequence 2: the nocebo effect

6. Unintended consequence 3: misuse of resources

7. Conclusion
The therapeutic imperative in medicine means that we are good at rushing to do things that might “save lives” but not good at not doing, or undoing [30] (p348).

Sensible health care policy should be congruent with evidence. As Mangin astutely noted, our goodhearted desire to “do something” often undermines our ability to interrogate our assumptions and accept empirical evidence. Before implementing any screening program there must be high-quality evidence from randomized controlled trials (RCTs) that the program will result in sufficiently large improvements in health to justify both the harms incurred and the use of scarce healthcare resources.

Helping people who struggle with depression is a critically important public health issue. But screening for depression, over and above clinical observation, active listening and questioning, will lead to over-diagnosis and over-treatment, unnecessarily create illness identities in some people, and exacerbate health disparities by reducing our capacity to care for those with more severe mental health problems—the ones, often from disadvantaged groups, who need the care the most.

Research shows that “evidence-based” therapies are weak treatments. Their benefits are trivial. Most patients do not get well. Even the trivial benefits do not last.

Where Is the Evidence for “Evidence-Based” Therapy? Jonathan Shedler. Psychiatric Clinics of North America, Volume 41, Issue 2, June 2018, Pages 319-329. https://doi.org/10.1016/j.psc.2018.02.001

Buzzword. noun. An important-sounding usually technical word or phrase often of little meaning used chiefly to impress.
“Evidence-based therapy” has become a marketing buzzword. The term “evidence based” comes from medicine. It gained attention in the 1990s and was initially a call for critical thinking. Proponents of evidence-based medicine recognized that “We’ve always done it this way” is poor justification for medical decisions. Medical decisions should integrate individual clinical expertise, patients’ values and preferences, and relevant scientific research [1].

But the term evidence based has come to mean something very different for psychotherapy. It has been appropriated to promote a specific ideology and agenda. It is now used as a code word for manualized therapy—most often brief, one-size-fits-all forms of cognitive behavior therapy (CBT). “Manualized” means the therapy is conducted by following an instruction manual. The treatments are often standardized or scripted in ways that leave little room for addressing the needs of individual patients.

Behind the “evidence-based” therapy movement lies a master narrative that increasingly dominates the mental health landscape. The master narrative goes something like this: “In the dark ages, therapists practiced unproven, unscientific therapy.  Evidence-based therapies are scientifically proven and superior.” The narrative has become a justification for all-out attacks on traditional talk therapy—that is, therapy aimed at fostering self-examination and self-understanding in the context of an ongoing, meaningful therapy relationship.

Here is a small sample of what proponents of “evidence-based” therapy say in public: “The empirically supported psychotherapies are still not widely practiced. As a result, many patients do not have access to adequate treatment” (emphasis added) [2]. Note the linguistic sleight-of-hand: If the therapy is not “evidence based” (read, manualized), it is inadequate. Other proponents of “evidence-based” therapies go further in denigrating relationship-based, insight-oriented therapy: “The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment” [3]. The news media promulgate the master narrative. The Washington Post ran an article titled “Is your therapist a little behind the times?” which likened traditional talk therapy to pre-scientific medicine, when “healers commonly used ineffective and often injurious practices such as blistering, purging and bleeding.” Newsweek sounded a similar note with an article titled “Ignoring the evidence: Why do psychologists reject science?”

Note how the language leads to a form of McCarthyism. Because proponents of brief, manualized therapies have appropriated the term “evidence-based,” it has become nearly impossible to have an intelligent discussion about what constitutes good therapy. Anyone who questions “evidence-based” therapy risks being branded anti-evidence and anti-science.

One might assume, in light of the strong claims for “evidence-based” therapies and the public denigration of other therapies, that there must be extremely strong scientific evidence for their benefits. There is not. There is a yawning chasm between what we are told research shows and what research actually shows.  Empirical research actually shows that “evidence-based” therapies are ineffective for most patients most of the time. First, I discuss what empirical research really shows. I then take a closer look at troubling practices in “evidence-based” therapy research.

PART I: WHAT RESEARCH REALLY SHOWS

Research shows that “evidence-based” therapies are weak treatments. Their benefits are trivial. Most patients do not get well. Even the trivial benefits do not last.

The neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning

Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Erhan Genç, Christoph Fraenz, Caroline Schlüter, Patrick Friedrich, Rüdiger Hossiep, Manuel C. Voelkle, Josef M. Ling, Onur Güntürkün & Rex E. Jung. Nature Communications, volume 9, Article number: 1905 (2018), doi:10.1038/s41467-018-04268-8

Abstract: Previous research has demonstrated that individuals with higher intelligence are more likely to have larger gray matter volume in brain areas predominantly located in parieto-frontal regions. These findings were usually interpreted to mean that individuals with more cortical brain volume possess more neurons and thus exhibit more computational capacity during reasoning. In addition, neuroimaging studies have shown that intelligent individuals, despite their larger brains, tend to exhibit lower rates of brain activity during reasoning. However, the microstructural architecture underlying both observations remains unclear. By combining advanced multi-shell diffusion tensor imaging with a culture-fair matrix-reasoning test, we found that higher intelligence in healthy individuals is related to lower values of dendritic density and arborization. These results suggest that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning.

Patients with troublesome alcohol history had a significantly lower prevalence of cardiovascular disease events, even after adjusting for demographic and traditional risk factors, despite higher tobacco use & male sex predominance

Cardiovascular Events in Alcoholic Syndrome With Alcohol Withdrawal History: Results From the National Inpatient Sample. Parasuram Krishnamoorthy, Aditi Kalla, Vincent M. Figueredo. The American Journal of the Medical Sciences, Volume 355, Issue 5, May 2018, Pages 425-427. https://doi.org/10.1016/j.amjms.2018.01.005

Abstract

Background: Epidemiologic studies suggest reduced cardiovascular disease (CVD) events with moderate alcohol consumption. However, heavy and binge drinking may be associated with higher CVD risk. Utilizing the Nationwide Inpatient Sample, we studied the association between a troublesome alcohol history (TAH), defined as diagnoses of both chronic alcohol syndrome and an acute withdrawal history, and CVD events.

Methods: Patients >18 years with diagnoses of both chronic alcohol syndrome and acute withdrawal, coded using the International Classification of Diseases-Ninth Edition-Clinical Modification (ICD-9-CM) codes 303.9 and 291.81, were identified in the Nationwide Inpatient Sample 2009-2010 database. Demographics, including age and sex, as well as CVD event rates, were collected.

Results: Patients with TAH were more likely to be male, to have a smoking history, and to have hypertension, and less likely to have diabetes, hyperlipidemia or obesity. After multivariable adjusted regression analysis, odds of coronary artery disease, acute coronary syndrome, in-hospital death and heart failure were significantly lower in patients with TAH when compared to the general discharge patient population.

Conclusions: Utilizing a large inpatient database, patients with TAH had a significantly lower prevalence of CVD events, even after adjusting for demographic and traditional risk factors, despite higher tobacco use and male sex predominance, when compared to the general patient population.
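A minimal sketch of how such a cohort can be flagged and compared, assuming a discharge-level table with ICD-9-CM codes stored without decimal points (as in the NIS); this is an illustration, not the authors' code:

```python
# Minimal sketch of the cohort selection and adjusted comparison described
# above; the discharge-level DataFrame layout is an assumption, not the
# authors' actual NIS processing code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nis_discharges.csv")  # hypothetical: one row per discharge
dx_cols = [c for c in df.columns if c.startswith("dx")]  # diagnosis code slots

has_chronic = df[dx_cols].eq("3039").any(axis=1)      # 303.9 chronic alcohol syndrome
has_withdrawal = df[dx_cols].eq("29181").any(axis=1)  # 291.81 acute withdrawal
df["tah"] = (has_chronic & has_withdrawal).astype(int)
df = df[df["age"] > 18]

# Adjusted odds of coronary artery disease for TAH vs. other discharges.
model = smf.logit("cad ~ tah + age + sex + smoking + hypertension + diabetes",
                  data=df).fit()
print(model.params["tah"], model.conf_int().loc["tah"])  # log-odds and CI
```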

Is Accurate, Positive, or Inflated Self-perception Most Advantageous for Psychological Adjustment? Better Positive

Humberg, S., Dufner, M., Schönbrodt, F. D., Geukes, K., Hutteman, R., Kuefner, A., van Zalk, M., Denissen, J. J., Nestler, S., & Back, M. (2018). Is accurate, positive, or inflated self-perception most advantageous for psychological adjustment? A competitive test of key hypotheses. Preprint: PsyArXiv, April 15. doi:10.17605/OSF.IO/9W3BH. Final version: Journal of Personality and Social Psychology, 116(5), 835-859. http://dx.doi.org/10.1037/pspp0000204

Abstract: Empirical research on the (mal-)adaptiveness of favorable self-perceptions, self-enhancement, and self-knowledge has typically applied a classical null-hypothesis testing approach and provided mixed and even contradictory findings. Using data from five studies (laboratory and field, total N = 2,823), we employed an information-theoretic approach combined with Response Surface Analysis to provide the first competitive test of six popular hypotheses: that more favorable self-perceptions are adaptive versus maladaptive (Hypotheses 1 and 2: Positivity of self-view hypotheses), that higher levels of self-enhancement (i.e., a higher discrepancy of self-viewed and objectively assessed ability) are adaptive versus maladaptive (Hypotheses 3 and 4: Self-enhancement hypotheses), that accurate self-perceptions are adaptive (Hypothesis 5: Self-knowledge hypothesis), and that a slight degree of self-enhancement is adaptive (Hypothesis 6: Optimal margin hypothesis). We considered self-perceptions and objective ability measures in two content domains (reasoning ability, vocabulary knowledge) and investigated six indicators of intra- and interpersonal psychological adjustment. Results showed that most adjustment indicators were best predicted by the positivity of self-perceptions, there were some specific self-enhancement effects, and evidence generally spoke against the self-knowledge and optimal margin hypotheses. Our results highlight the need for comprehensive simultaneous tests of competing hypotheses. Implications for the understanding of underlying processes are discussed.
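Response Surface Analysis, the method named in the abstract, fits a full second-degree polynomial of self-viewed and objectively measured ability and reads the competing hypotheses off the surface's shape. A minimal sketch under assumed column names (the authors additionally ran an information-theoretic comparison across constrained versions of this model):

```python
# Minimal Response Surface Analysis sketch: regress an adjustment indicator
# on self-perception, objective ability, and their second-order terms, then
# inspect the coefficients that define the response surface.
# Column names are illustrative assumptions, not the authors' variable names.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("self_perception.csv")  # hypothetical dataset
rsa = smf.ols(
    "adjustment ~ self_view + ability + I(self_view**2) "
    "+ self_view:ability + I(ability**2)",
    data=data,
).fit()
# A pure positivity effect implies a positive self_view coefficient with
# negligible higher-order terms; self-enhancement effects appear along the
# line of incongruence (self_view = -ability).
print(rsa.params)
```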

---
Altogether, the SK Hypothesis (Self-Knowledge H.) was unable to compete against the other hypotheses for any of the outcome categories considered: Each analysis suggested that it was unlikely that SK effects underlie the empirical data [19]. That is, persons with accurate knowledge of their intelligence did not seem to be better adjusted than persons with less accurate self-perceptions (Allport, 1937; Higgins, 1996; Jahoda, 1958). Similarly, our findings did not support the conjecture that persons who see their intelligence slightly more positively than it really is are better adjusted (OM Hypothesis; Baumeister, 1989).


Conclusions
In the present article, we theoretically disentangled all central hypotheses on the adaptiveness of self-perceptions, highlighted the need for a simultaneous empirical evaluation of these hypotheses, presented a methodological framework to this aim, and employed it to five substantive datasets. With some exceptions, the rule “the higher self-perceived intelligence, the better adjusted” seemed to hold for most outcomes we considered. By contrast, we found that individual differences in neither the accuracy of self-perceptions nor an optimal margin of self-viewed versus real ability predicted intra- or interpersonal adjustment. Similarly, intellectual self-enhancement was largely found to be unrelated to the considered adjustment indicators, with two exceptions (i.e., SE concerning reasoning ability seemed detrimental for peer-perceived communal attributes; SE concerning vocabulary knowledge seemed beneficial for some self-perceived adjustment indicators). We hope that future research will make use of the approach outlined here to replicate and extend our results, thereby shedding more light on the intra- and interpersonal consequences of self-perceptions.

Testosterone may influence social behavior by increasing the frequency of words related to aggression, sexuality, & status, & it may alter the quality of interactions with an intimate partner by amplifying emotions via swearing

Preliminary evidence that androgen signaling is correlated with men's everyday language. Jennifer S. Mascaro et al. American Journal of Human Biology, https://doi.org/10.1002/ajhb.23136

Objectives: Testosterone (T) has an integral, albeit complex, relationship with social behavior, especially in the domains of aggression and competition. However, examining this relationship in humans is challenging given the often covert and subtle nature of human aggression and status‐seeking. The present study aimed to investigate whether T levels and genetic polymorphisms in the AR gene are associated with social behavior assessed via natural language use.

Methods: We used unobtrusive, behavioral, real‐world ambulatory assessments of men in partnered heterosexual relationships to examine the relationship between plasma T levels, variation in the androgen receptor (AR) gene, and spontaneous, everyday language in three interpersonal contexts: with romantic partners, with co‐workers, and with their children.

Results: Men's T levels were positively correlated with their use of achievement words with their children, and the number of AR CAG trinucleotide repeats was inversely correlated with their use of anger and reward words with their children. T levels were positively correlated with sexual language and with use of swear words in the presence of their partner, but not in the presence of co‐workers or children.

Conclusions: Together, these results suggest that T may influence social behavior by increasing the frequency of words related to aggression, sexuality, and status, and that it may alter the quality of interactions with an intimate partner by amplifying emotions via swearing.

The religiosity-moral self-image link was most strongly explained by personality traits and individual differences in prosociality/empathy, rather than a desirability bias; the link is minimally accounted for by impression management

Religion and moral self-image: The contributions of prosocial behavior, socially desirable responding, and personality. Sarah J. Ward, Laura A. King. Personality and Individual Differences, Volume 131, 1 September 2018, Pages 222–231. https://doi.org/10.1016/j.paid.2018.04.028

Highlights
•    The religiosity-moral self-image link was most strongly explained by prosocial traits.
•    This association was only minimally accounted for by impression management.
•    Even when under a fake lie detector, religious people still reported high moral self-image.

Abstract: Often, the high moral self-image held by religious people is viewed with skepticism. Three studies examined the contributions of socially desirable responding (SDR), personality traits, prosocial behavior, and individual differences in prosocial tendencies to the association between religiosity and moral self-image. In Studies 1 and 2 (N's = 346, 507), personality traits (agreeableness, conscientiousness) and individual differences in empathy/prosociality were the strongest explanatory variables for religiosity's association with moral self-image measures; SDR and prosocial behavior contributed more weakly to this association. In Study 3 (N = 180), the effect of a bogus pipeline manipulation on moral self-image was moderated by religiosity. Among the highly religious, moral self-image remained high even in the bogus pipeline condition. These studies show that the association between religiosity and moral self-image is most strongly explained by personality traits and individual differences in prosociality/empathy, rather than a desirability response bias.

Keywords: Religion; Morality; Moral self-image; Prosociality

Monday, May 14, 2018

Sex differences in human brain pain pathways are present from birth: More sensitivity in girls

The distribution of pain activity across the human neonatal brain is sex dependent. Madeleine Verriotis et al. NeuroImage, https://doi.org/10.1016/j.neuroimage.2018.05.030

Highlights
•    Noxious stimulation causes widespread pain related potentials in the neonatal brain.
•    This widespread pain response is more likely to occur in female babies.
•    Brain responses to touch do not differ between male and female babies.
•    Sex differences in human brain pain pathways are present from birth.

Abstract: In adults, there are differences between male and female structural and functional brain connectivity, specifically for those regions involved in pain processing. This may partly explain the observed sex differences in pain sensitivity, tolerance, and inhibitory control, and in the development of chronic pain. However, it is not known if these differences exist from birth. Cortical activity in response to a painful stimulus can be observed in the human neonatal brain, but this nociceptive activity continues to develop in the postnatal period and is qualitatively different from that of adults, partly due to the considerable cortical maturation during this time. This research aimed to investigate the effects of sex and prematurity on the magnitude and spatial distribution pattern of the long-latency nociceptive event-related potential (nERP) using electroencephalography (EEG). We measured the cortical response time-locked to a clinically required heel lance in 81 neonates born between 29 and 42 weeks gestational age (median postnatal age 4 days). The results show that heel lance results in a spatially widespread nERP response in the majority of newborns. Importantly, a widespread pattern is significantly more likely to occur in females, irrespective of gestational age at birth. This effect is not observed for short latency somatosensory waveforms in the same infants, indicating that it is selective for the nociceptive component of the response. These results suggest the early onset of a greater anatomical and functional connectivity reported in the adult female brain, and indicate the presence of pain-related sex differences from birth.

Keywords: Pain; EEG; Nociception; Sex; Neonatal; Brain
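For readers unfamiliar with event-related potentials: the nERP is an average of EEG segments time-locked to the noxious stimulus. A rough sketch of that kind of extraction with MNE-Python, not the authors' actual pipeline; the recording file, stim channel, and event code are hypothetical:

```python
# Minimal sketch of extracting a long-latency event-related potential
# time-locked to a stimulus with MNE-Python; file name and event codes
# are illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("neonate_eeg.fif", preload=True)  # hypothetical file
events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(raw, events, event_id={"heel_lance": 1},
                    tmin=-0.5, tmax=1.0, baseline=(None, 0))
evoked = epochs["heel_lance"].average()  # the event-related potential
evoked.plot_topomap(times=[0.4, 0.6])    # spatial spread of the late response
```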

Male Sexlessness is Rising, But Not for the Reasons Incels Claim

Male Sexlessness is Rising, But Not for the Reasons Incels Claim. Lyman Stone. Institute of Family Studies, May 2018. https://ifstudies.org/blog/male-sexlessness-is-rising-but-not-for-the-reasons-incels-claim

A recent terrorist attack in Toronto, which left 10 people dead, has brought global attention to the “incel” movement, which stands for “involuntarily celibate.” The term refers to a growing number of people, particularly young men, who feel shut out of any possibility for romance, and have formed a community based around mourning their celibacy, supporting each other, and, in some cases, stoking a culture of impotent bitterness and rage at the wider world. In a few cases, this rage has spilled over in the form of terrorist attacks by “incels.” While the incels’ misogyny deserves to be called out and condemned, their ideas are unlikely to just go away. As such, the question must be posed: is the incel account of modern sexual life correct or not?

Incel communities tend to believe a few key facts about modern mating practices. First, they tend to believe women have become very sexually promiscuous over time, and indeed that virtually all women are highly promiscuous. The nickname incels use for an attractive, sexually available woman is “Stacy.” Second, they believe a small number of males dominate the market for romance, and that their dominance is growing. They call these alpha-males “Chads.” Finally, they tend to argue that the market for sex is winner-take-all, with a few “Chads” conquering all the “Stacies.” The allegedly handsome and masculine Chads are helped along by social media, Tinder, and an allegedly vacuous and appearance-focused dating scene, such that modern society gives Chads excessive amounts of sex while leaving a growing number of males with no sexual partner at all. These left out men are the incels.

This view is basically wrong. But it turns out to be wrong in an interesting and informative way.

How Much Sex Are People Having?

First of all, we may wonder about the actual trends in sexual behavior. Using data from the General Social Survey (GSS), it’s possible to estimate about how often people of different groups have sex. For this article, I will focus on individuals aged 22-35 who have never been married, and particularly males within that group.

Most groups of people age 22-35 have broadly similar amounts of sex; probably something like 60-100 sexual encounters per year. Never-married people have the least sex, about 60-80 encounters per year, while ever-married people have more sex, about 70-110 encounters per year, on average. Historically, never-married men have reported higher sexual frequency than never-married women. However, in the 2014 and 2016 GSS samples, that changed: never-married men now report slightly lower sexual frequency than never-married women. This is mostly because men are reporting less sex, not because women are reporting more. Female sexual frequency is essentially unchanged since 2000. In other words, a key piece of the incel story about rising female promiscuity just isn't there.
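The GSS records sexual frequency in ordered categories rather than counts, so estimates like the ones above require mapping each category to an annual midpoint. A minimal sketch of that approach; the midpoint values and column names are my assumptions, not Stone's actual code:

```python
# Minimal sketch of annualizing GSS-style categorical frequency responses.
# The per-year midpoints below are illustrative assumptions.
import pandas as pd

SEXFREQ_TO_PER_YEAR = {
    "not at all": 0,
    "once or twice": 1.5,
    "once a month": 12,
    "2-3 times a month": 30,
    "weekly": 52,
    "2-3 per week": 130,
    "4+ per week": 208,
}

def mean_encounters(df: pd.DataFrame) -> pd.Series:
    """Mean annualized encounters by marital status and sex, ages 22-35."""
    sub = df[(df["age"] >= 22) & (df["age"] <= 35)].copy()
    sub["per_year"] = sub["sexfreq"].map(SEXFREQ_TO_PER_YEAR)
    return sub.groupby(["never_married", "sex"])["per_year"].mean()
```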

But sexual frequency may be dominated by "Chads" and "Stacies." What we really want to know is what share of these men and women have not had any sex at all. The graph below shows the share of young men and women who had no sex in the last 12 months, by sex and marital status.

[Full text and charts at the link above.]

Putting the “Sex” into “Sexuality”: Understanding Online Pornography using an Evolutionary Framework

Putting the “Sex” into “Sexuality”: Understanding Online Pornography using an Evolutionary Framework. Catherine Salmon, Maryanne L. Fisher. EvoS Journal, 2018, NEEPS XI, pp. 1-15. http://evostudies.org/evos-journal/about-the-journal/

ABSTRACT: One encounters an obvious problem when using an evolutionary framework to understand online pornography. On the one hand, theories of sex specificity in mating strategies and evolved human nature lead to the prediction that there are commonalities and universals in the content people would seek in online pornography. That is, due to the fact that men have faced a distinct set of issues over the duration of human evolution, research suggests general tendencies in mate preferences, and presumably in the types of pornography that men therefore consume. Likewise, women have dealt with sex-specific challenges during human evolutionary history, resulting in patterns of mate preferences that are reflected in the types of online pornography they consume. Consequently, although the sexes likely differ in the content they prefer, there also should be a rather limited range of material that addresses male and female evolved heritages. Looking online, however, we can immediately ascertain that this limited focus is not the case, and hence, the dilemma. There is a wide range of pornographic material available online, to the extent that we are left with no option but to agree with Rule 34: "If it exists, there is porn of it." This problem demands a solution; how can there be evolved tendencies and yet such diversity in the content of online pornography? We review who the consumers of online pornography are, how frequently they consume it, and the type of content that is most commonly consumed. Our goal is to address the issue of common sexual interests and the diversity of online pornography. We discuss not just sex-specific content but also the variety of interests that are seen within online pornography and erotic literature.

KEYWORDS: Mate Preferences, Pornography, Internet, Sex Differences, Sexual Selection

A model of the dynamics of household vegetarian and vegan rates in the U.K.: A persistent vegetarian campaign has a significantly positive effect on the rate of vegan consumption

A model of the dynamics of household vegetarian and vegan rates in the U.K. James Waters. Appetite, https://doi.org/10.1016/j.appet.2018.05.017

Abstract: Although there are many studies of determinants of vegetarianism and veganism, there have been no previous studies of how their rates in a population jointly change over time. In this paper, we present a flexible model of vegetarian and vegan dietary choices, and derive the joint dynamics of rates of consumption. We fit our model to a pseudo-panel with 23 years of U.K. household data, and find that while vegetarian rates are largely determined by current household characteristics, vegan rates are additionally influenced by their own lagged value. We solve for equilibrium rates of vegetarianism and veganism, show that rates of consumption return to their equilibrium levels following a temporary event which changes those rates, and estimate the effects of campaigns to promote non-meat diets. We find that a persistent vegetarian campaign has a significantly positive effect on the rate of vegan consumption, in answer to an active debate among vegan campaigners.

Keywords: Vegetarianism; Veganism; Food choice; Dietary change; Social influence; Animal advocacy
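The estimated dynamic, vegan rates depending on their own lagged value plus current drivers, can be sketched as a first-order difference equation. The coefficients below are illustrative assumptions, not the paper's estimates:

```python
# Minimal sketch of the lagged-rate dynamic described in the abstract:
# vegan_t = alpha + beta * vegan_{t-1} + gamma * campaign_t.
# All coefficient values are illustrative assumptions.

def simulate_vegan_rate(alpha=0.001, beta=0.8, gamma=0.002,
                        v0=0.005, campaign=1.0, years=25):
    """Iterate the vegan rate forward; with |beta| < 1 it converges to
    the equilibrium v* = (alpha + gamma * campaign) / (1 - beta)."""
    v, path = v0, []
    for _ in range(years):
        v = alpha + beta * v + gamma * campaign
        path.append(v)
    equilibrium = (alpha + gamma * campaign) / (1 - beta)
    return path, equilibrium

path, v_star = simulate_vegan_rate()
print(f"rate after 25 years: {path[-1]:.4f}; equilibrium: {v_star:.4f}")
```

With |beta| < 1 the rate returns to equilibrium after any temporary shock, which matches the paper's finding that one-off events wash out while a persistent campaign shifts the equilibrium itself.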

---
Strange... See this (Rolf Degen): 84 percent of all vegetarians return to meat https://plus.google.com/101046916407340625977/posts/JPsRvnMtbYo

The Goldilocks Placebo Effect: Placebo Effects Are Stronger When People Select a Treatment from an Optimal Number of Choices

The Goldilocks Placebo Effect: Placebo Effects Are Stronger When People Select a Treatment from an Optimal Number of Choices. Rebecca J. Hafner, Mathew P. White and Simon J. Handley. The American Journal of Psychology, Vol. 131, No. 2 (Summer 2018), pp. 175-184. http://www.jstor.org/stable/10.5406/amerjpsyc.131.2.0175

Abstract: People are often more satisfied with a choice (e.g., chocolates, pens) when the number of options in the choice set is “just right” (e.g., 10–12), neither too few (e.g., 2–4) nor too many (e.g., 30–40). We investigated this “Goldilocks effect” in the context of a placebo treatment. Participants reporting nonspecific complaints (e.g., headaches) chose one of Bach's 38 Flower Essences from a choice set of 2 (low choice), 12 (optimal choice), or 38 (full choice) options to use for a 2-week period. Replicating earlier findings in the novel context of a health-related choice, participants were initially more satisfied with the essence they selected when presented with 12 versus either 2 or 38 options. More importantly, self-reported symptoms were significantly lower 2 weeks later in the optimal (12) versus nonoptimal choice conditions (2 and 38). Because there is no known active ingredient in Bach's Flower Essences, we refer to this as the Goldilocks placebo effect. Supporting a counterfactual thinking account of the Goldilocks effect, and despite significantly fewer symptoms after 2 weeks, those in the optimal choice set condition were no longer significantly more satisfied with their choice at the end of testing. Implications for medical practice, especially patient choice, are discussed.

How Many Atheists Are There? Indirect estimate is 26%

How Many Atheists Are There? Will M. Gervais, Maxine B. Najle. Social Psychological and Personality Science, https://doi.org/10.1177/1948550617707015

Abstract: One crucible for theories of religion is their ability to predict and explain the patterns of belief and disbelief. Yet, religious nonbelief is often heavily stigmatized, potentially leading many atheists to refrain from outing themselves even in anonymous polls. We used the unmatched count technique and Bayesian estimation to indirectly estimate atheist prevalence in two nationally representative samples of 2,000 U.S. adults apiece. Widely cited telephone polls (e.g., Gallup, Pew) suggest U.S. atheist prevalence of only 3–11%. In contrast, our most credible indirect estimate is 26% (albeit with considerable estimate and method uncertainty). Our data and model predict that atheist prevalence exceeds 11% with greater than .99 probability and exceeds 20% with roughly .8 probability. Prevalence estimates of 11% were even less credible than estimates of 40%, and all intermediate estimates were more credible. Some popular theoretical approaches to religious cognition may require heavy revision to accommodate actual levels of religious disbelief.

Keywords: religion, atheism, social desirability, stigma, Bayesian estimation
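The unmatched count technique works by randomizing respondents to a control list of K innocuous items or a treatment list that adds the sensitive item; each person reports only how many items apply, so no one self-identifies. The difference in mean counts then estimates prevalence. A minimal sketch (the paper itself used Bayesian estimation; this difference-in-means version is the textbook simplification):

```python
# Minimal sketch of the unmatched count technique (list experiment).
# Control respondents count how many of K innocuous statements apply;
# treatment respondents get the same list plus the sensitive item
# (e.g., "I do not believe in God"). Only totals are reported.
import numpy as np

def uct_prevalence(control_counts, treatment_counts):
    control = np.asarray(control_counts, dtype=float)
    treatment = np.asarray(treatment_counts, dtype=float)
    estimate = treatment.mean() - control.mean()
    # Standard error of a difference in independent means.
    se = np.sqrt(control.var(ddof=1) / len(control)
                 + treatment.var(ddof=1) / len(treatment))
    return estimate, se
```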

Sunday, May 13, 2018

Elite chess players live longer than the general population and have a similar survival advantage to elite competitors in physical sports

Longevity of outstanding sporting achievers: Mind versus muscle. An Tran-Duy, David C. Smerdon, Philip M. Clarke. PLOS ONE, https://doi.org/10.1371/journal.pone.0196938

Abstract

Background: While there is strong evidence showing the survival advantage of elite athletes, much less is known about those engaged in mind sports such as chess. This study aimed to examine the overall as well as regional survival of International Chess Grandmasters (GMs) with a reference to the general population, and compare relative survival (RS) of GMs with that of Olympic medallists (OMs).

Methods: Information on 1,208 GMs and 15,157 OMs from 28 countries was extracted from publicly available data sources. The Kaplan-Meier method was used to estimate the survival rates of the GMs. A Cox proportional hazards model was used to adjust the survival for region, year at risk, age at risk and sex, and to estimate the life expectancy of the GMs. The RS rate was computed by matching each GM or OM by year at risk, age at risk and sex to the life table of the country the individual represented.

Results: The survival rates of GMs at 30 and 60 years since GM title achievement were 87% and 15%, respectively. The life expectancy of GMs at the age of 30 years (which is near the average age when they attained a GM title) was 53.6 ([95% CI]: 47.7–58.5) years, which is significantly greater than the overall weighted mean life expectancy of 45.9 years for the general population. Compared to Eastern Europe, GMs in North America (HR [95% CI]: 0.51 [0.29–0.88]) and Western Europe (HR [95% CI]: 0.53 [0.34–0.83]) had a longer lifespan. The RS analysis showed that both GMs and OMs had a significant survival advantage over the general population, and there was no statistically significant difference in the RS of GMs (RS [95% CI]: 1.14 [1.08–1.20]) compared to OMs: (RS [95% CI]: 1.09 [1.07–1.11]) at 30 years.

Conclusion: Elite chess players live longer than the general population and have a similar survival advantage to elite competitors in physical sports.
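A minimal sketch of the two survival analyses named in the Methods, using the Python lifelines library; the file and column names are assumptions for illustration, not the authors' code:

```python
# Minimal sketch of Kaplan-Meier and Cox analyses with lifelines.
# The CSV layout and column names are illustrative assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

gms = pd.read_csv("grandmasters.csv")  # hypothetical: one row per GM

# Kaplan-Meier survival measured from the year of GM title achievement.
km = KaplanMeierFitter()
km.fit(durations=gms["years_since_title"], event_observed=gms["died"])
print(km.survival_function_at_times([30, 60]))  # cf. the 87% and 15% reported

# Cox proportional hazards model adjusting for the covariates in Methods.
cox = CoxPHFitter()
cox.fit(gms, duration_col="years_since_title", event_col="died",
        formula="region + year_at_risk + age_at_risk + sex")
cox.print_summary()  # hazard ratios, e.g., by region
```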

Gender differences in Everyday Risk Taking: An Observational Study of Pedestrians in Newcastle upon Tyne

Gender differences in Everyday Risk Taking: An Observational Study of Pedestrians in Newcastle upon Tyne. Eryn O'Dowd, Thomas V Pollet. Letters on Evolutionary Behavioral Science, Vol 9, No 1 (2018). http://lebs.hbesj.org/index.php/lebs/article/view/lebs.2018.65

Abstract: Evolutionary psychologists have demonstrated that there are evolved differences in risk taking between men and women. Potentially, these also play out in everyday behaviours, such as in traffic. We hypothesised that (perceived) gender would influence use of a pedestrian crossing. In addition, we explored whether a contextual factor, the presence of daylight, could modify risk-taking behaviour. 558 pedestrians were directly observed and their use of a crossing near a Metro station in a large city in the North East of England was coded. Using logistic regression, we found evidence that women were more inclined than men to use the crossing. We found no evidence for a contextual effect of daylight or an interaction between daylight and gender on use of the crossing. We discuss the limitations and implications of this finding with reference to literature on risk taking.
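A minimal sketch of the reported logistic regression with statsmodels; the data file and column names are assumptions:

```python
# Minimal sketch of the logistic regression described in the abstract;
# column names are illustrative assumptions, not the authors' coding scheme.
import pandas as pd
import statsmodels.formula.api as smf

obs = pd.read_csv("pedestrians.csv")  # hypothetical: one row per pedestrian
# used_crossing: 1/0; gender: "female"/"male"; daylight: 1/0
model = smf.logit("used_crossing ~ gender * daylight", data=obs).fit()
print(model.summary())  # gender main effect tests the risk-taking hypothesis
```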