Thursday, December 5, 2019

Cohort study of 13 588 adults without dementia at baseline: The Western dietary pattern may not contribute to cognitive decline in later life

Association of Dietary Patterns in Midlife and Cognitive Function in Later Life in US Adults Without Dementia. Jennifer L. Dearborn-Tomazos, Aozhou Wu, Lyn M. Steffen et al. JAMA Netw Open. 2019;2(12):e1916641. December 4, 2019, doi:10.1001/jamanetworkopen.2019.16641

Question  What is the association between the Western dietary pattern in adults in midlife and cognitive decline in later life?

Findings  In this cohort study of 13 588 adults without dementia at baseline, midlife dietary pattern was not associated with cognitive decline 20 years later.

Meaning  The Western dietary pattern may not contribute to cognitive decline in later life.


Abstract
Importance  The association of dietary patterns, or the combinations of different foods that people eat, with cognitive change and dementia is unclear.

Objective  To examine the association of dietary patterns in midlife with cognitive function in later life in a US population without dementia.

Design, Setting, and Participants  Observational cohort study with analysis of data collected from 1987 to 2017. Analysis was completed in January to February 2019. Community-dwelling black and white men and women from Washington County, Maryland; Forsyth County, North Carolina; Jackson, Mississippi; and suburban Minneapolis, Minnesota, participating in the Atherosclerosis Risk in Communities (ARIC) study were included.

Exposures  Two dietary pattern scores were derived from a 66-item food frequency questionnaire using principal component analysis. A Western, or unhealthy, dietary pattern was characterized by higher consumption of meats and fried foods. A so-called prudent, or healthier, dietary pattern was characterized by higher amounts of fruits and vegetables.

Main Outcomes and Measures  Results of 3 cognitive tests (Digit Symbol Substitution Test, Word Fluency Test, and Delayed Word Recall) performed at 3 points (1990-1992, 1996-1998, and 2011-2013) were standardized and combined to represent global cognitive function. The 20-year change in cognitive function was determined by tertile of diet pattern score using mixed-effect models. The risk of incident dementia was also determined by tertile of the diet pattern score.

Results  A total of 13 588 participants (7588 [55.8%] women) with a mean (SD) age of 54.6 (5.7) years at baseline were included; participants in the top third of Western and prudent diet pattern scores were considered adherent to the respective diet. Cognitive scores at baseline were lower in participants with a Western diet (z score for tertile 3 [T3], −0.17 [95% CI, −0.20 to −0.14] vs T1, 0.17 [95% CI, 0.14-0.20]) and higher in participants with a prudent diet (z score for T3, 0.09 [95% CI, 0.06-0.12] vs T1, −0.09 [95% CI, −0.12 to −0.06]). Estimated 20-year change in global cognitive function did not differ by dietary pattern (difference of change in z score for Western diet, T3 vs T1: −0.01 [95% CI, −0.05 to 0.04]; and difference of change in z score for prudent diet, T3 vs T1: 0.02 [95% CI, −0.02 to 0.06]). The risk of incident dementia did not differ by dietary pattern (Western hazard ratio for T3 vs T1, 1.06 [95% CI, 0.92-1.22]; prudent hazard ratio for T3 vs T1, 0.99 [95% CI, 0.88-1.12]).

Conclusions and Relevance  This study found that the dietary pattern of US adults at midlife was not associated with processing speed, word fluency, memory, or incident dementia in later life.


Introduction
Healthy dietary patterns may protect against dementia and mild cognitive impairment.1,2 Prior studies demonstrate that healthy dietary patterns are associated with increased brain volumes and reduced atrophy compared with less healthy dietary patterns.3,4 Although the mechanism behind a healthy diet and improved brain health are not well understood, 2 plausible mechanisms include reduced vascular injury and a reduction in Alzheimer pathology.5 A healthy dietary pattern reduces hypertension, dysglycemia, hyperlipidemia, and chronic inflammation, which may reduce brain vascular injury.2,6 Second, a healthy diet may, through reduced oxidative stress, reduce the accumulation of proteins involved in Alzheimer disease.5,7,8

Midlife dietary pattern, compared with dietary pattern in later life, may have a stronger association with cognitive decline and dementia because chronic disease or the concern for chronic disease in later life may motivate individuals to improve their diet,9 making it appear that a healthy diet is associated with poor health outcomes. At least 10 prior studies examined associations of dietary patterns later in life with cognitive decline, but far fewer prospectively investigated associations for midlife dietary patterns.9,10 In this study, we examine the association between midlife dietary patterns and cognitive change and incident dementia over 20 years. We hypothesized that a healthy diet at midlife would be associated with less cognitive decline and a lower risk of dementia.

Methods
Study Population
The Atherosclerosis Risk in Communities (ARIC) study is an observational cohort study that began in 1987, enrolling randomly selected and recruited individuals aged 45 to 64 years who were representative of the selected communities. Participants enrolled in the ARIC study were from 4 US communities (Jackson, Mississippi; Forsyth County, North Carolina; Washington County, Maryland; and suburban Minneapolis, Minnesota). A total of 15 792 participants received an initial evaluation, and these participants were re-evaluated in person every 3 years. The study is ongoing with 6 in-person visits completed to date.11 The current analysis includes information collected at visit 1 (1987-1989), visit 2 (1990-1992), visit 4 (1996-1998), and at visit 5 (2011-2013). All participants provided written informed consent. The study was approved by the institutional review boards at all participating institutions. The analysis presented is compliant with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

Exposure: Dietary Patterns
Participants completed a 66-item food frequency questionnaire at baseline (1987-1989).12 We condensed baseline questionnaire items into 20 food and beverage groups and derived 2 dietary pattern scores using principal components analysis with orthogonal rotation.13 Principal components analysis transforms possibly correlated variables (in this case, each of the 20 food groups) into a set of linearly uncorrelated variables (in this case, the 2 dietary patterns). Two distinct dietary patterns emerged with an eigenvalue greater than 2 (eTable 1 in the Supplement). The Western diet pattern explained 12% of the total variance and included higher consumption of meat, refined grains, and processed and fried foods. The so-called prudent diet pattern explained 10% of the total variance and included higher consumption of fruits and vegetables, fish, chicken, whole grains, dairy, nuts, and alcohol. Each participant received a score for each dietary pattern, indicating how closely they adhered to the Western or prudent pattern. Scores ranged from −3.96 to 14.26 (interquartile range [IQR], −1.34 to 0.77) for the Western diet pattern and −3.56 to 10.55 (IQR, −0.98 to 0.79) for the prudent diet pattern. A higher score indicated greater adherence to each particular diet pattern.
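The score derivation can be sketched in a few lines. The following is a toy illustration with synthetic data (numpy only, plain SVD-based principal components, omitting the orthogonal rotation the authors applied; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intake frequencies for 1000 participants x 20 food groups
# (synthetic; the study condensed a 66-item questionnaire into 20 groups).
X = rng.normal(size=(1000, 20))

# Standardize each food group before extracting components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components via SVD of the standardized matrix.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

# Loadings of the first two components (analogous to the "Western" and
# "prudent" patterns) and the variance explained by each.
loadings = Vt[:2]
var_explained = S[:2] ** 2 / (S ** 2).sum()

# Each participant's score on each pattern: projection onto the loadings.
scores = Xs @ loadings.T  # shape (1000, 2)

# By construction the two pattern scores are linearly uncorrelated.
corr = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
```

A higher score on a component indicates closer adherence to the food-group profile captured by that component's loadings, mirroring the interpretation of the study's pattern scores.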

Outcomes
Cognitive Change
Cognitive testing was performed at visit 2 (1990-1992), visit 4 (1996-1998), and visit 5 (2011-2013). Three tests were used in the cognitive battery: the Delayed Word Recall (DWR)14 test, the Digit Symbol Substitution (DSS) test, and the Word Fluency (WF) test.15 A test-specific z score representing cognitive function at visits 4 and 5 was calculated by subtracting the baseline population mean from the participant’s raw score and dividing the difference by the baseline population standard deviation. A global z score representing cognitive function at visits 4 and 5 was created as the mean of the 3 test-specific z scores.

Cognitive change was defined as the difference in test-specific and global z scores at each time point, estimated using random-effects linear regression models to account for the intra-individual correlation of cognitive scores.16

Dementia
Dementia was adjudicated according to an established protocol that included assessments involving 3 levels of ascertainment consisting of in-person assessments, telephone interviews of participants or informants, or surveillance based on hospital discharge codes and death certificates.17 Dementia was adjudicated in 2011 to 2013 and 2016 to 2017. Level 1 included in-person assessment of dementia using an algorithm that incorporated information from the Clinical Dementia Rating Interview; the Mini-Mental State examination; longitudinal cognitive testing at visits 2, 4, 5, and 6; a complete neuropsychological battery at visits 5 and 6; and the Functional Activities Questionnaire.18 Level 2 included participants from level 1 and, in addition, 3 other categorizations: (1) participants who met predefined criteria based on completion of the Telephone Interview for Cognitive Status–modified or the Six-Item Screener, (2) deceased persons classified as having dementia, and (3) informant interviews using the AD8 dementia screening interview, as described elsewhere.18 Level 3 included participants in level 2 in addition to individuals with dementia identified by surveillance using prior hospital discharge codes (International Classification of Diseases, Ninth Revision [ICD-9] or International Statistical Classification of Diseases and Related Health Problems, Tenth Revision [ICD-10]) or death certificate codes for dementia.17 Level 3 was used for this analysis. Level 3 ascertained a dementia status (yes or no) for all participants regardless of study visit completion.

Covariates
Covariates included demographic and lifestyle factors, clinical factors, and apolipoprotein E (APOE) ε4 status. Demographic factors included age, sex, race–study center, and education. Lifestyle factors included activity level, current smoking, current alcohol use, and total energy intake. Clinical factors included body mass index (calculated as weight in kilograms divided by height in meters squared), history of hypertension (yes or no, defined as use of hypertension medication, systolic blood pressure >140 mm Hg, or diastolic blood pressure >90 mm Hg at the baseline visit), diabetes (yes or no, defined as self-reported diabetes diagnosis by physicians, use of diabetes medication, or having fasting glucose level of 126 mg/dL or higher or a nonfasting glucose level of 200 mg/dL or higher at the baseline visit [to convert to millimoles per liter, multiply by 0.0555]), total cholesterol (fasting, mmol/L), history of coronary artery disease, and prevalent stroke through visit 2 defined based on self-report of stroke prior to visit 1 and adjudicated cases between visit 1 and visit 2.

Population Attrition
There was a 15-year gap between visit 4 and visit 5, leading to attrition, largely due to death or disability. At visits 4 and 5, respectively, 73.8% and 41.4% of the original cohort remained. This dropout is likely to be informative,19 as we found that diet scores were associated with loss to follow-up (Table 1). To account for population attrition, we imputed the missing cognitive test results at visits 4 and 5 and missing baseline covariates using multiple imputation with chained equations.20 In the imputation models, we incorporated the diet scores, all covariates, prior cognitive function measurements, and ancillary information about cognitive status collected prospectively for participants who did not attend visit 5. The ancillary cognitive information was collected from the Clinical Dementia Rating scale from informants of both living participants and deceased participants, the Telephone Interview for Cognitive Status for living participants, and hospitalization discharge codes and death certificates (ICD-9 codes). For participants who died before visit 5, we imputed cognitive function at a point 6 months prior to the date of death.21 We conducted the primary analyses with 25 sets of imputations.
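The chained-equations idea can be sketched minimally on toy data. This is a single deterministic imputation pass only; real MICE, as used in the study, adds random draws and produces multiple completed data sets (here, 25):

```python
import numpy as np

def chained_imputation(X, n_iter=10):
    """Minimal chained-equations imputer: each variable with missing
    values is regressed on all the others, and its missing entries are
    refilled from the fitted regression, cycling until stable.
    (A sketch of the idea only; proper MICE adds noise draws and is
    repeated to create several imputed data sets.)"""
    X = X.copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            # Regress column j on all other (currently filled) columns.
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            # Refill only the originally missing entries.
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

# Toy data: 3 correlated variables with ~10% of values missing at random.
rng = np.random.default_rng(2)
true = rng.normal(size=(200, 1)) @ np.ones((1, 3)) + 0.3 * rng.normal(size=(200, 3))
obs = true.copy()
obs[rng.random(obs.shape) < 0.1] = np.nan
filled = chained_imputation(obs)
```

The key property illustrated is that observed values are never altered; only the missing cells are filled in, using the correlation structure among the variables.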

Cognitive Change Modeling
In the primary analysis, we evaluated the association of the 2 dietary pattern scores by tertile with cognitive function as measured by the change in global z scores at visits 2, 4, and 5. Mixed-effect models were used to account for the correlation between repeated cognitive test measures over time. We defined a study time metric from visit 2 to the cognitive measurement. We used a linear spline for the time variable with a knot at visit 4 to address potential nonlinearity of cognitive change. We incorporated 2 random slopes, which corresponded to the 2 time-spline terms, and a random intercept, assuming an independent correlation structure. To measure the association between diet scores and cognitive change, we examined the interactions between exposure strata and the time-spline terms modeled as conditional likelihoods. We also included conditional likelihoods for age, sex, education, race–field center, and total energy intake for model 1 and APOE ε4 status, alcohol use history, smoking history, activity level, body mass index, total cholesterol, prevalent coronary heart disease, history of hypertension, diabetes, and stroke for model 2. We included the conditional likelihoods for interactions between time-splines and covariates that contributed to the slope of cognitive change for the aforementioned covariates. We estimated the mean cognitive change over 20 years by dietary score tertile, using the coefficients of the 2 time-spline terms and their interactions with diet score tertile. A linear trend was tested across the dietary tertiles using the median score of each tertile modeled as a continuous variable.
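The linear-spline time basis with a knot at visit 4 is simple to construct; the visit times in years below are approximate assumptions, not values reported by the study:

```python
import numpy as np

def time_spline(t, knot):
    """Linear spline basis with a single knot: the first column is time
    itself, the second turns on only after the knot, letting the slope
    of cognitive change differ before and after the knot (visit 4)."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([t, np.maximum(t - knot, 0.0)])

# Approximate study times in years from visit 2 (1990-1992):
# visit 2 at 0, visit 4 at ~6, visit 5 at ~21 (assumed values).
t = np.array([0.0, 6.0, 21.0])
basis = time_spline(t, knot=6.0)
# A mixed model's fixed effects multiply these two columns; a
# tertile-by-spline interaction adds the same two columns per tertile.
```

Because the second column is zero before the knot, the coefficient on the first column is the pre-visit-4 slope and the sum of both coefficients is the post-visit-4 slope.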

We performed 2 secondary analyses. In the first, we further evaluated the association of diet scores with the z score from each of the 3 individual cognitive test results (DSS, DWR, and WF). In the second, we replicated the analyses using nonimputed data. In both analyses, we applied the same methods as in the primary analysis.

Incident Dementia
We next evaluated the association of the 2 dietary pattern scores with incident dementia using Cox proportional hazard models. We adjusted for the baseline covariates age, sex, education, race–field center, and total energy intake for model 1 and APOE ε4 status, alcohol use history, smoking history, activity level, body mass index, total cholesterol, prevalent coronary heart disease, history of hypertension, diabetes, and stroke for model 2.
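A Cox model for a single binary exposure can be sketched from first principles. The toy data and grid-search maximization below are for illustration only; real analyses maximize the partial likelihood by Newton-Raphson in a survival package:

```python
import numpy as np

def cox_neg_log_pl(beta, x, time, event):
    """Negative log partial likelihood for a single covariate,
    assuming no tied event times (tie handling omitted)."""
    order = np.argsort(time)
    x, event = x[order], event[order]
    eta = beta * x
    # Log of the risk-set sum at each time: all subjects still at risk
    # (time >= current time), i.e. a reverse cumulative sum of exp(eta).
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    # Only event times contribute terms to the partial likelihood.
    return -np.sum(event * (eta - log_risk))

# Toy cohort: the exposed group (x = 1) has shorter times on average,
# i.e. a true hazard ratio of about 1/0.7 ≈ 1.43.
rng = np.random.default_rng(3)
x = np.repeat([0.0, 1.0], 300)
time = rng.exponential(scale=np.where(x == 1, 0.7, 1.0))
event = np.ones_like(x, dtype=bool)

# Crude maximization by grid search over the log hazard ratio.
grid = np.linspace(-2, 2, 401)
beta_hat = grid[np.argmin([cox_neg_log_pl(b, x, time, event) for b in grid])]
hazard_ratio = np.exp(beta_hat)
```

The exponentiated coefficient is the hazard ratio reported in Table 4; in the study the exposure is diet tertile (T3 vs T1) and further covariates enter the linear predictor.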

Statistical Analysis
The analysis for this study was completed in January to February 2019. Baseline characteristics of participants were compared by tertiles of diet score using χ2 or analysis of variance. Two-sided P < .05 was considered statistically significant. Analyses were conducted using Stata statistical software version 14.2 (StataCorp).

Results
Participant Characteristics
A total of 15 792 adults enrolled at study baseline (1987-1989) when they were aged 45 to 64 years. Of these 15 792 participants, 6538 joined the neurocognitive visit from 2011 to 2013. Because of small numbers, and in accordance with usual ARIC practice, we excluded those who were neither white nor black (48 individuals) and the black participants in the Minnesota (22 participants) and Washington County cohorts (33 participants). We further excluded 121 participants for missing dietary data and 387 participants with implausible caloric intake, defined as less than 500 or more than 3500 total calories per day for women and less than 700 or more than 4500 total calories per day for men. We also excluded 1550 participants with incomplete cognitive data at visit 2, among whom 1348 missed visit 2, and 43 participants who were missing key covariates. The analytic population included 13 588 participants.

At the baseline visit, participants in our study had a mean (SD) age of 54.6 (5.7) years, 55.8% were women, and 37.2% had at least some college education (Table 1). The average participant was overweight (mean [SD] body mass index, 27.6 [5.3]) and consumed a mean (SD) of 1629 (605) kcal/d. In all, 57.9% of participants did not complete cognitive testing at visit 5, and 28.6% of participants died between study baseline and visit 5.

Western Diet Pattern
Adherence to the Western diet pattern was defined as participants reaching the third tertile of the Western diet pattern score. Participants with a Western diet pattern had a higher rate of study attrition (Table 1) and were less likely to be women. Participants with a Western diet pattern were more likely to be from Washington County, Maryland, or Jackson, Mississippi, compared with the other 2 sites, more likely to have less than high school education, more likely to be current smokers, and less likely to engage in physical activity. Participants with a Western diet pattern were also more likely to consume a greater number of calories but were not more likely to have hypertension or diabetes.

Cognitive scores at first measurement were lower in participants with a Western diet pattern compared with participants in the first tertile of Western diet pattern score (z score for tertile 3 [T3], −0.17 [95% CI, −0.20 to −0.14] vs T1, 0.17 [95% CI, 0.14-0.20]) (Table 2). The finding of lower cognitive scores in participants with a Western diet pattern was consistent after adjustments for demographic factors and caloric intake (model 1), but was not statistically significant after full adjustments for lifestyle and clinical factors (model 2) (Table 2). Twenty-year change in cognitive scores was less in participants with a Western diet pattern compared with participants in the first tertile of Western diet pattern score; however, this association did not remain after full adjustments (difference of change in z score for Western diet, T3 vs T1: −0.01 [95% CI, −0.05 to 0.04]) (Table 3). When examined independently, only 20-year change in the DSS z score was less in participants with a Western diet pattern compared with participants in the first tertile of Western diet pattern score (meaning less decline), but this association was not significant after adjustments (eTable 2 in the Supplement). Twenty-year change in the DWR or WF was not different in participants with a Western pattern compared with participants in the first tertile of Western diet pattern score (eTable 3 and eTable 4 in the Supplement).

Our secondary analysis using nonimputed data demonstrated the same findings for the Western diet pattern compared with the imputed data set (eTable 5 in the Supplement).

Participants with a Western diet pattern were no more likely to develop dementia 20 years later compared with participants in the first tertile of Western diet pattern score (adjusted hazard ratio for T3 vs T1, 1.06; 95% CI, 0.92-1.22) (Table 4).

Prudent Diet Pattern
Adherence to a prudent diet pattern was defined as participants reaching the third tertile of the prudent diet pattern score. Participants with a prudent diet pattern had no difference in study attrition (eTable 6 in the Supplement) and were more likely to be women. Participants with a prudent diet pattern were less likely to be from Jackson, Mississippi, and more likely to be from the other 3 study locations. Participants with a prudent diet pattern were more likely to have a college education or greater and were more likely to be never smokers and engage in physical activity. Participants with a prudent diet pattern were also more likely to consume more calories and have diabetes but not hypertension.

Cognitive scores at first measurement were higher in participants with a prudent diet pattern compared with participants in the first tertile of prudent diet pattern score (Table 2), but this association did not remain after full adjustments. Twenty-year change in cognitive scores was greater in participants with a prudent diet pattern compared with participants in the first tertile of prudent diet pattern score; however, this association did not remain after full adjustments (difference of change in z score for prudent diet T3 vs T1: 0.02 [95% CI, −0.02 to 0.06]) (Table 3). When examined independently, only 20-year change in the DSS was greater in participants with a prudent diet pattern compared with participants in the first tertile of prudent diet pattern score, but this association was not significant after full adjustments (eTable 2 in the Supplement). Twenty-year change in the DWR or WF was not different in participants with a prudent diet pattern compared with participants in the first tertile of prudent diet pattern score (eTable 3 and 4 in the Supplement).

Our secondary analysis using nonimputed data demonstrated the same findings for the prudent pattern compared with the imputed data set (eTable 5 in the Supplement).

Participants with a prudent diet pattern were not more likely to develop dementia 20 years later compared with participants in the first tertile of prudent diet pattern score (adjusted hazard ratio, 0.99; 95% CI, 0.88-1.12 for T3 vs T1) (Table 4).

Discussion
We did not find an association between dietary patterns and cognitive decline measured over 20 years. A dietary pattern high in meat and fried food intake was associated with lower cognitive test scores at baseline, but differences in demographic characteristics and health behaviors explained this finding. Similarly, a dietary pattern high in fruit and vegetable intake was associated with higher cognitive test scores at baseline, but differences in demographic characteristics and health behaviors explained this finding.

Our results stand in contrast to short-term observational studies. Several observational studies,22-26 ranging in duration from 5 to 7 years, showed modest associations between dietary patterns and cognitive health. One study24 followed 1410 participants over 5 years and found that adherence to a Mediterranean-type dietary pattern was associated with less decline in the Mini-Mental State examination. Another study23 followed more than 2200 participants for 6 years and found that the Western diet was associated with greater cognitive decline and the prudent diet was associated with less cognitive decline as measured by the Mini-Mental State examination.

A recent long-term observational study27 aligns with our results. The Whitehall II study27 measured diet in 1991 to 1993 and dementia surveillance occurred through 2017. The authors found that diet quality at midlife was not associated with incident dementia in long-term follow up. Our results confirm the findings of this study in a US population.

We suggest 3 explanations for the reported differences between short-term studies and studies with long-term follow-up.27 First, it may be that over time, other chronic diseases such as diabetes have a greater impact on cognition compared with diet. Our study only partially accounts for this confounding by adjusting for comorbidities at baseline. Second, participants with an unhealthy diet often engage in multiple unhealthy behaviors (eg, smoking and lack of physical activity), and it may be difficult to elucidate the independent outcomes associated with diet when multiple lifestyle behaviors contribute to cognitive function. Third, our study does not account for change in dietary intake or the food supply over 20 years.

Two clinical trials28 build on the promising observational science to examine whether dietary changes can protect against cognitive decline and dementia. One intervention28 tested a Mediterranean diet with olive oil or nuts as supplementation in 334 participants at high cardiovascular risk and found improved composite cognitive function compared with the control diet. A second clinical trial, the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) trial, is currently under way. While our study did not find an association of diet with cognitive decline, this should not undermine the potential of dietary change to affect brain health.

Strengths and Limitations
Our study has strengths, one of which is the long duration of follow-up. Another is our ability to account for study dropout due to death or loss to follow-up using criterion-standard imputation methods.20

There are also several limitations of our study. First, our definition of adherence to a Western or prudent diet pattern is based on our tertile cutoffs and may not reflect individual participant identification with the specified dietary pattern. Second, diet was measured 3 years before the first cognitive measurement. The nonconcurrent measurements are unlikely to affect the results because dietary patterns remain relatively stable for up to 7 years.29 However, dietary intake likely changes over 20 years owing to change in the food supply and food habits; the ARIC study did not capture diet over the 20 years to test this possibility. In addition, as participants with an unhealthy diet had lower cognition at the time of first assessment, it is possible that diet exerted influence prior to our time of measurement; because diet was not associated with either cognitive trajectories or incident dementia, this is less likely to be the case. We should also note that although study dropout was accounted for, a large proportion of participants were lost to follow-up over the 20 years. Finally, as in all observational studies, we cannot attribute causality to our observations. The mechanisms between diet and brain health are complex, and the only way to definitively measure the relationship between dietary practices and cognition is an experimental design in which diet is manipulated; however, long-term follow-up in such a trial would be expensive.

Conclusions
The results of this cohort study do not support the hypothesis that midlife diet significantly contributes to cognitive decline independent of demographic and behavioral factors. Our finding that participants with an unhealthy diet have lower cognitive function could be attributed to cigarette smoking, eating excess calories, or engaging in less physical activity. Our results suggest that it may be important to address all modifiable risk factors in dietary interventions, supporting the emerging body of multimodal lifestyle and behavioral research.30 A multimodal approach may provide greater risk reduction for cognitive aging.

Contrary to expectations, the study didn't find a significant relation between participants' friending behavior and the fictitious Facebook profile dimensions of gender, profile photo, education status & relationship status

Would You like to Be My Facebook Friend? Cemil Akkaş, Hülya Bakırtaş. Sexuality & Culture, December 5 2019. https://link.springer.com/article/10.1007/s12119-019-09684-6

Abstract: This research seeks to understand how people respond to demographic factors and different types of Facebook profile. Using a 2 × 3 × 2 × 2 between-subjects experimental design, the research explores the relationship between gender of a fictitious Facebook account (female, male), attraction levels of the profile photo (attractive, normal and default), education status (university, default) and relationship status (in a relationship, default). Additionally, this process has been applied in both field research (Study 1) and laboratory (Study 2). A beauty survey was applied to determine the profile photos to be used in these fictitious accounts. Friendship requests were sent to participants in the two different environments (field and lab) by fictitious Facebook accounts, and results were monitored and analyzed. Whilst some research has been carried out on online friendship, no study exists that involves the role of the environment. The results of this study indicate that the environment plays an important role in friendship acceptance behavior. Another important finding was that the gender of participants is the most significant determinant in friendship acceptance behavior in both field and laboratory. However, the relationship between class and income levels of participants and behavior of accepting friendship request was not significant. Contrary to expectations, this study did not find a significant relation between friending behavior of participants and fictitious Facebook dimensions of gender, profile photos, education status and relationship status.

Keywords: Social networks Friendship Facebook Friending


In Brain and Behavior: A multimethod investigation of motor inhibition in professional drummers

Boom Chack Boom—A multimethod investigation of motor inhibition in professional drummers. Lara Schlaffke  Sarah Friedrich  Martin Tegenthoff  Onur Güntürkün  Erhan Genç  Sebastian Ocklenburg. Brain and Behavior, December 4 2019. https://doi.org/10.1002/brb3.1490

Abstract
Introduction: Our hands are the primary means for motor interaction with the environment, and their neural organization is fundamentally asymmetric: While most individuals can perform easy motor tasks with two hands equally well, only very few individuals can perform complex fine motor tasks with both hands at a similar level of performance. The reason why this phenomenon is so rare is not well understood. Professional drummers represent a unique population to study it, as they have remarkable abilities to perform complex motor tasks with their two limbs independently.

Methods: Here, we used a multimethod neuroimaging approach to investigate the structural, functional, and biochemical correlates of fine motor behavior in professional drummers (n = 20) and nonmusical controls (n = 24).

Results: Our results show that drummers have higher microstructural diffusion properties in the corpus callosum than controls. This parameter also predicts drumming performance and GABA levels in the motor cortex. Moreover, drummers show less activation in the motor cortex when performing a finger‐tapping task than controls.

Conclusion: In conclusion, professional drumming is associated with a more efficient neuronal design of cortical motor areas as well as a stronger link between commissural structure and biochemical parameters associated with motor inhibition.

1 INTRODUCTION
Our hands are the primary means of interaction with the environment. A key aspect of hand use in humans is its asymmetrical organization. While most individuals can perform easy motor tasks with two hands at a similar level, only very few individuals can perform complex fine motor tasks with both hands equally well. Most individuals strongly prefer one hand (often called the dominant hand) over the other hand. Typically, each individual has a distinct handedness and prefers either the left or the right hand for complex fine motor tasks, for example writing (Güntürkün & Ocklenburg, 2017; Ocklenburg, Hugdahl, & Westerhausen, 2013). Handedness is thus one of the most pronounced and most widely investigated aspects of hemispheric asymmetries. A ratio of 90% right‐handed to 10% left‐handed people has been constant for the past 5,000 years across all continents (Coren & Porac, 1977) and is noticeable even in utero (Hepper, Shahidullah, & White, 1990).

Each hand is controlled by the contralateral motor cortex. Neuronal correlates of handedness are mostly investigated by examining brain activity during more or less complex hand movement tasks. Such activities with the dominant hand are largely regulated by the contralateral hemisphere, whereas motor tasks with the nondominant hand are controlled more bilaterally by both hemispheres (van den Berg, Swinnen, & Wenderoth, 2011; Grabowska et al., 2012). The corpus callosum, as the major connecting pathway between hemispheres, has been shown to have a substantial influence on the characteristics of handedness (Hayashi et al., 2008; Westerhausen et al., 2004). Right‐handed people show a strong ipsilateral motor cortex de‐activation when performing tasks with their dominant hand (Genç, Ocklenburg, Singer, & Güntürkün, 2015). In contrast, in left‐handed people, ipsilateral activations/de‐activations are equally pronounced, independent of the hand used. These findings demonstrate the correlation between ipsilateral activations and transcallosal inhibition (Tzourio‐Mazoyer et al., 2015). Furthermore, patients with callosal agenesis, a hereditary condition in which the corpus callosum is absent in the brain, show a stronger tendency toward both‐handedness, for example not having a dominant hand (Ocklenburg, Ball, Wolf, Genç, & Güntürkün, 2015). Therefore, the inhibitory functions of the corpus callosum represent an important aspect in understanding the neuronal correlates of handedness (Genç et al., 2015; Ocklenburg, Friedrich, Güntürkün, & Genç, 2016).

Since handedness can be partly altered through training (Perez et al., 2007), the neural foundations underlying it can change through learning. Neuroplasticity describes the adaptation and cortical reorganization that occur, for example, after training or learning a new skill. Functional plasticity of motor skills has been a focus of neuroscientific research for decades. Already in the 1990s, it was shown that professional violin playing influences the somatosensory representations of the left (nondominant) hand (Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995). Playing a musical instrument at a professional level can also influence visuo-motor (Buccino et al., 2004; Stewart et al., 2003; Vogt et al., 2007) as well as audio-motor processes (Bangert et al., 2006; Baumann, Koeneke, Meyer, Lutz, & Jäncke, 2005; Baumann et al., 2007; Parsons, Sergent, Hodges, & Fox, 2005).

Up to now, research on musical training-driven plasticity has primarily centered on changes in cortical gray matter. However, most musical instruments are played with both hands, increasing the demand for fast, precise, and uncoupled movements of the two hands. When playing the piano, both hands are recruited in an equally demanding manner, sometimes with different rhythms, whereas playing a stringed instrument requires distinct motor activities for the same rhythm. When drumming, in contrast, both hands and even the legs have to perform similar motor tasks, but with distinct rhythms. Therefore, drummers are well suited as subjects for investigating the structural correlates of transcallosal inhibition.

While it is very difficult for an untrained person to play a 3/4 beat with one hand and a 4/4 beat with the other at the same time, this is an easy task for trained drummers. Research in split-brain patients indicates that this remarkable ability to uncouple the motor trajectories of the two hands is likely related to the inhibitory functions of the corpus callosum. Franz, Eliassen, Ivry, and Gazzaniga (1996) investigated bimanual movements in split-brain patients and healthy controls and found that the controls showed deviations in their trajectories when the two hands performed movements with different spatial demands. In contrast, split-brain patients did not produce spatial deviations. This suggests that movement interference in controls is mediated by the corpus callosum, and that professional drummers likely show an experience-dependent change in callosal structure and/or function that enables them to perform two different motor trajectories with the two hands at the same time. Drumming thus requires neuroplasticity of white-matter pathways, which is what we set out to study.

The structural, functional, and biochemical correlates of this remarkable ability of professional drummers are still completely unclear, but unraveling them would yield important insights into the general neuronal foundations of motor decoupling. The present study therefore investigated professional drummers for structural, functional, and biochemical differences from untrained controls linked to transcallosal inhibition. To this end, we used a state-of-the-art multimethod neuroimaging approach. We assessed the microstructure of the corpus callosum using DTI to reveal possible differences in callosal anatomy between groups (Friedrich et al., 2017; Genç, Bergmann, Singer, & Kohler, 2011a; Genç, Bergmann, Tong, et al., 2011b; Westerhausen et al., 2004). Moreover, we used GABA spectroscopy to test for long-term changes in inhibitory motor control (Stagg, 2014), as GABA levels in motor regions are highly associated with BOLD activations and motor learning. Specifically, lower GABA levels are associated with a greater degree of motor learning (Ziemann, Muellbacher, Hallett, & Cohen, 2001), while individuals with higher baseline levels of M1 GABA have slower reaction times and smaller task-related signal changes (Stagg, Bachtiar, & Johansen-Berg, 2011). Last, we scanned participants during an fMRI finger-tapping task, a well-established quantitative framework producing different levels of behavioral complexity (Genç et al., 2015; Haaland, Elsinger, Mayer, Durgerian, & Rao, 2004). We expected drummers to differ from nonmusical controls, reflecting a more efficient neural organization across the structural, functional, and biochemical modalities.

Declining Sexual Activity and Desire in Women: Findings from Representative German Studies in 2005 & 2016

Declining Sexual Activity and Desire in Women: Findings from Representative German Surveys 2005 and 2016. Juliane Burghardt et al. Archives of Sexual Behavior, December 4 2019. DOI 10.1007/s10508-019-01525-9

Abstract: We estimate (1) sexual activity and sexual desire in women living with and without a partner across the age range in Germany and (2) changes over 11 years. A representative survey of 345 women (response rate: 65%) aged 18 to 99 years from 2016 was compared to a survey of 1314 women aged 18 to 91 from 2005 (response rate: 53%). Sexual activity was assessed as having been physically intimate with someone in the past year; frequency of sexual desire was rated for the past 4 weeks. In 2016, the great majority of women living with a partner were sexually active and reported sexual desire until the age of 60, after which both decreased. Compared to 2005, fewer women cohabited with a partner. Across the age range, women living without a partner reported considerably less sexual activity and desire. The overall proportion of women reporting partnered sexual activity decreased from 67% in 2005 to 62% in 2016, and absent sexual desire increased from 24% to 26%. Declines in sexual activity and desire affected mostly young and middle-aged women. The decline of sexual activity and desire seems to be due to a reduced proportion of women living with a partner. There was also a generation effect, with younger and middle-aged women without a partner becoming less sexually active and experiencing less desire compared to the previous survey. While the surveys were methodologically comparable, interpretations are limited by the absence of longitudinal data.

Keywords: Sexual desire, Sexual activity, Partnership, Representative sample

Discussion
In 2016, 60% of women from a population-based German sample reported sexual activity during the last year. Partnership was an important factor: 87% of women living with a partner reported having been sexually active in the past year, whereas this applied to only 37% of women without a partner. A considerable proportion of women living with a partner reported sexual activity in old age (27% of those over 70 years), whereas among women without a partner sexual activity decreased earlier. For instance, among those over 41 years, the majority (59%) reported having been sexually inactive during the past year, and hardly any of the elderly were active.
Between 2005 and 2016, the overall proportion of sexually active women decreased by 5 percentage points. However, this reduction did not occur among women with a partner, whose sexual activity remained high and stable between 2005 (85%) and 2016 (87%). The decline can be attributed both to a 7-percentage-point drop in the proportion of women living with a partner, which manifested itself in all age groups except the oldest (over 71 years), and to a decrease in sexual activity among women living without a partner, from 42% to 37%. This decline was most pronounced among women aged 18 to 60; no decline occurred among older women regardless of partnership.
Our findings of reduced sexual activity are consistent with American studies (Twenge et al., 2016, 2017). Thus, despite differences between Germany and the U.S. regarding cultural norms and use of contraception, the decrease seems to be a more general development. In contrast to Twenge et al. (2017), who found this decline to occur mostly among individuals living with a partner, we found it among those without one. The difference between these findings could stem from how sexual activity was measured: while Twenge et al. (2017) asked participants how often they did "have sex during the last 12 months," we asked whether participants "were (physically) intimate with someone" within this period.
Our findings on frequency of sexual desire mirrored those on sexual activity, with desire also decreasing between 2005 and 2016. Overall, the frequency of absent sexual desire increased by almost 3 percentage points (23.5% to 26.4%). This decline was strongest among women below 50 years and those living without a partner. The decrease in sexual interest contrasts with Lindau and Gavrilova (2010), who reported constant sexual interest in women between 1995 and 2002; the discrepancy might be an artefact of the two compared surveys using different items to measure sexual interest, or a more recent development.
The similarity between the findings on sexual activity and sexual desire was also marked by a strong correlation between the two variables. The correlation was higher among non-partnered women than among partnered women. Though this correlation does not allow causal interpretation, it attests to the relevance of the partner for prolonged sexual activity. Further, the correlation decreased between 2005 and 2016. This may be the first evidence that a change in sexual desire does not fully explain the decrease in sexual activity over this decade.
It remains unclear whether the decline in sexual activity in women without a partner is compensated by increased individual, non-partnered sexual activity (e.g., online sex use with masturbation) or perhaps by increasing acceptance of absent sexual interest. Alternatively, the decrease in women living with a partner may reflect social changes that decreased the value of partnered activity and increased the appeal of solitary recreational activities, for instance through media use (Stiftung Zukunftsfragen, 2016). Another important question is whether the decrease in both sexual activity and interest creates sexual distress or dissatisfaction (Hayes et al., 2008) or is instead accepted as a different lifestyle.
Our data mirror the general trend of decreasing numbers of married and cohabiting couples (Fry, 2016). The analysis included all cohabiting couples by combining the variables "married, living together" and "living with a partner". However, this did not include unmarried committed couples who do not live together. Comparing committed couples that do versus do not live together might distinguish between effects of commitment and partner availability. Further, it remains unclear whether the availability of a partner preserves sexual activity and interest, whether sexual desire sustains or establishes a partnership, or whether other processes intervene. However, the findings are in line with previous models of women's sexual desire, which state that women's desire is often responsive and relies on positive non-sexual outcomes provided by the partner (e.g., trust, emotional intimacy, communication). These outcomes motivate women to seek sexually arousing cues that trigger responsive desire in addition to, or in the absence of, spontaneous desire. This behaviour has the potential to stabilize sexual activity in a partnership even in the absence of spontaneous desire (Basson, 2000). The lower correlation of desire and activity among partnered women fits with Basson's observation that women do not perceive responsive desire as "true" desire. Future research should elaborate on these perceptions.
The cross-sectional design of our two surveys limits the interpretation of the data. Despite including the entire age range, we cannot analyse individuals' life trajectories. In contrast to Lindau and Gavrilova (2010), we used identical measures and sampling procedures to create representative samples in both surveys. Unlike studies that limit sexual activity to intercourse, our items purposefully included a wide range of potentially relevant sexual behaviours ("have you been intimate with someone…") to cover a broad range of partnered sexual activities (Mercer et al., 2013). Twenge et al. (2016) argued that the decreased sexual frequency may be explained by differences in definitions of sex (as opposed to including only vaginal-penile penetration). However, our results match their findings despite our broader measure of sexual activity. The low non-responder rate (<1%) indicates that participants felt comfortable answering the questions, which we attribute to the item introduction; this supports the validity of the findings. Future studies should evaluate the influence of socioeconomic, social, ethical, and religious factors, as well as working conditions, on sexuality and their interplay with partnerships.

Large US-representative adolescent sample: A Flynn Effect was found for IQs ≥ 130, a negative one for IQs ≤ 70; this challenges the practice of generalizing IQ trends from non-representative samples

The Flynn effect for fluid IQ may not generalize to all ages or ability levels: A population-based study of 10,000 US adolescents. Jonathan M. Platt et al. Intelligence, Volume 77, November–December 2019, 101385. https://doi.org/10.1016/j.intell.2019.101385

Highlights
• When outdated norms are used, the Flynn Effect inflates IQs and potentially biases intellectual disability diagnosis
• In a large US-representative adolescent sample, a Flynn Effect was found for IQs ≥ 130, and a negative effect for IQs ≤ 70
• IQ changes also differed substantially by age group
• A negative Flynn Effect for those with low intellectual ability suggests widening disparities in cognitive ability
• Findings challenge the practice of generalizing IQ trends based on data from non-representative samples

Abstract: Generational changes in IQ (the Flynn Effect) have been extensively researched and debated. Within the US, gains of 3 points per decade have been accepted as consistent across age and ability level, suggesting that tests with outdated norms yield spuriously high IQs. However, findings are generally based on small samples, have not been validated across ability levels, and conflict with reverse effects recently identified in Scandinavia and other countries. Using a well-validated measure of fluid intelligence, we investigated the Flynn Effect by comparing scores normed in 1989 and 2003, among a representative sample of American adolescents ages 13–18 (n = 10,073). Additionally, we examined Flynn Effect variation by age, sex, ability level, parental age, and SES. Adjusted mean IQ differences per decade were calculated using generalized linear models. Overall the Flynn Effect was not significant; however, effects varied substantially by age and ability level. IQs increased 2.3 points at age 13 (95% CI = 2.0, 2.7), but decreased 1.6 points at age 18 (95% CI = −2.1, −1.2). IQs decreased 4.9 points for those with IQ ≤ 70 (95% CI = −4.9, −4.8), but increased 3.5 points among those with IQ ≥ 130 (95% CI = 3.4, 3.6). The Flynn Effect was not meaningfully related to other background variables. Using the largest sample of US adolescent IQs to date, we demonstrate significant heterogeneity in fluid IQ changes over time. Reverse Flynn Effects at age 18 are consistent with previous data, and those with lower ability levels are exhibiting worsening IQ over time. Findings by age and ability level challenge generalizing IQ trends throughout the general population.
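The core quantity in the abstract, the IQ change per decade estimated by scoring the same test performances against the 1989 and the 2003 norms, can be made concrete with a small helper. This is an illustrative sketch with made-up numbers, not the authors' code or their GLM adjustment:

```python
def flynn_points_per_decade(mean_iq_old_norms, mean_iq_new_norms,
                            year_old=1989, year_new=2003):
    """IQ-point change per decade when the same performances are scored
    against an older and a newer set of norms. If the population gained
    ability, scores look higher against the outdated (easier) norms,
    so a positive value indicates a positive Flynn Effect."""
    decades = (year_new - year_old) / 10.0
    return (mean_iq_old_norms - mean_iq_new_norms) / decades

# Hypothetical example: a 4.2-point gap over the 1.4 decades between
# normings corresponds to the traditionally cited 3 points per decade.
rate = flynn_points_per_decade(104.2, 100.0)   # ≈ 3.0
```

A negative return value, as the study reports for 18-year-olds and for IQs ≤ 70, corresponds to a reverse Flynn Effect under this convention.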

Keywords: Intelligence, Flynn effect, Adolescence, Intellectual disabilities

Cool charts and tables at the publisher's link above. Excerpts:

5. Discussion

The present study utilized data from a large US-representative sample of adolescents to describe changes in IQ between 1989 and 2003. There were three central findings: 1) overall, there was no evidence of a Flynn Effect during the study period; 2) however, overall IQ trends masked substantial heterogeneity in the presence and direction of the Flynn Effect by both ability level and age; and 3) there was no variation in the Flynn Effect as a function of other sociodemographic characteristics.
The overall lack of a Flynn Effect in our sample is concordant with trends in the K-BIT, KBIT-2, the Kaufman Assessment Battery for Children (K-ABC and KABC-II), and other individually administered screening tests reported in a previous meta-analysis (Trahan et al., 2014). It also conforms with the conclusion that gains have decreased in more recent decades (Pietschnig & Voracek, 2015). However, studies using other tests (e.g., Wechsler scales) did find substantial Flynn Effects (Pietschnig & Voracek, 2015; Trahan et al., 2014). Explanations for the Flynn Effect are diverse. Although genetic explanations focusing on factors such as hybrid vigor (Mingroni, 2007; Rodgers & Wänström, 2007) have been proposed, environmental explanations predominate (Dickens & Flynn, 2001), emphasizing societal changes in perinatal nutrition (Lynn, 2009) and nutrition in general (Colom, Lluis-Font, & Andrés-Pueyo, 2005), education (Teasdale & Owen, 2005), reduced numbers of siblings (Sundet, Borren, & Tambs, 2008), the prevalence of parasites and the burden of disease (Daniele & Ostuni, 2013; Eppig, Fincher, & Thornhill, 2010), and increased environmental complexity (Schooler, 1998).
By contrast, other studies have reported reverse Flynn Effects. In discussing these negative trends in Scandinavian countries, Lynn and colleagues hypothesized that they may be due to greater fertility among low-SES groups, immigrants, and older adults (Dutton et al., 2016; Dutton & Lynn, 2013). However, a recent analysis in Norway testing these claims largely rejected their hypotheses, reporting that Flynn Effects were not consistent within families over time (Bratsberg & Rogeberg, 2018). Further, a recent meta-analysis found no substantial role of fertility in test score changes across an array of studies (Pietschnig & Voracek, 2015), and recent empirical evidence suggests that immigration effects do not play a meaningful role in explaining Flynn Effect reversals (Pietschnig, Voracek, & Gittler, 2018).
We add to the evidence reported in previous studies by reporting heterogeneity in the Flynn Effect by ability level and age. We find support for a reverse Flynn Effect for those of low ability and older age, and a positive Flynn Effect for those of high ability and younger age. These results have several implications. First, they signal a widening disparity in cognitive ability in the US, with those at the lower end of the ability distribution not only exhibiting smaller gains than those at the higher end, but reversing direction entirely. Second, these results have implications for considering demographic differences when adjusting IQ test scores in the population.
Improvements in education, nutrition, prenatal and postnatal care, and overall environmental complexity over the past century are thought to contribute to the Flynn Effect in the overall population (Dickens & Flynn, 2001; Lynn, 2009; Schooler, 1998; Teasdale & Owen, 2005). However, the disparities by ability level that we identified suggest that the benefits of these societal improvements have been more dramatic for those at the highest ability levels, potentially because they are better able to take advantage of these societal changes. This interpretation is in line with Fundamental Cause Theory (Phelan, Link, & Tehranifar, 2010), which argues that when new knowledge or technology is introduced into a society, those with the highest status are most likely to take advantage of it first and benefit. Disproportionate utilization by those with higher abilities may widen intellectual disparities, leaving those at the lowest ability levels worse off than before. We note, however, that the Flynn Effect did not differ across other measures of status, such as poverty and parental education. The correlation analyses we conducted revealed a positive association of moderate magnitude between IQ and the size of the Flynn Effect for every age group between 13 and 18, regardless of whether that group showed an overall positive or negative Flynn Effect. One possible interpretation of this pattern is that adolescents with high fluid intelligence, not necessarily those with the highest access to resources, have benefitted most from societal progress over time.
Previous research on the stability of the Flynn Effect across ability levels has produced inconsistent and inconclusive results (McGrew, 2015; Weiss, 2010). Sometimes the effect has been larger at low IQs, and sometimes a reverse Flynn Effect has been found in high-IQ samples (Spitz, 1989; Teasdale & Owen, 1989; Zhou et al., 2010). A meta-analysis examining ability level as a moderator variable did not observe a Flynn Effect for those with low IQ (Trahan et al., 2014). However, previous studies differ in quality (Trahan et al., 2014) and often rely on small sample sizes at the lower end of the IQ distribution (Zhou et al., 2010). Specifically, Trahan and colleagues noted, "the distribution of Flynn effects that we observed at lower ability levels might be the result of artifacts found in studies of groups within this range of ability" (p. 1349).
We also identified variation in the Flynn Effect by age. The positive Flynn Effect of 2.3 points per decade at age 13 approximately equals the value obtained in a summary of studies of Raven's matrices covering nearly 250,000 children in 45 countries (Brouwers, Van de Vijver, & Van Hemert, 2009) and in a meta-analysis of about 14,000 children and adults in the US and UK (Trahan et al., 2014). However, this roughly 2-point value is smaller than the traditional 3 points for global intelligence and 4 points for fluid intelligence (Pietschnig & Voracek, 2015). Likewise, the reverse Flynn Effect that occurred at ages 15–18 was similar to effects reported among young adult males in Scandinavian countries during the same time period (Bratsberg & Rogeberg, 2018; Dutton & Lynn, 2013; Sundet et al., 2004; Teasdale & Owen, 2005, 2008), and in other countries as well, such as France (adults tested on the WAIS-III and WAIS-IV) and Estonia (young adults tested on Raven's matrices) (Dutton et al., 2016). The age effects are discordant with previous meta-analyses. Pietschnig and Voracek (2015) evaluated age effects and found stronger gains for adults than for children. In their meta-analysis, Trahan et al. (2014) did not find a significant relationship between the Flynn Effect and age in their examination of mean ages across heterogeneous and often small samples. Our methodology differed from the techniques used in both meta-analyses, as we studied large samples that were homogeneous by age.
The notable differences we identify among narrowly defined age groups may be related to cognitive and neurodevelopmental changes that occur during adolescence. Fluid reasoning abilities and the cognitive abilities that support reasoning (e.g., rule representation) develop rapidly during early adolescence (Crone et al., 2009; Crone, Donohue, Honomichl, Wendelken, & Bunge, 2006; Ferrer, O'Hare, & Bunge, 2009; Žebec, Demetriou, & Kotrla-Topić, 2015). Brain regions that play a central role in reasoning and problem solving, including the dorsolateral and ventrolateral prefrontal cortex and the superior and inferior parietal cortex, also exhibit dramatic changes in structure and function across adolescence (Bunge, Wendelken, Badre, & Wagner, 2004; Ferrer et al., 2009; Gogtay et al., 2004; Wendelken, Ferrer, Whitaker, & Bunge, 2015; Wright, Matlen, Baym, Ferrer, & Bunge, 2008). The notably different Flynn Effects by age in our study caution against generalizing findings from a specific subgroup (such as the conscripted young adult males who comprise the Scandinavian samples) to the nation as a whole (Dutton & Lynn, 2013).
The present study identified no meaningful relationship between the Flynn Effect and poverty, parental education, or other sociodemographic variables and background factors, including parental nationality, birth order, family size, and age of the birth mother and father. This finding is notable given that these demographic variables are associated with IQ level (von Stumm & Plomin, 2015), including in our sample (Platt, Keyes, et al., 2018).
The results of this study should be considered in light of several limitations. First, the study data were obtained 15 years ago. However, this period was an ideal time to evaluate the presence of a reverse Flynn Effect in the US, given the reverse effects found in Denmark, Norway, Finland, and several other countries (Dutton et al., 2016; Teasdale & Owen, 2008). In more recent years, no reverse Flynn Effect has been observed for Wechsler's scales, as gains on the WAIS-IV (Wechsler, 2008) and WISC-V (Wechsler, 2014) Full Scale IQ have been close to the hypothesized value of 3 points per decade (Grégoire & Weiss, 2019; Grégoire, Daniel, Llorente, & Weiss, 2016; Weiss, Gregoire, & Zhu, 2016; Zhou et al., 2010), especially when test content is held constant (Grégoire & Weiss, 2019; Weiss et al., 2016).
Second, the K-BIT nonverbal test is a screening test that measures a single cognitive ability. It is, however, an analog of Raven's popular matrices test, which is commonly used in Flynn Effect studies (Brouwers et al., 2009; Flynn, 1998; Pietschnig & Voracek, 2015). The Flynn Effect is known to differ across cognitive abilities (e.g., fluid intelligence, short-term memory) (Pietschnig & Voracek, 2015; Teasdale & Owen, 2008), which may contribute to heterogeneity in findings across studies with differing IQ measures. However, the K-BIT and KBIT-2 nonverbal IQ is substantially correlated with comprehensive IQ tests such as Wechsler's Full Scale IQ (mid-.50s to mid-.70s) (Canivez et al., 2005; Kaufman & Kaufman, 1990, 2004), though this is lower than the correlation between different comprehensive test batteries (Kaufman, 2009; Wechsler, 2014). The present findings are descriptive, and any practical application regarding the adjustment of IQs must be made with the awareness that clinical diagnosis, such as the identification of individuals with intellectual disabilities, must be based on comprehensive IQ tests such as Wechsler's scales or the Woodcock-Johnson, which assess multiple cognitive abilities.
Third, the study included only adolescents, which represents a narrow period that may not capture meaningful developmental changes. Indeed, fluid reasoning changes between ages 13 and 18 are minimal (Wechsler, 2008, 2014), including in the present 2003 K-BIT norms sample (Keyes et al., 2016) and the original 1989 norms sample (Kaufman & Kaufman, 1990, Table 4.7). This age pattern may partially explain why we found no overall Flynn Effect in this sample.
Fourth, different procedures were used to develop the 1989 and 2003 norms. The 1989 norms were estimated based on data aggregated across all age groups, in order to stabilize norms at all ages (Angoff & Robertson, 1987). Although slightly different statistical techniques were used to develop the 2003 norms, the general approach to norms development was similar between samples, and one test author (ASK) was involved in the development of both sets of norms. Both samples were representative of the US distributions of sociodemographic, economic, and other key background variables at the time (Kaufman & Kaufman, 1990; Kessler, Avenevoli, Costello, et al., 2009). Further, both sets of norms are based on six-month age bands. These samples are at least as convergent as those of similar studies comparing samples used to develop original versus revised norms; previous norming samples have differed substantially on key sociodemographic distributions, such as those of the WISC and WISC-R (Wechsler, 1949, 1974), which were key samples in the development of the Flynn Effect theory (Flynn, 1984). In the present study, we adjusted the Flynn Effect for an array of background variables to further minimize any differences between the 1989 and 2003 norms samples that might confound the Flynn Effect estimates.
Fifth, the Flynn Effect has had a non-linear trajectory over the past century (Pietschnig & Voracek, 2015). Because our study included IQ measurements at only two time points, we were not able to test the linearity of change over time.
This study is strengthened by the use of a large and representative adolescent sample, with IQs measured using reasoning items that are widely accepted as prototypical measures of fluid intelligence (Dutton et al., 2016). The use of two sets of norms based on a single administration of a test avoids practice effects and the bias that may arise from using different versions of a test.
In conclusion, this study reports important heterogeneity in the Flynn Effect among a nationally representative sample of US adolescents. We confirmed previous reports of reverse Flynn Effects among large samples of older adolescent males and extended the same pattern to females. We also found important differential Flynn Effects by ability level. These results add to a growing body of evidence suggesting that Flynn Effect findings from narrow age bands or ability levels may produce divergent findings that do not generalize to the overall population. However, given the potential life-or-death implications of this research in determining intellectual status in capital punishment cases, the strength of evidence needed for definitive conclusions is extremely high. At this time, we do not have sufficient evidence to recommend differential adjustments to IQ scores. Additional research is needed to replicate the current findings across the full age range and across comprehensive measures of intelligence.

On psychological researchers' strategic behavior: Effect declines with each new study of a question are highly prevalent, outnumbering effect increases by about 2:1; these declines are systematic, strong, and ubiquitous

Effect Declines are Systematic, Strong, and Ubiquitous: A Meta-Meta-Analysis of the Decline Effect. Jakob Pietschnig et al. Front. Psychol., Nov 2019, doi: 10.3389/fpsyg.2019.02874

Abstract: Empirical sciences in general, and psychological science in particular, are plagued by replicability problems and biased published effect sizes. Although dissemination-bias-related phenomena such as publication bias, time-lag bias, or visibility bias are well known and have been intensively studied, another variant of effect-distorting mechanisms, so-called decline effects, has not. Conceptually, decline effects are rooted in low initial (exploratory) study power due to strategic researcher behavior and can be expected to yield overproportional effect declines. Although decline effects have been documented in individual meta-analytic investigations, systematic evidence for decline effects in the psychological literature has to date been unavailable. Therefore, in this meta-meta-analysis we present a systematic investigation of the decline effect in intelligence research. In all, data from 22 meta-analyses comprising 36 meta-analytical and 1,391 primary effect sizes (N = 697,000+) published in the journal Intelligence were included in our analyses. Two different analytic approaches showed consistent evidence for a higher prevalence of cross-temporal effect declines compared to effect increases, yielding a ratio of about 2:1. Moreover, effect declines were considerably stronger when referenced to the initial primary study within a meta-analysis, reaching about twice the magnitude of effect increases. Effect misestimations were more substantial when initial studies had smaller sample sizes and reported larger effects, indicating suboptimal initial study power as the main driver of effect misestimations in initial studies. Post hoc study power comparisons of initial versus subsequent studies were consistent with this interpretation, showing substantially lower initial study power for declining than for increasing effects. Our findings add another facet to the ever-accumulating evidence of non-trivial effect misestimations in the scientific literature. We therefore stress the necessity of more rigorous protocols for designing and conducting primary research and for reporting findings in exploratory and replication studies. Increasing transparency in scientific processes, such as data sharing, (exploratory) study preregistration, and self- (or independent) replication preceding the publication of exploratory findings, may be suitable approaches to strengthen the credibility of empirical research in general and psychological science in particular.
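The mechanism the abstract describes, underpowered exploratory studies plus a significance filter producing inflated initial effects that later "decline," can be sketched in a few lines of stdlib Python. This is an illustrative simulation under assumed parameters (true d, sample sizes, a crude z > 1.96 publication filter), not the authors' analysis:

```python
import random
import statistics

def simulate_winners_curse(true_d=0.2, n_initial=20, n_replication=200,
                           n_sims=2000, seed=42):
    """Toy model of the decline-effect mechanism: a small exploratory
    study is 'published' only if its observed effect reaches nominal
    significance, so published initial effects overestimate the true
    effect, while well-powered replications regress back toward it."""
    rng = random.Random(seed)
    published_initial, replications = [], []
    for _ in range(n_sims):
        se_init = (2.0 / n_initial) ** 0.5       # approx. SE of Cohen's d
        d_init = rng.gauss(true_d, se_init)      # noisy small-study estimate
        if d_init / se_init > 1.96:              # crude significance filter
            published_initial.append(d_init)
            se_rep = (2.0 / n_replication) ** 0.5
            replications.append(rng.gauss(true_d, se_rep))
    return statistics.mean(published_initial), statistics.mean(replications)
```

With these assumed defaults, the average published initial effect lands far above the true d = 0.2, while the replications recover it, reproducing the asymmetry between initial-study effects and later effects that the meta-meta-analysis reports.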

Keywords: decline effect, meta-meta-analysis, dissemination bias, effect misestimation, intelligence

Wednesday, December 4, 2019

Not only assholes drive Mercedes; besides disagreeable men, also conscientious people drive high‐status cars

Not only assholes drive Mercedes. Besides disagreeable men, also conscientious people drive high‐status cars. Jan Erik Lönnqvist, Ville‐Juhani Ilmarinen, Sointu Leikas. International Journal of Psychology, December 3 2019. https://doi.org/10.1002/ijop.12642

Abstract: In a representative sample of Finnish car owners (N = 1892) we connected the Five‐Factor Model personality dimensions to driving a high‐status car. Regardless of whether income was included in the logistic model, disagreeable men and conscientious people in general were particularly likely to drive high‐status cars. The results regarding agreeableness are consistent with prior work that has argued for the role of narcissism in status consumption. Regarding conscientiousness, the results can be interpreted from the perspective of self‐congruity theory, according to which consumers purchase brands that best reflect their actual or ideal personalities. An important implication is that the association between driving a high‐status car and unethical driving behaviour may not, as is commonly argued, be due to the corruptive effects of wealth. Rather, certain personality traits, such as low agreeableness, may be associated with both unethical driving behaviour and with driving a high‐status car.


The introduction of a new machine translation system significantly increased international trade on this platform, raising exports by 10.9%; the authors also found evidence consistent with a substantial reduction in translation costs

Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform. Erik Brynjolfsson, Xiang Hui, Meng Liu. Management Science, Vol. 65, No. 12, Sep 3, 2019. https://doi.org/10.1287/mnsc.2019.3388

Abstract: Artificial intelligence (AI) is surpassing human performance in a growing number of domains. However, there is limited evidence of its economic effects. Using data from a digital platform, we study a key application of AI: machine translation. We find that the introduction of a new machine translation system has significantly increased international trade on this platform, increasing exports by 10.9%. Furthermore, heterogeneous treatment effects are consistent with a substantial reduction in translation costs. Our results provide causal evidence that language barriers significantly hinder trade and that AI has already begun to improve economic efficiency in at least one domain.


1.1. Related Literature and Contribution

1.1.1. AI and Economic Welfare. The current generation of AI represents a revolution of prediction and classification capabilities (e.g., Brynjolfsson and McAfee 2017). Recent breakthroughs in ML, especially supervised learning systems using deep neural networks, have allowed substantial improvements in many technical capabilities. Machines have surpassed humans at tasks as diverse as playing the game Go (Silver et al. 2016) and recognizing cancer from medical images (Esteva et al. 2017). There is active work converting these breakthroughs into practical applications, such as self-driving cars, substitutes for human-powered call centers, and new roles for radiologists and pathologists, but the complementary innovations required are often costly (Brynjolfsson et al. 2019).

Machine translation has also experienced significant improvement because of advances in ML. For instance, the best score at the Workshop on Machine Translation for translating English into German improved from 23.5 in 2011 to 49.9 in 2018, according to the widely used BLEU score, which measures how close the MT translation output is to one or more reference translations by linguistic experts (for details, see Papineni et al. 2002). Much of the recent progress in MT has been a shift from symbolic approaches toward statistical and deep neural network approaches. For our study, an important characteristic of eMT is that replacing human translators with MT or upgrading MT is relatively seamless. For instance, for product listings on eBay, users consume the output of the translation system but otherwise need not change their buying or selling process. Although users care about the quality of translation, it makes no difference whether it was produced by a human or a machine. Thus, adoption of MT can be very fast and its economic effects, especially on digital platforms, immediate. Although, so far, much of the work on the economic effects of AI has been theoretical (Sachs and Kotlikoff 2012, Aghion et al. 2017, Korinek and Stiglitz 2017, Acemoglu and Restrepo 2018, Agrawal et al. 2019), and notably Goldfarb and Trefler (2018) in the case of global trade, the introduction of improved MT on eBay is an early opportunity to assess the economic effects of AI using plausible natural experiments.

1.1.2. Language Barriers in Trade. Empirical studies using gravity models, which are formally derived in Anderson and Van Wincoop (2003), have established a robust negative correlation between bilateral trade and language barriers. Typically, researchers regress bilateral trade on a “common language” dummy and find that this coefficient is strongly positive (Egger and Lassmann 2012). However, these cross-sectional regressions are vulnerable to endogeneity biases even after controlling for the usual set of variables in the gravity equation. For example, two countries with the same official language (e.g., the United Kingdom and Australia) can also be similar in preferences for food, clothing, entertainment, and so forth. Without exogenous variation in one or the other, it is impossible to tease out the language effect on trade.

Our paper exploits a natural experiment on eBay that provides exactly such an exogenous change, namely a large reduction in the language barrier, and assesses its effect on international trade. The online marketplace provides us with a powerful laboratory to study the consequences on bilateral trade after this decrease in language barriers for a given language pair. Our finding that a quality upgrade of machine translation could increase exports by about 10.9% is consistent with Lohmann (2011) and Molnar (2013), who argue that language barriers may be far more trade hindering than previously suggested.
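The cross-sectional setup the authors criticize can be sketched in a few lines. This is a toy illustration with hypothetical numbers, not the paper's data or specification: real gravity equations also include GDP, distance, and other controls, and the coefficient here would be biased for exactly the endogeneity reasons discussed above.

```python
import numpy as np

# Hypothetical toy data: log bilateral trade for six country pairs,
# three of which share a common language (dummy = 1).
log_trade = np.array([5.0, 5.2, 4.9, 4.1, 4.0, 4.3])
common_lang = np.array([1, 1, 1, 0, 0, 0])

# OLS of log trade on an intercept and the common-language dummy.
X = np.column_stack([np.ones(len(log_trade)), common_lang])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)

# beta[1] is the common-language coefficient in log points; with only a
# dummy regressor it equals the difference in group means, reproducing
# the "strongly positive" pattern the gravity literature reports.
```

With a single dummy, the coefficient is just the gap between the two group means, which makes the endogeneity worry concrete: anything else that differs between same-language and different-language pairs loads onto it.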

1.1.3. Peer-to-Peer Platforms and Matching Frictions. Einav et al. (2016) and Goldfarb and Tucker (2017) provide great surveys on how digital technology has reduced matching frictions and improved market efficiency. Reduced matching frictions affect price dispersion as evidenced in Brynjolfsson and Smith (2000), Brown and Goolsbee (2002), Overby and Forman (2014), and Cavallo (2017). These reduced frictions also mitigate geographic inequality in economic activities in the case of ride-sharing platforms (Lam and Liu 2017, Liu et al. 2018), short-term lodging platforms (Farronato and Fradkin 2018), crowdfunding platforms (Catalini and Hui 2017), and e-commerce platforms (Blum and Goldfarb 2006, Lendle et al. 2016, Cowgill and Dorobantu 2018, Hui 2019). We contribute to this literature by documenting the significant matching frictions between consumers and sellers who speak different languages. Specifically, we find that efforts to remove language barriers increase market efficiency substantially.


[...]

2. Background

[...]

Prior to eMT, eBay used Bing Translator for query and item title translation. Therefore, the policy treatment here is an improvement in translation quality. To understand the magnitude of quality improvement, we follow the MT evaluation literature and report qualities based on both the BLEU score and human evaluation. The BLEU score is an automated measure that has been shown to correlate highly with human judgment of quality (Callison-Burch et al. 2006). However, BLEU scores are not easily interpretable and should not be compared across languages (Denkowski and Lavie 2014). Generally, scores over 30 reflect understandable translations, and scores over 50 reflect good translations (Lavie 2010). On the other hand, although human evaluations are highly interpretable, they are very costly and can be less consistent.

A comparison of Bing and eMT translation for item titles from English into Spanish revealed that the BLEU score increased from 41.01 to 45.24, and the human acceptance rate (HAR) increased from 82.4% to 90.2%. To compute HAR, three linguistic experts vote either yes or no for translations based on adequacy only (whether the translation is acceptable for minimum understanding), and the majority vote is then used to determine the translation quality. In comparison, the BLEU score is rated based on both adequacy and fluency because it compares the MT output with human translation. Therefore, in cases in which the grammar and style of translation are not of first-order importance, such as in listing titles, one might prefer using HAR for measuring translation quality.
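The HAR computation described above (three yes/no adequacy votes per translation, majority rule, then the share accepted) can be sketched directly. The function name and data layout below are illustrative, not taken from the paper:

```python
def human_acceptance_rate(votes_per_item):
    """HAR as described in the text: each translation receives yes (1) or
    no (0) adequacy votes from three experts; the majority vote decides
    acceptance, and HAR is the share of accepted translations."""
    accepted = sum(1 for votes in votes_per_item if sum(votes) >= 2)
    return accepted / len(votes_per_item)

# Four hypothetical item titles, each with three expert votes:
votes = [(1, 1, 1), (1, 1, 0), (1, 0, 0), (0, 0, 0)]
har = human_acceptance_rate(votes)  # 2 of 4 accepted -> 0.5
```

Unlike BLEU, this measure ignores fluency entirely, which is why the authors suggest it for listing titles where grammar and style matter less.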

From 2018... Estimates fail to associate gender equality measures with gender segregation in higher education; religiosity is significantly associated with lower gender segregation in higher education

Graduates’ opium? Cultural values, religiosity and gender segregation by field of study. Izaskun Zuazu. Jun 2018. http://www.iaffe.org/media/cms_page_media/788/Izaskun_Zuazu.pdf

Abstract: This paper studies the relationship between cultural values and gender distribution across fields of study in higher education. I compute national, field and subfield-level gender segregation indices for a panel dataset of 26 OECD countries for 1998-2012. This panel dataset expands the focus of previous macro-level research by exploiting data on gender segregation in specific subfields of study. I consider two focal cultural traits: gender equality and religiosity, and control for potential segregation factors, such as labour market and educational institutions, and aggregate-level gender disparities in math performance and beliefs among young people. The estimates fail to associate gender equality measures with gender segregation in higher education. Religiosity is significantly associated with lower gender segregation in higher education. However, gender gaps in math beliefs seem to be stronger predictors of national-level gender segregation. Field and subfield-level analyses reveal that religiosity is associated with less gender-segregated fields of education, science, and health, and specifically with the subfield of social services.

Keywords: horizontal gender segregation, higher education, cultural values, religiosity, math beliefs, association index
JEL: A13, I24, J16
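The paper's own association index is not reproduced in this excerpt; a standard index in this literature (assumed here for illustration, and not necessarily the one the author computes) is the Duncan and Duncan index of dissimilarity across fields of study:

```python
def dissimilarity_index(male_counts, female_counts):
    """Duncan & Duncan index of dissimilarity across fields:
    D = 0.5 * sum_i |m_i/M - f_i/F|, where m_i and f_i are male and
    female graduate counts in field i, and M and F are the totals.
    Ranges from 0 (identical distributions) to 1 (complete segregation)."""
    M, F = sum(male_counts), sum(female_counts)
    return 0.5 * sum(abs(m / M - f / F)
                     for m, f in zip(male_counts, female_counts))

# Hypothetical graduate counts in three fields of study:
balanced = dissimilarity_index([40, 30, 30], [40, 30, 30])    # 0.0
segregated = dissimilarity_index([100, 0, 0], [0, 50, 50])    # 1.0
```

D is interpretable as the share of either sex that would have to switch fields for the two distributions to match, which is what "horizontal segregation" measures at the national, field, or subfield level.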

6 Conclusion

Persisting levels of gender segregation across fields of study in Western countries seem at odds with the increase in female participation in higher education. This observation is particularly puzzling against the backdrop of affirmative action, anti-discrimination policies, and gender-egalitarian ideals in developed countries. The literature highlights individual factors (gender gaps in preferences and foreseeing family obligations) and external factors (economic structure, institutions, discrimination) as causes of gender segregation. This paper studies whether cultural values, in particular gender equality and religion, play a role in horizontal gender segregation in higher education.

I construct a panel dataset with information on gender segregation indices at national level, at 9-field level and at 23-subfield level for 26 OECD countries for 1998-2012. I link this data with two focal cultural traits: gender equality, measured alternatively on the basis of either the Gender Inequality Index (UNDP) or the Gender Equality measure (IDEA), and religiosity, taken from the World Value Survey. I propose fixed-effects models that control for potential segregative factors such as economic structural change, labour market and educational systems features. The estimates fail to associate gender (in)equality measures with a significant role in horizontal gender segregation. By contrast, religiosity is significantly associated with lower levels of horizontal gender segregation.

I expand the models seeking to control for gender gaps in math beliefs developed during youth. Using two waves of data taken from PISA surveys, I find a contemporaneous association between gender gaps in anxiety, self-concept and self-efficacy and higher gender segregation of graduates across fields of study. These gaps seem to be stronger predictors of national-level gender segregation than religiosity. Field and subfield-level analyses point to a robust association between religiosity and lower segregation levels in the fields of agriculture and health and welfare, and more specifically in the subfield of social services.

From a policy viewpoint, the role of religiosity may be controversial. However, the findings regarding gender gaps in math beliefs tend to indicate that efforts to close gaps between boys and girls might promote a more gender-equal distribution across fields of study in higher education. Nevertheless, it should be stressed that the findings above are based on macro-level data on segregation, and should be taken with caution. Two natural ways to extend this paper would be, first, to scrutinize whether there is any link between cultural traits and vertical segregation, i.e. gender segregation at different attainment levels within higher education; and second, to expand the gender gaps in ability perceptions among young people into other dimensions, such as reading and science.

As expected, exploitation had concurrent and longitudinal associations with bullying, but unexpectedly empathic concern only had concurrent associations and no longitudinal associations with bullying

Empathy, Exploitation, and Adolescent Bullying Perpetration: a Longitudinal Social-Ecological Investigation. Ann H. Farrell, Anthony A. Volk, Tracy Vaillancourt. Journal of Psychopathology and Behavioral Assessment, December 4 2019. https://link.springer.com/article/10.1007/s10862-019-09767-6

Abstract: Empathy has been often negatively associated with bullying perpetration, whereas tendencies to be exploitative have been relatively understudied with bullying. Empathic concern and exploitation may also indirectly link distal social-ecological factors to bullying perpetration. Therefore, the associations among personality (i.e., empathic concern, exploitation), self-perceived social-ecological factors (school bonding, social resources), and bullying perpetration were examined in a sample of 531 adolescents across three years of high school in Ontario, Canada (i.e., Grades 9 to 11; mean age 14.96 [SD = 0.37] in Grade 9). As expected, exploitation had concurrent and longitudinal associations with bullying, but unexpectedly empathic concern only had concurrent associations and no longitudinal associations with bullying. Also as expected, exploitation indirectly linked self-perceived social resources to bullying perpetration, but unexpectedly there were no indirect effects with empathic concern. Findings suggest a complex social ecology whereby a lack of empathic concern may remain an important correlate of bullying within each year of high school, whereas exploitative tendencies may be an important predictor of bullying across the high school years, including to strategically leverage self-perceived social resources.

Keywords: Bullying Adolescents Exploitation Empathic concern Social-ecology

From the main author's PhD Thesis, excerpts of the general discussion (chp 5):

Increasing evidence supports the suggestion that adolescence may be a
developmental period when bullying can be adaptively used to acquire material, social,
and romantic resources (Volk, Dane, & Marini, 2014). Bullying may be adaptive under a
specific combination of proximate intrinsic and distal extrinsic social ecological factors.
In particular, genetically influenced personality traits may indirectly link broader
environments to adolescent bullying. The purpose of this dissertation was to investigate
the associations between exploitative personality traits and broader social ecologies
(family, peers, school, community, and economic) to see how they independently and
indirectly facilitated adolescent bullying perpetration. These associations were examined
concurrently, longitudinally, and experimentally in three studies. My prediction that the
broader social environments would filter through exploitative personality traits to
indirectly associate with bullying perpetration was largely supported throughout these
three studies.
In Study 1, I found that environmental variables from three different ecological
systems (micro-, meso-, and macro-) were concurrently associated with both direct (i.e.,
physical, verbal) and indirect (i.e., social) forms of adolescent bullying primarily through
a trait capturing exploitation (i.e., lower Honesty-Humility). Direct bullying also had
indirect associations from social ecological variables through a trait capturing
recklessness (i.e., lower Conscientiousness). To extend Study 1, I examined
personality-environment associations in a sample of adolescents longitudinally. I found
that exploitation, but not empathy, was longitudinally associated with bullying
perpetration across the first three years of high school. Additionally, social ecological
variables, in particular social status and family functioning, were longitudinally
associated with exploitation, and social status was indirectly longitudinally associated
with bullying through exploitation. Finally, given that Studies 1 and 2 were correlational,
in Study 3, I examined whether bullying perpetration could be simulated through point
allocations in economic games in a laboratory setting. I found that economic games can
be a novel way to experimentally investigate bullying perpetration. Self-report bullying
and selfish Dictator Game point allocations were both related to one another and an
exploitative personality trait (i.e., lower Honesty-Humility). Also, the association
between the environment and both forms of behavior were indirectly facilitated through
this exploitative trait. These three studies together contributed two overall themes in the
social ecology of adolescent bullying perpetration. First, these studies demonstrated the
significance of the role of exploitative personality traits, as opposed to a lack of empathy,
general disagreeableness, or impulsivity, within the context of adaptive adolescent
bullying. Second, these three studies demonstrated a complex social ecology of bullying,
whereby broader social environments from multiple ecological systems can indirectly
facilitate bullying perpetration through exploitative personality traits.


Antisocial Personality and Bullying Perpetration: The Importance of Exploitation

Across all three studies, it was evident that traits capturing exploitation were the
most prominent personality correlates of adolescent bullying perpetration. In both Studies
1 and 3, lower Honesty-Humility was significantly associated with higher bullying
perpetration and selfish Dictator Game point allocations (i.e., an experimental proxy for
bullying). In Study 2, higher exploitation was longitudinally associated with bullying
perpetration. These results are consistent with previous concurrent associations between
adolescent bullying and Honesty-Humility (e.g., Book, Volk, & Hosker, 2012; Farrell,
Della Cioppa, Volk, & Book, 2014), experimental studies on economic game behavior
and Honesty-Humility (e.g., Hilbig, Thielmann, Hepp, Klein, & Zettler, 2015; Hilbig,
Thielman, Wührl, & Zettler, 2015; Hilbig & Zettler, 2009; Hilbig, Zettler, Leist, &
Heydasch, 2013), and finally longitudinal studies on bullying perpetration and narcissism
(i.e., comprised of exploitation and self-superiority; Fanti & Henrich, 2015). It was
evident that adolescents may be strategically exploiting weaker and vulnerable peers to
maximize self-gain, while minimizing costs like victim retaliation. More importantly, my
results demonstrate that a predatory, exploitative tendency may be the most relevant
personality risk factor for engaging in bullying, over and above other personality traits
related to antisocial tendencies.
In contrast to previous studies that found bullying perpetration is often associated
with personality traits such as a lack of empathy, a general tendency to be disagreeable or
angry, and higher impulsivity (e.g., Bollmer, Harris, & Milich, 2006; Caravita, Di Blasio,
& Salmivalli, 2009; Tani, Greeman, Schneider, & Fregoso, 2003), I found lower
Honesty-Humility and higher exploitativeness were associated with bullying, despite
controlling for these other antisocial personality traits. In both Studies 1 and 3, I found
that lower Honesty-Humility was the strongest correlate of bullying perpetration, over
and above the other HEXACO personality traits. Although indirect and direct forms of
bullying and Dictator Game point allocations were both negatively related with lower
Agreeableness (and in Study 1 additionally related to lower Conscientiousness), Honesty-
Humility was the strongest correlate. Thus, it appears that predatory exploitation over
weaker individuals may be the driving personality factor facilitating bullying, even if the
other antisocial traits are still important and associated with bullying. These results are
consistent with recent findings that although Honesty-Humility, Emotionality,
Agreeableness, and Conscientiousness from the HEXACO were all associated with
antisocial tendencies, Honesty-Humility was the largest and driving contributor of
antisociality (Book et al., 2016; Book, Visser & Volk, 2015; Hodson et al., 2018).
However, it is important to note that lower Conscientiousness was additionally a
significant multivariate predictor of direct, but not indirect, bullying. This result
demonstrates that in addition to reflecting strategic exploitation, direct forms of
bullying like hitting, pushing, or kicking, may reflect a risky form of antisocial behavior
that is associated with a general recklessness (Volk et al., 2014). These indirect
associations are also consistent with theories of a faster life history, which posit that
certain individuals who experience competitive or adverse social environments may be
more likely to engage in more impulsive and aggressive behavior to obtain immediate,
short-term access to resources, and bullying may be one behavior that can reflect this
strategy (Dane, Marini, Volk, & Vaillancourt, 2017; Del Giudice & Belsky, 2011;
Hawley, 2011). Interestingly, unlike Agreeableness and Conscientiousness, which had at
least univariate associations with bullying, lower Emotionality or lower empathy had the
fewest univariate and multivariate associations with adolescent bullying.
Although contrasting with the prevalent theories linking lower empathy with
bullying, our lack of association agrees with more contemporary theories of adolescent
bullying as an adaptive, predatory strategy. In Study 1, lower Emotionality was not
significantly related at either the univariate or multivariate levels with bullying, and in
Study 2, lower empathy was only concurrently, but not longitudinally, related with higher
bullying. My results are in contrast to those of previous researchers who found significant
associations between child bullying and lower empathy (e.g., Caravita et al., 2009; Zych,
Ttofi, & Farrington, 2016). Instead, my findings support my prediction that instead of a
lack of emotional recognition or response, a predatory exploitation of others’ weaknesses
may be an important reason why adolescents bully. This may be one potential reason why
empathy related interventions may have been largely ineffective for adolescents (Yeager,
Fong, Lee, & Espelage, 2015). Taken together, all three studies not only support existing
literature on the concurrent association between exploitative style traits and adolescent
bullying (e.g., Book et al., 2012), but extend these findings by providing both quasi-experimental
and longitudinal evidence for this association. These results with
personality and bullying also suggest that not every risk factor for bullying affects all
adolescents in the same way. Instead, adolescents with specific personality traits may be
more likely and willing to use bullying. Further, adolescents with these personality traits
can respond to, or are influenced by, particular environments in multiple ways (Caspi et
al., 2002; Marceau et al., 2013; Moffitt, 2005; Scarr & McCartney, 1983). In my thesis,
these associations between environment and personality were evident through the
multiple social environmental variables that were indirectly associated with bullying
through exploitative personality styles.


Bullying Perpetration and Indirect Associations with Broader Social Ecology

Across the three studies, it was evident that not all social environments facilitate
bullying in the same way for all adolescents. Instead as expected, I found that multiple
adverse and risky social environment variables filtered specifically through exploitative
personality traits to indirectly facilitate adolescent bullying. These social environmental
variables were from multiple ecological systems ranging from proximate economic
power contexts and peer and family relationships, to distal school and community
variables. Starting with the more proximate factors, all three studies demonstrated that
social relationships in the microsystem (i.e., immediate social context), had indirect
associations with bullying most frequently through either lower Honesty-Humility (i.e.,
Study 1 and 3), or higher exploitation (i.e., Study 2), as opposed to other antisocial
personality traits. Occasionally in Study 1, a proximate ecological factor was found to
have an indirect effect with bullying primarily through Honesty-Humility and secondarily
through lower Conscientiousness. In these instances, the strength of the indirect
associations through Honesty-Humility and Conscientiousness were often comparable, as
indicated through the standardized beta coefficients. The associations with Honesty-
Humility and exploitation may be a result of predatory individuals being able to
strategically take advantage of adverse and/or risky social ecological circumstances.
Adverse social relationships including poorer family dynamics and higher peer
problems, along with powerful social positions such as higher social status (i.e., Study 2)
or higher interpersonal influence (i.e., Study 1 and 3), appeared to be risk factors
indirectly associated with bullying through exploitative personality styles. Individuals
with higher social status or social influence, and individuals who experience negative
social relationships characterized by conflict, lower support and lower warmth, may
exploit these social environments. For example, adolescents who know their parents do
not have much knowledge or care for their whereabouts may take advantage of this lack
of interest by engaging in bullying, knowing that they would have fewer repercussions.
Likewise, adolescents who know they have poorer friendships may exploit these friends
and employ these friends in bullying strategies. Finally, exploiting these relationships
may be especially advantageous for adolescents who have higher social status, as they
would likely have greater influence when navigating their peer networks to effectively
assert their power through bullying tactics. These concurrent results held longitudinally
across three years of adolescence, and also held when manipulating dyadic economic
contexts in a laboratory setting.
These results are consistent with broader evolutionary frameworks that help
explain the use of aggressive behavior. Bullying may be a facultative or conditional
adaptation that an adolescent may consciously or subconsciously decide to engage in
after evaluating his or her own personality traits (i.e., exploitative tendencies; Buss, 2011)
in combination with their broader environments (e.g., friendships, family relationships,
social status; Dane et al., 2017; Del Giudice & Belsky, 2011; Hawley, 2011; Volk,
Camilleri, Dane, & Marini, 2012). Adverse and negative social environments may also
facilitate faster life history strategies that encourage aggressive behavior like bullying (as
opposed to cooperative, long-term strategies) as an immediate means for resources (Del
Giudice & Belsky, 2011; Hawley, 2011). After these assessments of the self and
environment, an adolescent may anticipate the immediate benefits of bullying over
weaker peers may outweigh the costs. Additionally, if previous uses of bullying have
been successful, these dominant and exploitative adolescents may be more inclined to use
this behavior again (Dawkins, 1989). Alternatively, adolescents who possess certain
genetically based personality traits such as exploitative tendencies, may already be more
likely to use coercive or bullying behavior, as opposed to prosocial or cooperative forms
of behavior (Del Giudice & Belsky, 2011).
In addition to evolutionary frameworks, my results are consistent with
Bronfenbrenner’s Ecological Systems Theory (EST; Bronfenbrenner, 1979), and with
recent findings that multiple ecological levels can differentially facilitate bullying
perpetration (e.g., Hong & Espelage, 2012). Furthermore, my results demonstrate that
multiple ecological contexts can have indirect associations with individual differences in
personality, similar to previous ecological studies on bullying (e.g., Barboza et al., 2009;
Lee, 2011; Low & Espelage, 2014). However, my results provide some key novel
contributions. My findings demonstrate that there are indirect associations from adverse
parental and peer relationships and socially powerful positions to bullying, specifically
through exploitative traits, as opposed to other antisocial personality traits. These results
are likely due to the reason that exploitative adolescents may be more willing and able to
take advantage of adverse relationships and powerful positions.
It is likely that within these environmental contexts, exploitative adolescents may
experience more social benefits when using bullying (e.g., increased social status), and
may simultaneously have fewer costs imposed by parents and/or weaker peers. One of the
most noteworthy and prominent social ecological variables that emerged were status
related variables that indicate higher power. Across all three studies, it was evident that
higher social status (i.e., Study 2) or higher interpersonal influence (i.e., Studies 1 and 3)
were commonly associated with bullying through exploitative personality, and this
association held concurrently, longitudinally, and in a laboratory based experimental
setting. Bullying is fundamentally about a power imbalance (Volk et al., 2014). By
definition, bullying requires an individual with more power to inflict harm on a weaker
individual. As evident throughout animal studies (e.g., hierarchy in hyenas; Stewart,
1987), and in research in which human participants are assigned a powerful role (e.g.,
role of prison guards; Haney, Banks, & Zimbardo, 1973), a position of higher status and
social power can be translated into gaining resources at the expense of others. It is not
surprising then, that this fundamental feature that distinguishes bullying from other forms
of aggression is reflected in the broader adolescent social ecology. Those willing to use
power to inflict harm on weaker peers may be more effective in doing so if they have
exploitative, predatory tendencies (as opposed to a general lack of concern or empathy for
others). These exploitative tendencies will ultimately assist in taking advantage of higher
social status and influence to strategically bully weaker peers, who are less likely to
defend themselves and/or retaliate.
My findings are similar to previous associations between strategic adolescent bullying and higher perceived popularity, social status, and influence (Dijkstra, Lindenberg, & Veenstra, 2008; Garandeau, Lee, & Salmivalli, 2013; Pellegrini & Long, 2002; Reijntjes et al., 2013; Sentse, Veenstra, Kiuru, & Salmivalli, 2015; Sijtsema, Veenstra, Lindenberg, & Salmivalli, 2009). Additionally, these results are consistent with those of previous researchers who found that although adolescents who bully can be high in peer-perceived popularity, power, and status, they are not necessarily socially preferred or liked by peers as friends (Vaillancourt & Hymel, 2006). Researchers have found that early adolescence is a developmental period when peer-perceived popularity is most valued (LaFontana & Cillessen, 2010). Given that adolescents may be exploiting their social status to engage in bullying, my results support the notion that bullying can be used selectively and adaptively by adolescents for status, a goal that is highly salient during adolescence. A similar pattern of indirect effects also emerged with more distal ecological variables.
Adverse mesosystem variables (i.e., interactions among immediate social environments) and macrosystem variables (i.e., broader cultural attitudes and values) also indirectly facilitated bullying through either lower Honesty-Humility or higher exploitation. Risky, negative aspects of the social environment, such as higher neighborhood violence, higher school competition, and adverse school climates, were indirectly associated with bullying. It appears that in addition to immediate social environments, exploitative adolescents may take advantage of wider negative climates to engage in bullying for self-gain. These broader adverse environments may not provide the social structures, including discipline, that could prevent adolescents from acting on their exploitative motivations. Thus, in addition to assessments of the self and environment, adolescents may have learned through vicarious reinforcement that the benefits of bullying within these environments outweigh the costs, consistent with social learning theory (Bandura, 1978). The fact that risky environments filtered through both lower Honesty-Humility and lower Conscientiousness for direct bullying behavior in Study 1 suggests that while all forms of bullying can be strategically implemented under the right conditions, direct forms of bullying also reflect a recklessness toward consequences (Volk et al., 2014) and a tendency to engage in riskier, aggressive behavior for immediate gain (Del Giudice & Belsky, 2011). Accordingly, these findings provide further support that not all environments affect all adolescents in the same way. Although predatory, exploitative tendencies appear to filter both proximate and distal adverse social environments for bullying perpetration, there are subtle differences in the bullying behavior used. Poorer social relationships, higher social status, and more competitive and violent school and neighborhood contexts appear to be risk factors for bullying as a whole: for exploitative adolescents, who may strategically take advantage of these contexts to bully adaptively, but also for generally impulsive or reckless adolescents willing to engage in direct forms of bullying. Accordingly, there appears to be a successful facultative translation of these risky social environments into adaptive bullying behavior by adolescents with a primarily predatory, exploitative personality style.
Taken together, my results are consistent with previous findings on poorer social relations interacting with Honesty-Humility to predict bullying (e.g., lower parental knowledge; Farrell, Provenzano, Dane, Marini, & Volk, 2017), and with additional ecological findings on personality interacting with, or indirectly linking, social environments for bullying (Barboza et al., 2009; Lee, 2011; Low & Espelage, 2014). My findings suggest that adolescents who bully may not necessarily be generally disagreeable, antisocial individuals who lack empathy. Instead, adolescent perpetrators may be strategic, exploitative individuals who are able to take advantage of their broader social environments and immediate social influence to gain more benefits while simultaneously reducing costs. Studies 1 and 2 both demonstrated that distal and proximate environmental contexts may adaptively filter through an exploitative personality trait to predict bullying, a behavior rooted in taking advantage of power. Study 3 extended the findings of the previous two studies by demonstrating how proximate contextual factors like power can be manipulated to examine bullying and/or similarly related competitive behavior, and how these forms of behavior relate to personality. Despite these significant contributions, this dissertation was not without limitations.

Induced Mate Abundance Increases Women’s Expectations for Engagement Ring Size and Cost

Induced Mate Abundance Increases Women’s Expectations for Engagement Ring Size and Cost. Ashley Locke, Jessica Desrochers, Steven Arnocky. Evolutionary Psychological Science, December 4 2019. https://link.springer.com/article/10.1007/s40806-019-00214-z

Abstract: Research on some non-human species suggests that an abundance of reproductively viable males relative to females can increase female choosiness and preferences for longer-term mating and resource investment by males. Yet little research has explored the potential influence of mate availability upon women’s preferences for signals of men’s commitment and resource provisioning. Using an experimental mate availability priming paradigm, the present study examined whether women (N = 205) primed with either mate scarcity or abundance would differ in their expectations for engagement ring size and cost. Results demonstrated that women who were primed with the belief that good-quality mates are abundant in the population reported expecting a statistically-significantly larger and more expensive engagement ring relative to women primed with mate scarcity. Results suggest that women flexibly attune their expectations for signals of men’s investment based, in part, upon their perception of the availability of viable mates.

Keywords: Priming; Sex ratio; Engagement rings; Social psychology; Evolutionary psychology; Mating behavior

---
Sample
205 undergraduate women aged 17 to 39 (M = 20, SD = 2.87).

Demographic Measures
Prior to the priming task, participants completed measures of age and romantic relationship status (“Are you currently in a committed heterosexual romantic relationship?”).

Mate Availability Priming Task
Using a set of fictitious magazine articles developed by Spielmann, MacDonald, and Wilson (2009), participants were primed with the belief that potential mates were either abundant or scarce. In this task, participants read one of two articles. In the mate-abundance condition, the article described the task of finding a new romantic partner as relatively easy, with the mating population consisting of many available mates. Conversely, in the mate-scarcity condition, the article highlighted the difficulty of finding a new romantic partner, with desirable mates being a scarce resource.

Manipulation Check
Participants then responded to the following two items asking about their own perceptions of mate availability: (1) “It scares me to think there might not be anyone out there for me” and (2) “I feel it is close to being too late for me to find love in my life.”

Engagement Ring Preferences
Following Cloud and Taylor (2018), female participants were asked, “If this man were to propose to you after an extended period of dating, what is the smallest size engagement ring that you would be satisfied with him giving to you?” To make their decision, participants saw five identical engagement rings that differed only by carat weight and cost, ranging from 0.50 carats ($500) to 1.50 carats ($9,000), and their selection was recorded (see Fig. 1 from Cloud and Taylor, 2018).

Are Sex Differences in Mating Preferences Really “Overrated”? The Effects of Sex and Relationship Orientation on Long-Term and Short-Term Mate Preference

Are Sex Differences in Mating Preferences Really “Overrated”? The Effects of Sex and Relationship Orientation on Long-Term and Short-Term Mate Preference. Sascha Schwarz, Lisa Klümper, Manfred Hassebrauck. Evolutionary Psychological Science, December 4 2019. https://link.springer.com/article/10.1007/s40806-019-00223-y

Abstract: Sex differences in mating-relevant attitudes and behaviors are well established in the literature and seem to be robust throughout decades and cultures. However, recent research claimed that sex differences are “overrated”, and individual differences in mating strategies (beyond sex) are more important than sex differences. In our current research, we explore between-sex as well as within-sex differences; further we distinguish between short-term and long-term relationship orientation and their interactions with sex for predicting mate preferences. In Study 1, we analyzed a large dataset (n = 21,245) on long-term mate characteristics. In Study 2 (n = 283), participants indicated their preference for long-term as well as short-term partners. The results demonstrate the necessity to include both intersexual as well as intrasexual differences in mating strategies. Our results question the claim that sex differences in mate preferences are “overrated.”

Keywords: Sex differences; Mate preferences; Sociosexual orientation; Long-term relationship orientation; Short-term relationship orientation; Online dating

Education's marginal cognitive benefit does not reach a plateau until 17 years of education; those with low childhood intelligence derive the largest benefit of education

The influence of educational attainment on intelligence. Emilie Rune Hegelund et al. Intelligence. Volume 78, January–February 2020, 101419. https://doi.org/10.1016/j.intell.2019.101419

Highlights
•    Education has a positive influence on intelligence.
•    The marginal cognitive benefit of education does not reach a plateau until 17 years of education.
•    Individuals with low childhood intelligence derive the largest benefit from education.
•    Findings of relatively small cognitive benefits might be explained by selection bias.

Abstract: Education has been found to have a positive influence on intelligence, but to be able to inform policy, it is important to analyse whether the observed association depends on the educational duration and intelligence prior to variations in educational attainment. Therefore, a longitudinal cohort study was conducted of all members of the Metropolit 1953 Danish Male Birth Cohort who were intelligence tested at age 12 and appeared before the Danish draft boards (N = 7389). A subpopulation also participated in the Copenhagen Aging and Midlife Biobank (N = 1901). The associations of educational attainment with intelligence in young adulthood and midlife were estimated by use of general linear regression with adjustment for intelligence test score at age 12 and family socioeconomic position. Results showed a positive association of educational attainment with intelligence test scores in both young adulthood and midlife after prior intelligence had been taken into account. The marginal cognitive benefits depended on the educational duration but did not reach a plateau until 17 years. Further, intelligence test score at age 12 was found to modify the association, suggesting that individuals with low intelligence in childhood derive the largest benefit from education. Comparing the strength of the association observed among participants and non-participants in our midlife study, we showed that selection due to loss to follow-up might bias the investigated association towards the null. This might explain previous studies' findings of relatively small cognitive benefits. In conclusion, education seems to constitute a promising method for raising intelligence, especially among the least advantaged individuals.


4.2. Comparison with the existing literature
The finding of a positive association between educational attainment and intelligence test scores after prior intelligence has been taken into account is consistent with the extant literature (Clouston et al., 2012; Falch & Massih, 2011; Ritchie, Bates, Der, Starr, & Deary, 2013), including a recent meta-analysis (Ritchie & Tucker-Drob, 2018). More specifically, our results suggested an average increase in intelligence test score of 4.3 IQ points per year of education in young adulthood and 1.3 IQ points per year of education in midlife. The effect estimate in young adulthood is considerably higher than the effect estimate of 1.2 IQ points (95% confidence interval: 0.8, 1.6) reported in the meta-analysis for the control-prior-intelligence design. However, in a simultaneous multiple-moderator analysis, the authors report an adjusted effect estimate of 2.1 IQ points (95% confidence interval: 0.8, 3.4), taking into account the possible influence of age at early test, age at outcome test, outcome test category, and male-only studies. Besides these possible moderators, contextual factors might also account for the contrasting findings between our study and the seven studies included in the meta-analysis. However, it is important to note that our effect estimate in midlife is consistent with the findings of the meta-analysis, suggesting that sample selectivity might have influenced the association observed in midlife in our study, and perhaps also the associations observed in the cohort studies included in the meta-analysis. This is supported by the higher educational attainment and higher IQ at age 12 (IQ mean: 103.2 vs. 98.9) among the individuals in our study population who participated in the midlife study, as well as by our finding of effect measure modification. If both educational attainment and intelligence are positively associated with participation in studies, and individuals with low intelligence in childhood derive the largest benefit from education, selection will bias the investigated associations towards the null. This is probably the reason why the study with the least selected sample in the meta-analysis is the one reporting the largest effect estimate (Falch & Massih, 2011). Based on a male population who were initially intelligence tested in school at age 10 and later at the mandatory military draft board examination at age 20, the authors found an increase in intelligence test score of 3.5 IQ points per year of education (95% confidence interval: 3.0, 3.9), which is much more in line with our results.
Our finding of a stronger association between educational attainment and intelligence test scores in young adulthood compared with midlife, in the subpopulation of individuals who participated in both examinations, is consistent with the findings of the meta-analysis (Ritchie & Tucker-Drob, 2018). One possible explanation for this finding might be that schooling has a larger influence on intelligence compared with vocational education or training, which mainly takes place after the age of 18. However, the finding might also be explained by the smaller time gap between the measurements of educational attainment and intelligence, as the measurement of educational attainment in midlife in most cases will reflect one's educational attainment before the age of 30. The older the age at outcome testing, the larger the time gap, and thus the more additional factors might influence the association, such as the individual's occupational complexity and health (Smart, Gow, & Deary, 2014; Waldstein, 2000).
In general, our finding of a positive association between educational attainment and intelligence test scores should be interpreted with caution. As previously written, it is difficult to separate the positive influence of educational attainment on intelligence from the influence of selection by prior intelligence, whereby individuals with higher intelligence test scores prior to variations in educational attainment progress further in the educational system. Although our results show a strong positive association between educational attainment and intelligence test scores after prior intelligence has been taken into account, a hierarchical analysis of our data suggests that educational attainment increases the amount of explained variance in later IQ by only 7% when IQ at age 12 is already accounted for (R² = 0.46 vs. R² = 0.53; p < .001). Therefore, our findings most likely reflect not only the positive influence of educational attainment on intelligence, but also a residual influence of selection processes that our statistical analyses were not able to take into account.
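The hierarchical analysis described here compares nested regression models: later IQ predicted from childhood IQ alone, versus childhood IQ plus education, with the increment in R² attributed to education. A minimal sketch of that design in Python, using entirely simulated data (the coefficients and sample below are invented for illustration and are not the Metropolit cohort's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (illustrative only; all parameters are made up):
# education is partly selected on prior IQ, and adult IQ depends on both.
iq_12 = rng.normal(100, 15, n)                                            # childhood IQ
educ = np.clip(9 + 0.15 * (iq_12 - 100) + rng.normal(0, 2.5, n), 7, 20)   # years of education
iq_adult = 30 + 0.6 * iq_12 + 1.0 * educ + rng.normal(0, 8, n)            # later IQ

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hierarchical (nested) comparison: prior IQ alone vs. prior IQ + education.
r2_base = r_squared(iq_12.reshape(-1, 1), iq_adult)
r2_full = r_squared(np.column_stack([iq_12, educ]), iq_adult)
print(f"R^2, IQ at 12 only:      {r2_base:.2f}")
print(f"R^2, adding education:   {r2_full:.2f}")
print(f"Increment for education: {r2_full - r2_base:.2f}")
```

The increment (here, R² full minus R² baseline) is the quantity the authors report as the roughly 7% of additional explained variance; the caveat in the text applies equally to the sketch, since education in the simulation is itself partly selected on prior IQ.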
To answer one of our specific aims, we also investigated whether the increase in intelligence test score for each extra year of education depended on the educational duration. Irrespective of whether intelligence was measured in young adulthood or midlife, intelligence test scores were found to increase with increasing years of education in a cubic relation, suggesting that the increase in intelligence test score for each extra year of education diminishes with increasing length of education. This finding supports the hypothesis proposed by the authors of a recent study, who, based on their own data and the existing literature, suggest that the influence of educational attainment on intelligence might eventually reach a plateau (Kremen et al., 2019). However, where the authors of the previous study suggest that this plateau might already be reached by late adolescence, as their findings show no significant association between educational attainment and intelligence test score in midlife after IQ at age 20 has been taken into account, we find no plateau until approximately 17 years of education. In fact, replicating the previous study's statistical analyses, we find an average increase in intelligence test score in midlife of 0.8 IQ points per year of education taking IQ at age 20 into account (Supplementary Table 4). We speculate that this contrasting finding might be explained by the representativeness of the study populations, as the previous study is based on a selected sample of twins serving in the American military at some point between 1965 and 1975. Thus, a study based on the two Lothian birth cohorts, which like our study includes a follow-up examination of a population-representative survey in childhood, finds a weighted average increase in intelligence test score in late life of 1.2 IQ points per year of education taking IQ at age 11 into account (Ritchie et al., 2013). However, the contrasting finding might also be explained by residual confounding due to the use of non-identical baseline and outcome intelligence tests in our study. Nevertheless, in our study, the strongest association between educational attainment and intelligence in midlife was observed in upper-secondary school, i.e., around 10–13 years of education. A possible explanation for this finding might be that pupils up to and including upper-secondary school receive general education, which improves exactly what the intelligence tests included in our study most likely measure: general cognitive ability. After upper-secondary school, individuals start to specialize in different fields, which may explain why the increase in intelligence test score for each extra year of education diminishes. However, the cubic tendency was relatively weak, and, as written above, the association between educational attainment and intelligence did not reach a plateau until approximately 17 years of education, corresponding to the completion of a Master's degree program. As our study is the first to investigate whether the increase in intelligence test score for each extra year of education depends on the educational duration, replication of our finding is needed, preferably in studies with access to the same baseline and outcome tests.
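The plateau logic here can be made concrete: if the cumulative IQ gain is a cubic function of years of education, the marginal benefit per extra year is the derivative of that cubic, and the plateau is where the derivative first crosses zero. A toy sketch (the coefficients below are invented so that the crossing lands near 17 years; they are not the study's fitted values):

```python
import numpy as np

# Hypothetical cubic for cumulative IQ gain as a function of years of education x.
# Coefficients are invented for illustration, NOT the paper's estimates.
a, b, c, d = -0.01 / 3, 0.06, 0.85, 0.0
gain = np.poly1d([a, b, c, d])   # a*x^3 + b*x^2 + c*x + d
marginal = gain.deriv()          # marginal benefit per extra year: 3a*x^2 + 2b*x + c

# The plateau is the positive root of the marginal-benefit curve.
plateau = min(r.real for r in marginal.roots if np.isreal(r) and r.real > 0)
print(f"Marginal benefit reaches zero at ~{plateau:.1f} years of education")
```

With these made-up coefficients the marginal benefit stays positive through secondary and tertiary education and only crosses zero around 17 years, mirroring the qualitative shape the authors describe: diminishing, but not exhausted, returns to schooling.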
Finally, to answer another of our specific aims, we investigated whether the increase in intelligence test score for each extra year of education depended on the intelligence prior to variations in educational attainment. The results showed that the increase in intelligence test score for each extra year of education was higher in the group of individuals with an IQ below 90 than in the group with an IQ of 90–109. Although this finding clearly needs to be replicated, it is in line with the findings of a Danish study investigating whether distributional changes accompanied the secular increases in intelligence test scores among males born in 1939–1958 and 1967–1969 (Teasdale & Owen, 1989). According to the authors of that study, a possible explanation of why individuals with low intelligence in childhood derive the largest benefit from education is that the Danish school system has, for the last seven decades, mainly focused on improving the abilities of the least able (Teasdale & Owen, 1989). Therefore, future studies are needed to investigate whether our finding is peculiar to the Danish school system or whether it can be generalized to school systems in other countries.