Sunday, April 11, 2021

Dynamic network perspective represents major departure from localist models; instead of cognitive functions mapping to discrete neural regions/connections, mental operations are suggested to be supported by unique conjunctions of distributed brain regions

Neuroimaging evidence for a network sampling theory of individual differences in human intelligence test performance. Eyal Soreq, Ines R. Violante, Richard E. Daws & Adam Hampshire. Nature Communications, volume 12, Article number: 2072, April 6, 2021. https://www.nature.com/articles/s41467-021-22199-9

Abstract: Despite a century of research, it remains unclear whether human intelligence should be studied as one dominant, several major, or many distinct abilities, and how such abilities relate to the functional organisation of the brain. Here, we combine psychometric and machine learning methods to examine in a data-driven manner how factor structure and individual variability in cognitive-task performance relate to dynamic-network connectomics. We report that 12 sub-tasks from an established intelligence test can be accurately multi-way classified (74%, chance 8.3%) based on the network states that they evoke. The proximities of the tasks in behavioural-psychometric space correlate with the similarities of their network states. Furthermore, the network states were more accurately classified for higher relative to lower performing individuals. These results suggest that the human brain uses a high-dimensional network-sampling mechanism to flexibly code for diverse cognitive tasks. Population variability in intelligence test performance relates to the fidelity of expression of these task-optimised network states.

Introduction

The question of whether human intelligence is dominated by a single general ability, ‘g’1, or by a mixture of psychological processes2,3,4,5,6, has been the focus of debate for over a century. While performance across cognitive tests does tend to positively correlate, population-level studies of intelligence have clearly demonstrated that tasks which involve similar mental operations form distinct clusters within a positive correlation manifold. These task clusters exhibit distinct relationships with various sociodemographic factors that are not observable when using aggregate measures of intelligence, such as ‘g’2,7.

Recent advances in network science offer the potential to resolve these contrasting views. It has been proposed that transient coalitions of brain regions form to meet the computational needs of the current task8,9,10. These dynamic functional networks are thought to be heavily overlapping, such that any given brain region can express flexible relationships with many networks, depending on the cognitive context8,9,11,12,13. This dynamic network perspective represents a major departure from localist models of brain functional organisation. Instead of cognitive functions mapping to discrete neural regions or specific connections, mental operations are suggested to be supported by unique conjunctions of distributed brain regions, en masse. The set of possible conjunctions can be considered as the repertoire of dynamic network states and the expression of these states may differ across individuals and relate to cognitive performance.

This conceptual shift motivates us to propose a network sampling theory of intelligence, which is conceptually framed by Thomson’s classic sampling theory14. Thomson originally proposed that ‘every mental test randomly taps a number of ‘bonds’ from a shared pool of neural resources, and the correlation between any two tests is the direct function of the extent of overlap between the bonds, or processes, sampled by different tests’. Extending this hypothesis, network sampling theory views the set of connections in the brain that constitute a task-evoked dynamic network state to be equivalent to Thomson’s ‘bonds’; therefore, the set of available brain regions is equivalent to the ‘shared pool of neural resources’. The distinctive clusters within the positive manifold reflect the tendency of operationally similar tasks to rely on similar dynamic networks2,15,16,17. From this perspective, the general intelligence factor ‘g’ is proposed to be a composite measure of the brain’s capacity to switch away from the steady state, as measured in resting-state analyses, in order to adopt information processing configurations that are optimal for each specific task. When recast in this framework, classic models of unitary and multiple-factorial intelligence1,14 are reconciled as different levels of summary description of the same high-dimensional dynamic network mechanism. The notion of domain-general systems such as ‘task active’ or ‘multiple-demand’ cortex is also reconciled within this framework. Specifically, each brain region can be characterised by the diversity of network states of which it is an active member. Brain regions that classic mapping studies define as ‘domain-general’ fall at one extreme of this membership continuum, whereas areas ascribed specific functions, e.g., sensory or motor, fall at the other extreme.

The aim of this study was to test key predictions of network sampling theory using 12 cognitive tasks and machine learning techniques applied to functional MRI (fMRI) and psychometric data. First, we tested the hypothesis that cognitive tasks evoke distinct configurations of activity and connectivity in the brain. We predicted that these configurations would be sufficient to reliably classify individual tasks, and that this would be the case even when focusing on brain regions at the domain-general extreme of the network membership continuum. We then tested Thomson’s hypothesis that similarity between cognitive tasks maps to the ‘overlap’ of the neural resources being tapped. Accordingly, we predicted that the ability to classify pairs of tasks would negatively correlate with their behavioural-psychometric similarity, with tasks that are less similar being classified more reliably. Next, we hypothesised that individual functional dynamic repertoires would positively correlate with task performance, with the top performers expressing task configurations that would be more reliably classified. We also tested the prediction that classification success rates should have a basis in a combination of the distinct visual (VS), motor and cognitive sub-processes of the tasks. Finally, we hypothesised that task performance would be associated with optimal perturbation of the network architecture from the steady state, and that certain features within the network would have more general and more prominent roles in intelligence test performance.
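
The leave-subjects-out classification logic can be sketched on synthetic data. The nearest-centroid classifier, feature count, and noise level below are illustrative assumptions, not the authors' stacked-model pipeline; the point is only that a model trained on task-evoked "network states" from some individuals can label the tasks of a completely held-out individual at well above the 1-in-12 chance level.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tasks, n_features = 20, 12, 100

# Synthetic "network state" features: each task has a shared
# signature plus subject-specific noise (all values hypothetical).
task_signatures = rng.normal(0, 1, (n_tasks, n_features))
X = np.array([[sig + rng.normal(0, 1.0, n_features)
               for sig in task_signatures] for _ in range(n_subjects)])

# Leave-one-subject-out CV with a nearest-centroid classifier:
# the model never sees the held-out individual's data.
correct = 0
for s in range(n_subjects):
    train = np.delete(X, s, axis=0)       # (n_subjects-1, n_tasks, n_features)
    centroids = train.mean(axis=0)        # per-task mean network state
    for t in range(n_tasks):
        d = np.linalg.norm(centroids - X[s, t], axis=1)
        correct += (d.argmin() == t)

accuracy = correct / (n_subjects * n_tasks)
print(f"accuracy={accuracy:.2f}, chance={1/12:.3f}")
```

Because the task signatures are consistent across simulated subjects, even this minimal classifier generalises to unseen individuals, mirroring the paper's point that task-evoked states are consistent across people.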

Discussion

The results presented here are highly compatible with a network science interpretation of Thomson’s sampling theory14. Indeed, as has been noted by others, the relationship between the classic notion of a flexible pool of bonds and the analysis of the brain’s dynamic networks as applied a century later is striking17. Thomson proposed that mental tests tap bonds from a shared pool of neural resources; this proposal is borne out by our observation that different cognitive tasks tend to recruit unique but heavily overlapping networks of brain regions. Furthermore, when testing Thomson’s proposal that the correlation between any two tasks is a function of the extent of overlap between their bonds, we confirmed that the similarities of tasks in multi-factor behavioural-psychometric space correlated strongly with the similarities in the dynamic network states that they evoked. These findings corroborate the key tenets of network sampling theory, further predictions of which were tested utilising a combination of machine learning techniques applied to the fMRI and psychometric data.
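
This kind of second-order comparison, correlating task-by-task distances in psychometric space with task-by-task distances between evoked network states, can be sketched as follows. The distance matrices are simulated, and a plain Pearson correlation with a Mantel-style permutation test stands in for whatever similarity measures the paper actually used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks = 12

# Hypothetical 12-task distance matrices: behavioural-psychometric
# distances, and network-state distances simulated as a noisy copy.
latent = rng.normal(size=(n_tasks, 3))

def pairwise_dist(m):
    return np.linalg.norm(m[:, None] - m[None, :], axis=-1)

behav = pairwise_dist(latent)
neural = pairwise_dist(latent + rng.normal(0, 0.3, latent.shape))

iu = np.triu_indices(n_tasks, k=1)        # unique task pairs only
r_obs = np.corrcoef(behav[iu], neural[iu])[0, 1]

# Mantel-style permutation test: re-label the tasks of one matrix,
# keeping its internal structure, to build a null distribution.
null_r = []
for _ in range(999):
    p = rng.permutation(n_tasks)
    null_r.append(np.corrcoef(behav[iu], neural[np.ix_(p, p)][iu])[0, 1])
p_val = (1 + np.sum(np.abs(null_r) >= abs(r_obs))) / 1000
```

Permuting task labels rather than individual pairs is the standard way to respect the dependence structure of a distance matrix; a naive shuffle of the 66 pairwise entries would overstate significance.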

From a network science perspective, our results showing that the tasks were 12-way classifiable with high accuracy based on their dynamic network states are highly relevant. Indeed, the 74% accuracy achieved by the CRTX stack model was surprising, given that chance was 8.3% and we used just 1 min of task performance data, comprising 30 images, per classified sample. Although activity and connectivity provided complementary information when combined in the stack models, classification accuracy was consistently higher for connectivity when the measures were analysed independently. These results strongly support the hypothesis that the human brain is able to support diverse cognitive tasks because it can rapidly reconfigure its connectivity state in a manner that is optimal for processing their unique computational demands8,9,12,17. A key finding was that the task-evoked dynamic network states were consistent across individuals; i.e., our trained 12-way classification models operated with high accuracy when applied in a robust CV pipeline to data from individuals to whom they were completely naive. This was achieved with an out-of-the-box classifier and no CV optimisation, which is important because it means that the features that drove accurate classification must reflect on a fundamental level how networks in the human brain are prewired to flexibly support diverse tasks.

At a finer grain, these task-optimised network states are most accurately described as a perturbation away from the RSN architecture of the brain12,29. More specifically, it was not simply the case that the relative levels of activity or connectivity within each RSN change, i.e., reflecting different mixtures dependent on task demands; instead, the features that were most specific to a given task-evoked state were predominantly the inter-RSN connections. Put another way, task-evoked states are not a simple blending of RSNs, but a dissolution of the RSN structure. This extends the findings of another recent study, where we used a similar analysis pipeline to examine how different aspects of working memory affected brain activity and connectivity12. Mirroring the current findings, we found that behaviourally distinct aspects of working memory mapped to distinct but densely overlapping patterns of activity and connectivity within the brain. Taken together, these results do not accord well with the hypothesis that the human brain is organised into discrete static networks. Instead, it would appear that the dynamic network coding mechanism is very high-dimensional, relating to the greater number of possible combinations of nodes8,9. There are dependencies whereby some nodes operate together more often than others, but these canonical network states, which are consistently evident in data-driven analyses of the resting state brain, are statistical rather than absolute. Our more holistic interpretation of the relationship between network states and cognitive processes is further supported by the analysis of the classifiability of task clusters when grouped according to their behavioural dimensions. Specifically, when grouped by psychometric, motor or VS characteristics, the clusters were more classifiable than random task groupings in all cases. It was notable, though, that psychometric and motor characteristics provided a stronger basis for classification.
This is interesting, because it pertains to how the most prominent factors of human intelligence differ operationally. For example, it accords well with process overlap theory17, which proposes that general intelligence relates most closely to processes that are common across many different cognitive tasks.

More generally, the fact that inter-individual differences in the classifiability of the tasks predicted variability in a general measure of behavioural task performance provides further evidence that cognitive faculties relate to the way in which the brain expresses these task-optimal network states. Previous research into the neural basis of human intelligence has typically emphasised the role of flexible frontoparietal (FP) brain regions2,30,31,32. In this context, our focused analysis of the INTR ROI set warrants further discussion. Brain regions within the INTR ROI set belong to the classical multiple-demand (MD) cortical volume, which has been closely associated with general intelligence. MD includes the FP brain regions that have the broadest involvement in cognitively demanding tasks19,20,30; this includes executive functions, which enable us to perform complex mental operations33,34 and that have been proposed to relate closely to the ‘g’ factor17. From a graph theoretic perspective, MD ROIs have been reported to have amongst the broadest membership of dynamic networks of any brain regions35 and it has been shown that inter-individual variability in the flexibility of MD nodes, as measured by the degree of their involvement in different functional networks, correlates positively with individuals’ abilities to perform specific tasks, e.g., motor skill learning36 and working memory37. Collectively, these findings highlight a strong relationship between the flexibility of nodes within MD cortex and cognitive ability.

Here, we reconfirmed that MD ROIs were amongst the most consistently active across the 12 tasks. However, we also demonstrated that these ROIs were highly heterogeneous with respect to their activation profiles across those tasks. Furthermore, in many cases they were significantly active for most but not all tasks. This variability in the activation profiles even amongst the most commonly recruited areas of the brain aligns with the idea that MD cortex flexibly codes for diverse tasks in a high-dimensional manner. More critically, the internal activity and connectivity of the INTR ROI set was not strongly predictive of behavioural task performance. Nor did it provide the most accurate basis for classification overall, or the closest correspondence to psychometric structure. Extending to the MDDM set provided an improvement, but it was inclusion of the whole-cortex ROI set that provided the best predictions of task and behavioural performance. Furthermore, connections between the core set of INTR regions and the rest of the brain featured prominently in all of the above cases. This finding accords with bonds theory, insofar as that theory pertains to the wide variety of bonds that contribute to diverse behavioural abilities. It also accords particularly well with the core tenet of network science that cognitive processes are emergent properties of interactions that occur across large-scale distributed networks in the brain10,12.

An intriguing aside pertains to the phenomenon of ‘factor differentiation’. It was originally noted by Spearman38 that ‘g’ explains a greater proportion of variance in individuals who perform lower on intelligence tests. This finding has been robustly replicated over the subsequent century5. Our results provide a simple explanation for factor differentiation. When individuals of higher intelligence perform different cognitive tasks, the dynamic network states that they evoke are more specific. Therefore, there is less overlap in the neural resources that they recruit to perform the tasks. Given the relationship observed here between network similarity and behavioural-psychometric distance, this would be expected to reduce bivariate correlations in task performances and produce a corresponding reduction in the proportion of variance explained by ‘g’.

The boosted ensemble of regression trees provided a simple way to extend the individual differences analysis in order to capture not just mixtures but also interactions between network features when predicting behavioural performance. We observed that increased connectivity between DA and VS systems strongly associated with better performance, whilst increased connectivity within the DMN combined with decreased connectivity between either DA and VS or DMN and FP associated with lower performance. This accords well with previous studies that have shown that these networks update their connectivity patterns according to the task context35,39,40,41,42,43. However, it was particularly notable that inter-RSN connections again played the most prominent role insofar as they formed the roots of all of the trees, meaning they had the broadest relevance across individuals. This further accords with the view that task-evoked network states are best described as a perturbation from the RSN architecture12,29.
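
A boosted ensemble of regression trees can be sketched with depth-1 trees ("stumps") fitted to successive residuals. The data, effect directions, and feature labels below are simulated stand-ins for the paper's connectivity features, not its actual model; the sketch just shows why the features chosen as tree roots are the ones with the broadest relevance.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 6   # hypothetical: 300 individuals, 6 connectivity features

X = rng.normal(size=(n, p))
# Toy labels echoing the reported directions: feature 0 (think
# "DA-VS connectivity") helps performance, feature 1 (think
# "within-DMN connectivity") hurts it; the rest are noise features.
y = 1.0 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.3, n)

def fit_stump(X, r):
    """Best depth-1 regression tree (median split) on residuals r."""
    best = (np.inf, 0, 0.0, 0.0)
    for j in range(X.shape[1]):
        left = X[:, j] <= np.median(X[:, j])
        lm, rm = r[left].mean(), r[~left].mean()
        sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
        if sse < best[0]:
            best = (sse, j, lm, rm)
    return best[1:]

# Gradient boosting on squared loss: each stump fits the residuals
# left by the ensemble so far, shrunk by a learning rate.
resid, lr, roots = y.copy(), 0.1, []
for _ in range(50):
    j, lm, rm = fit_stump(X, resid)
    pred = np.where(X[:, j] <= np.median(X[:, j]), lm, rm)
    resid -= lr * pred
    roots.append(j)

# Features chosen most often as stump roots carry the broadest
# predictive relevance, mirroring the paper's interpretation.
root_counts = np.bincount(roots, minlength=p)
```

In a real boosted ensemble the trees would be deeper, which is what lets them capture interactions between features; stumps keep the sketch short while preserving the residual-fitting mechanics.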

In summary, we validated multiple key predictions of network sampling theory. This theory can potentially explain key findings from behavioural psychometrics, experimental psychology and functional neuroimaging research within the same overarching network-neuroscience framework, and bridges the classic divide between unitary and multi-factorial models of intelligence. Given that our machine learning analysis pipeline aligns naturally with multivariate network coding whereas more commonly applied univariate methods do not, we believe that the analysis of multivariate network states as applied here has untapped potential in clinical research; e.g., providing functional markers for quantifying the impact of pathologies and interventions on the brain’s capacity to flexibly express task-optimised network states11,29.

Is the Psychopathic Brain an Artifact of Coding Bias? A Systematic Review

Is the Psychopathic Brain an Artifact of Coding Bias? A Systematic Review. Jarkko Jalava et al. Front. Psychol., April 12, 2021. https://doi.org/10.3389/fpsyg.2021.654336

Abstract: Questionable research practices are a well-recognized problem in psychology. Coding bias, or the tendency of review studies to disproportionately cite positive findings from original research, has received comparatively little attention. Coding bias is more likely to occur when original research, such as neuroimaging, includes large numbers of effects, and is most concerning in applied contexts. We evaluated coding bias in reviews of structural magnetic resonance imaging (sMRI) studies of PCL-R psychopathy. We used PRISMA guidelines to locate all relevant original sMRI studies and reviews. The proportion of null-findings cited in reviews was significantly lower than those reported in original research, indicating coding bias. Coding bias was not affected by publication date or review design. Reviews recommending forensic applications—such as treatment amenability or reduced criminal responsibility—were no more accurate than purely theoretical reviews. Coding bias may have contributed to a perception that structural brain abnormalities in psychopaths are more consistent than they actually are, and by extension that sMRI findings are suitable for forensic application. We discuss possible sources for the pervasive coding bias we observed, and we provide recommendations to counteract this bias in review studies. Until coding bias is addressed, we argue that this literature should not inform conclusions about psychopaths' neurobiology, especially in forensic contexts.

Discussion

Neurobiological reviews of PCL-R and PCL:SV psychopathy significantly under-report null-findings in sMRI research, indicating widespread coding bias. The majority (64.18%) of original sMRI findings were nulls, whereas nulls made up a small minority (8.99%) of effects in review literature. Reviewers, in other words, preferentially reported data supporting neurobiological models of psychopathy. We found no evidence that the reporting imbalance was due to factors other than bias: systematic, narrative, and targeted reviews all reported disproportionately few nulls (though meta-analyses reported too few effects to evaluate); the pattern was stable across time; and it was not driven by exploratory research or outliers. Notably, reviews calling for forensic application of the data, such as treatment, criminal responsibility, punishment, and crime prediction, were no more accurate than purely theoretical reviews. Applied reviews were, however, more likely than theoretical reviews to conclude that the data supported neurobiological bases of psychopathy. These findings are surprising, as applied reviews in other fields—such as those examining drug safety and efficacy—typically face the highest burden of proof and are thus most likely to emphasize limitations in the data [see e.g., Köhler et al. (2015)].
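
The headline gap can be checked with a simple two-proportion z-test. The percentages come from the review, but the raw effect counts below are hypothetical round numbers chosen only to reproduce those percentages; the authors' actual counts and statistical test may differ.

```python
import math

# Illustrative counts only: the review reports the percentages
# (64.18% nulls in original studies vs. 8.99% in reviews), but
# these denominators are made-up round numbers for the sketch.
nulls_orig, total_orig = 643, 1002      # ~64.2% null findings
nulls_rev,  total_rev  = 90, 1001      # ~9.0% null findings

p1, p2 = nulls_orig / total_orig, nulls_rev / total_rev
p_pool = (nulls_orig + nulls_rev) / (total_orig + total_rev)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_orig + 1 / total_rev))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value

print(f"z={z:.1f}, p={p_value:.1e}")
```

At any plausible sample size, a gap this large is wildly inconsistent with chance, which is why the authors attribute it to coding bias rather than sampling variability.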

Our study is the first to systematically examine coding bias in cognitive neuroscience. Although our findings are limited to structural imaging in psychopathy, they suggest that coding bias should be considered alongside more widely recognized Questionable Research Practices (QRPs) such as p-hacking, reporting bias, publication bias, citation bias, and the file drawer problem. QRPs in original research filter out null-findings at early stages of the research and publication process, while coding and citation bias further distort the state of scientific knowledge by eliminating null findings from reviews. In addition to coding bias, we found evidence of reporting bias during our review of sMRI studies. Null-findings in the original literature were rarely reported in the study abstracts and were frequently not reported fully in results sections. Nulls often appeared only in data or supplemental tables, and in some cases they had to be inferred by examining ROIs mentioned in the introduction but not in the results section. This illustrates how QRPs are not mutually exclusive, and the presence of one QRP may also signal the presence of another [see e.g., Agnoli et al. (2017)].

The coding bias we observed may have a number of explanations. First, reviewers may have been subject to confirmation bias. Confirmation bias refers to the tendency to weigh evidence that confirms a belief more heavily than evidence that does not (Nickerson, 1998). Reviewers in our study may have assumed neurobiological abnormalities in psychopaths—perhaps from previous reviews—and looked more carefully for data to confirm that assumption. Confirmation bias has been cited as a possible explanation for under-reporting of null-findings in original research (Forstmeier et al., 2017). Our findings suggest that it may play a role in review literature, where null-findings would be especially difficult to square with theories presuming group differences [see e.g., Sterling et al. (1995) and Ferguson and Heene (2012)], and reporting bias would make it very hard to locate disconfirming (null) findings. Second, reviewers may have been following convention. The earliest review studies did not generally include null-findings, and later reviews may have interpreted this as a precedent to follow. Third, explicit and tacit publication preferences may increase coding bias. Research tracking original studies from grant proposal to publication shows that most null-findings are not even written up for publication, and that journals—particularly top-tier journals—show a marked preference for strong positive findings (Franco et al., 2014; Ioannidis et al., 2014). Similarly, review authors may have declined to submit reviews with inconclusive findings. Given the extent of publication bias, it is also possible that journal editors may have been more likely to reject inconclusive reviews in favor of those summarizing consistent, positive findings.

Coding bias observed in our study has a number of potential effects. Aside from distorting the true state of knowledge about structural brain abnormalities in psychopaths, it may also have led at least some researchers and courts to believe that the abnormalities are consistent enough for forensic application. This may have encouraged practitioners to de-emphasize or overlook more reliable, behavioral indicators of criminal responsibility, future dangerousness and treatment amenability in favor of less reliable predictors, such as brain structure. Neuroprediction of crime has a number of empirical shortcomings, such as unknown measurement error and inadequate outcome variables (Poldrack et al., 2018). Using MRI data to predict crime can thus introduce substantial error into an already imperfect process (e.g., Douglas et al., 2017). Neurobiologically-informed assessments and treatments are even less likely to be effective if the population's neurobiology is fundamentally misunderstood. Given the extent of coding bias in the psychopathy literature, such interventions may in fact be harmful.

More broadly, coding bias may have contributed to reverse inference [see Scarpazza et al. (2018)] whereby reports of brain abnormalities are taken as proof that psychopathy is a legitimate diagnostic category [for an argument such as this, see e.g., Kiehl and Hoffman (2011)]. Similarly, some researchers have suggested that psychopathy diagnoses could be enhanced by neuroimaging evidence (e.g., Hulbert and Adeli, 2015). Arguments of this sort can distract attention from problems in other aspects of the PCL-R, particularly in its psychometric properties. Recently, these critiques have intensified, with authors raising concerns about the reliability of the PCL-R, its utility in forensic contexts (DeMatteo et al., 2020), its factor structure, and its predictive validity (Boduszek and Debowska, 2016). Using neurobiology to validate psychopathy as a diagnostic category is doubly problematic: not only are presumed brain abnormalities in psychopathy broad and non-specific [for problems in reverse inference, see Poldrack (2011) and Scarpazza et al. (2018)], but as we have shown here, their consistency appears to be largely misunderstood as well.

In light of our findings, we recommend the following: First, published review literature on sMRI studies of PCL-R and PCL:SV psychopathy should be approached with caution, especially when the literature is used to influence forensic decisions. Second, we recommend that guidelines for conducting review literature be revised to include explicit guidance for avoiding coding bias. Although the problem of un- and under-reported null-findings is recognized [e.g., Pocock et al., 1987; Hutton and Williamson, 2000; guidelines for accurate reporting in review literature also exist, see Petticrew and Roberts (2008), American Psychological Association (2008), and Moher et al. (2015)], the role of coding bias, by and large, is not. Third, we recommend that review literature pay careful attention to the a priori likelihood of null-findings in their data. In our example, both the PCL-R (DeMatteo et al., 2020) and neuroimaging methods (Nugent et al., 2013) have relatively low reliability. It is therefore not realistic that sMRI research on psychopathy should yield more than 91% positive findings [for more extended discussions relating to fMRI, see Vul et al. (2009) and Vul and Pashler (2017)]. Fourth, we recommend that the production of new data should be complemented by closer examination of data already published. Among the 45 reviews we evaluated, we found a single study (Plodowski et al., 2009) that comprehensively reported all nulls in the original literature. Unfortunately, it was also among the least cited reviews, suggesting that accuracy and scientific impact do not necessarily go together. Finally, we recommend that reviewers pay close attention to potential biases—such as publication and reporting bias, p-hacking, and the file drawer problem—in the original literature, and take measures to compensate for them. Currently, it appears that reviews largely magnify them instead.

Limitations

Our study has a number of important limitations. First, in order to focus on forensically relevant studies, we limited our analysis to PCL-R and PCL:SV psychopathy. We also excluded studies that reported on PCL-R Factor scores only (e.g., Bertsch et al., 2013), that did not use a case-control or correlational method (Sato et al., 2011; Kolla et al., 2014), and that included youth samples. It is possible that the excluded studies were reported more accurately in review literature than those we included. Second, we excluded original and review studies not published in English. This may have introduced a selection bias of our own, as it is possible that non-English publications use different standards of reporting and reviewing than those published in English. Third, our findings may have underestimated the extent of the bias. For example, one whole-brain analysis reviewed here (Contreras-Rodríguez et al., 2015) only reported positive findings, which means that the remaining brain regions were unreported nulls. Had these unreported null-findings been included in our analysis, the true percentage of nulls in the original studies would have been greater than 64.18%. Further, we did not account for possible publication bias. Since null-findings are presumed to be less likely than null-rejections to be published, the percentage of true nulls in the field is essentially unknown, though it may be significantly higher than we estimated (review literature examined here did not report any unpublished null-findings). Finally, we excluded fMRI and other imaging methods entirely. Future research could evaluate whether coding bias is present in reviews of this literature as well.

Those with low levels of conscientiousness, life satisfaction, & self-esteem, as well as high levels of neuroticism, used more drugs on average; in contrast, found little evidence for personality change following substance use

How Does Substance Use Affect Personality Development? Disentangling Between- and Within-Person Effects. Lara Kroencke et al. Social Psychological and Personality Science, July 7, 2020. https://doi.org/10.1177/1948550620921702

Abstract: Little is known about the effects of substance use on changes in broad personality traits. This 10-year longitudinal study sought to fill this void using a large, representative sample of the Dutch population (N = 10,872), which provided annual assessments of drug use (tobacco, alcohol, sedatives, soft drugs, ecstasy, hallucinogens, and hard drugs), Big Five personality traits, life satisfaction, and self-esteem. Using multilevel models, we examined the longitudinal associations between drug use and personality both between and within persons. Results indicated that individuals with low levels of conscientiousness, life satisfaction, and self-esteem, as well as high levels of neuroticism, used more drugs on average (between-person effects). In contrast, we found little evidence for personality change following substance use (within-person effects). We discuss these findings in the context of previous empirical and theoretical work and highlight opportunities for future research.

Keywords: substance use, drug use, personality development, life satisfaction, self-esteem

This research examined the 10-year longitudinal associations between broad personality traits, life satisfaction, and self-esteem and use of different legal and illegal substances in a representative sample of the Dutch population. The purpose was to disentangle stable between-person effects from within-person associations to advance our understanding of the sources that may drive personality change. In what follows, we discuss our findings with respect to the previous literature and highlight their implications.
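
The between/within distinction at the heart of this design can be illustrated with a person-mean-centring sketch on simulated panel data. The multilevel models in the paper are richer, and every number below is invented to mimic the reported pattern: a between-person association (heavier users are higher in neuroticism on average) with no within-person effect (a person's own fluctuations in use do not shift their neuroticism).

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_waves = 500, 10

# Simulated panel (all values hypothetical): a stable person-level
# tendency drives both average drug use and neuroticism, but
# year-to-year changes in use are unrelated to neuroticism.
trait_use = rng.normal(size=n_persons)                  # stable person mean
use = trait_use[:, None] + rng.normal(0, 1, (n_persons, n_waves))
neuro = 0.5 * trait_use[:, None] + rng.normal(0, 1, (n_persons, n_waves))

def slope(x, y):
    """Simple least-squares slope of y on x (both flattened)."""
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x)

b_pooled = slope(use, neuro)                  # conflates the two levels
person_mean = use.mean(axis=1, keepdims=True)
b_between = slope(person_mean.repeat(n_waves, 1), neuro)
b_within = slope(use - person_mean, neuro)    # person-mean centred
```

The pooled slope lands between the two level-specific slopes, which is exactly why a naive longitudinal regression can masquerade as evidence of personality change; centring each person's use around their own mean isolates the within-person effect, which here is near zero by construction.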

Consistent with our preregistration and past research, we found evidence for moderate between-person associations between drug use and personality traits. Specifically, individuals who were high in neuroticism and low in conscientiousness were more likely to consume drugs. These findings were mirrored by associations with life satisfaction and self-esteem (participants lower in life satisfaction and self-esteem were more likely to report substance use). As expected from our power analysis, even small to moderate effects (B > .30) were typically significant, except for infrequently consumed substances.

The fact that conscientiousness was related to use of nearly all substances is consistent with its association with a wide range of health behaviors (Bogg & Roberts, 2004). The relationships between substance use and neuroticism may indicate attempts of self-medication among neurotic individuals in an effort to decrease negative affective states (e.g., Khantzian, 1997). Interestingly, these between-person effects were more pronounced for less frequently consumed substances.

Regarding personality change, our study is among the first to fully disentangle between- from within-person effects and hence represents a more conservative test for the hypotheses at hand. Contrary to previous studies, we found few within-person effects of drug use on subsequent personality change. Even when significant, these effects were considerably smaller than the between-person effects, and none of the effects were predicted based on the existing literature. The within-person effects for the more malleable variables life satisfaction and self-esteem were also small and rarely significant, highlighting the robustness of the results. Below, we will discuss several possible reasons for the lack of predicted within-person effects.

First, our study was limited by selective attrition and somewhat lower power for rarely consumed drugs. Importantly, our power for relatively frequently consumed drugs was adequate even for small within-person effects. As such, the null findings for those effects are unlikely to represent Type II errors.

Second, we investigated whether a drug was consumed during the last month, but we did not measure substance use over longer periods of time, nor did our measures account for the intensity and context of use. We tried to control for these limitations (e.g., by investigating the effects of repeated use), but future studies should replicate our results using alternative measures of drug use.

Third, the intervals between personality and drug use assessments were relatively long, preventing us from examining transitory effects (less than 200 days). Our analyses were also restricted by the limited number of assessments per person. Future studies should include more measurement points and examine both the bivariate trajectories of substance use and personality and the effects of certain substance use life events (e.g., first onset of use) on personality trajectories.

Our findings have important theoretical implications. First, drug use has been proposed as a candidate mechanism for changes in personality that may be mediated via biological pathways (Costa et al., 2019) as well as behavioral or social mechanisms. Although theoretically plausible, we found little evidence for such effects. Second, we observed large variability in the associations between substance use and personality (i.e., random effects), indicating that, despite the lack of strong main effects, there are significant individual differences in within-person associations between substance use and personality. In other words, substance use might have negative effects for some people but no effects or even positive effects for others. Future studies should examine which moderator variables explain these different trajectories.

To our knowledge, this is the first large-scale study examining the impact of a wide range of drugs on the Big Five personality traits, life satisfaction, and self-esteem. We analyzed data from more than 10,000 individuals that were collected over a period of more than 10 years with an average of three assessments for each participant, using highly reliable personality measures. In addition, we used statistical models that effectively distinguished between- and within-person effects. Overall, our study provides strong evidence for between-person relationships between substance use and personality differences but little evidence for within-person changes in personality following substance use.

Do Newborns Have the Ability to Imitate? We can now rule out some long-standing explanations for why the effect might be difficult to detect; only some research groups observe it, and the published literature is biased

Do Newborns Have the Ability to Imitate? Virginia Slaughter. Trends in Cognitive Sciences, Vol 25, Issue 5, pp 377-387, March 13, 2021. https://doi.org/10.1016/j.tics.2021.02.006

Highlights

Although many assume that newborn infants imitate others, new data and analyses suggest it is not a reliable effect.

Meta-analysis of human neonatal imitation studies revealed an overall positive effect which was not moderated by major methodological variations, but did vary by research group.

Future studies of newborn imitation should adopt modern procedures to eliminate potential biases.

Researchers should also test models of how imitation could be learned. Associative Sequence Learning (ASL) proposes that coincident experience of producing and perceiving body gestures over the first year of life creates mirror systems that support imitation.

Neonatal imitation is widely accepted as fact and cited as evidence of an inborn mirror neuron system that underpins human social behaviour, even though its existence has been debated for decades. The possibility that newborns do not imitate was reinvigorated recently by powerful longitudinal data and novel analyses. Although the evidence is still mixed, recent research progresses the debate by ruling out some long-standing explanations for why the effect might be difficult to detect, by showing that only some research groups observe it, and by revealing indications that the published literature is biased. Further advances will be made with updated testing procedures and reporting standards, and investigation of new research questions such as how infants could learn to imitate.
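The claim that the imitation effect "varies by research group" comes from testing a categorical moderator in a random-effects meta-analysis. The sketch below is purely illustrative (not Slaughter's actual analysis, and the effect sizes are invented): it pools hypothetical standardized effects within each of two research groups using a DerSimonian-Laird estimator, the kind of subgroup comparison that can reveal group-level moderation.

```python
# Illustrative DerSimonian-Laird random-effects pooling, applied
# separately to two hypothetical research groups. Effect sizes and
# variances are made up for demonstration.
import numpy as np

def dl_pooled(effects, variances):
    """Return the DerSimonian-Laird random-effects pooled estimate."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled mean
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    return np.sum(w_re * effects) / np.sum(w_re)

# Hypothetical studies: group A reliably observes the effect, group B
# observes effects near zero.
group_a = ([0.6, 0.5, 0.7], [0.04, 0.05, 0.04])
group_b = ([0.05, -0.1, 0.1], [0.04, 0.05, 0.04])

print("group A pooled:", dl_pooled(*group_a))
print("group B pooled:", dl_pooled(*group_b))
```

A formal moderator test would compare the subgroup estimates against their pooled standard errors (or fit a meta-regression with group as a predictor); the point here is only that pooling within subgroups can expose effects driven by some laboratories and not others.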

Concluding Remarks

When considering all the evidence, it is hard to maintain the conviction that newborns imitate. However, there is still no definitive answer to the question. To find an answer, several things need to happen. Researchers should replace the 40-year-old methodology that has nurtured this controversy with modern approaches to data collection, analysis and reporting. It is also time to ask new questions: are newborns physically capable of imitating (Box 2)? To what extent do adults imitate babies in everyday interactions? Can imitation be trained in the first months of life? It also would be helpful if authors writing about newborns, imitation or mirror neurons, acknowledged the ongoing controversy rather than treating neonatal imitation as a fact.
Box 2
Do Newborns Have Voluntary Control of Oral–Facial Movements?
Putting aside the controversial evidence for and against newborn imitation, Keven and Akins [] considered whether newborn infants’ sensory–motor and brain development enables imitation. Their detailed analysis drawing on modern accounts of early neuromotor development suggested that imitation of mouth gestures, and tongue protrusion in particular, is beyond the capacity of neonates whose nervous systems are still adapting to postpartum life. It takes months for the newborn brain to coordinate breathing with sucking and swallowing liquids. As these functions mature, newborns engage in repetitive oral activity, including a lot of mouth opening and closing, and tongue protrusion and retraction. These patterned behaviours are involuntary, driven by subcortical brain mechanisms, and increase when newborns are aroused.
Keven and Akins’ analysis suggests that voluntary production of oral–facial movements in response to a matching model is physically impossible in newborns, because these gestures are generated by the brain’s subcortex in the first months of life. This would mean that so-called imitation in the newborn period is simply coincidental, since neonates’ involuntary mouth movements increase in response to the arousing sight of an experimenter’s animated face. This conclusion has been offered previously, based on observations that newborns increase their tongue protrusions in response to various arousing stimuli including light displays and orchestral music []. Keven and Akins’ analysis also addresses the claim that newborn imitation fades out around 2–3 months of age: at that stage, the ‘wiring up’ of sucking, swallowing, and respiration functions is complete, so infants no longer involuntarily produce oral–facial movements in response to arousing stimuli.
Keven and Akins’ analysis demonstrates the value of asking different questions in relation to newborn imitation, rather than just focusing on ‘do they, or don’t they?’ However, their analysis was itself controversial, as evident in the open peer commentaries accompanying their article. For instance, some commentators dismissed the analysis as irrelevant to newborn imitation, relying on claims that a range of facial and manual gestures, in addition to tongue protrusion and mouth opening, are imitated by neonates [,]. Others accepted that newborns’ oral–facial movements promote maturation of sucking and swallowing but argued that they simultaneously function as communicative signals in face-to-face interactions with caregivers [,].
There is no doubt that imitation is a central element of human development. Indeed, there is a vast literature documenting what children imitate, from whom, and under what circumstances []. However, we still do not know when or how this ubiquitous behaviour emerges, which means that we do not truly understand it (see Outstanding Questions). If imitation turns out to be learned rather than inborn, this would not diminish the theoretical significance of imitation for infant–parent bonding, social learning or later-developing interpersonal skills. Rather, it would highlight the brain’s proclivity to create connections between ourselves and others, from the first months of life.
Outstanding Questions
Is there evidence of imitation when newborns are tested with objective measures such as EMG?
Does video modelling of gestures genuinely increase newborns’ imitative response as suggested by the meta-analysis? If so, why?
What accounts for the meta-analytic finding that neonatal imitation varies by research group?
How frequently do infants experience observation–action correspondences during a typical day and is this variable related to production of imitation?
Are different types of observation–action correspondence (e.g., self-observation, mirror exposure, and caregiver mimicry) related to different forms of imitation in infancy?
Can imitation be promoted in infants with observation–action training as predicted by ASL?