Tuesday, August 9, 2022

Cognitive training is completely ineffective in advancing cognitive function and academic achievement, but the field has maintained an unrealistic optimism about it

Cognitive Training: A Field in Search of a Phenomenon. Fernand Gobet, Giovanni Sala. Perspectives on Psychological Science, August 8, 2022. https://doi.org/10.1177/17456916221091830

Abstract: Considerable research has been carried out in the last two decades on the putative benefits of cognitive training on cognitive function and academic achievement. Recent meta-analyses summarizing the extant empirical evidence have resolved the apparent lack of consensus in the field and led to a crystal-clear conclusion: The overall effect of far transfer is null, and there is little to no true variability between the types of cognitive training. Despite these conclusions, the field has maintained an unrealistic optimism about the cognitive and academic benefits of cognitive training, as exemplified by a recent article (Green et al., 2019). We demonstrate that this optimism is due to the field neglecting the results of meta-analyses and largely ignoring the statistical explanation that apparent effects are due to a combination of sampling errors and other artifacts. We discuss recommendations for improving cognitive-training research, focusing on making results publicly available, using computer modeling, and understanding participants’ knowledge and strategies. Given that the available empirical evidence on cognitive training and other fields of research suggests that the likelihood of finding reliable and robust far-transfer effects is low, research efforts should be redirected to near transfer or other methods for improving cognition.

Keywords: cognitive training, meta-analysis, methodology, working memory training

As is clear from the empirical evidence reviewed in the previous sections, the likelihood that cognitive training provides broad cognitive and academic benefits is very low indeed; therefore, resources should be devoted to other scientific questions—it is not rational to invest considerable sums of money in a scientific question that has essentially been answered in the negative. In a recent article, Green et al. (2019) made exactly the opposite decision—they strongly recommended that funding agencies should increase funding for cognitive training. This obviously calls for comment.

The aim of Green et al.’s (2019) article was to provide methodological recommendations and a set of best practices for research on the effect of behavioral interventions aimed at cognitive improvement. Among other things, the issues addressed include the importance of distinguishing between different types of studies (feasibility, mechanistic, efficacy, and effectiveness studies), the type of control groups used, and expectation effects. Many of the points addressed in detail by Green et al. reflected sound and well-known research practices (e.g., the necessity of running studies with sufficient statistical power, the need to define the terminology used, and the importance of replications; see also Simons et al., 2016).

However, the authors made disputable decisions concerning central questions. These include whether superordinate terms such as “cognitive training” and “brain training” should be defined, whether a discussion of methods is legitimate while ignoring the empirical evidence for or against the existence of a phenomenon, the extent to which meta-analyses can compare studies obtained with different methodologies and cognitive-enhancement methods, and whether multiple measures should be used for a latent construct such as intelligence.

Lack of definitions

Although Green et al. (2019) emphasized that “imprecise terminology can easily lead to imprecise understanding and open the possibility for criticism of the field,” they opted to not provide an explicit definition of “cognitive training” (p. 4). Nor did they define the phrase “behavioral interventions for cognitive enhancement,” used throughout their article. Because they specifically excluded activities such as video-game playing and music (p. 3), we surmised that they used “cognitive training” to refer to computer tasks and games that aim to improve or maintain cognitive abilities such as WM. The term “brain training” is sometimes used to describe these activities, although it should be mentioned that Green et al. objected to the use of the term.

Note that researchers investigating the effects of activities implicitly or explicitly excluded by Green et al. (2019) have emphasized that the aim of those activities is to improve cognitive abilities and/or academic achievement, for example, chess (Jerrim et al., 2017; Sala et al., 2015), music (Gordon et al., 2015; Schellenberg, 2006), and video-game playing (Bediou et al., 2018; Feng et al., 2007). For example, Gordon et al.’s (2015) abstract concluded by stating that “results are discussed in the context of emerging findings that music training may enhance literacy development via changes in brain mechanisms that support both music and language cognition” (p. 1).

Green et al. (2019) provided a rationale for not providing a definition. Referring to “brain training,” they wrote:

We argue that such a superordinate category label is not a useful level of description or analysis. Each individual type of behavioral intervention for cognitive enhancement (by definition) differs from all others in some way, and thus will generate different patterns of effects on various cognitive outcome measures. (p. 4)

They also noted that even using subcategories such as “working-memory training” is questionable. They did note that “there is certainly room for debate” (p. 4) about whether to focus on each unique type of intervention or to group interventions into categories.

In line with common practice (e.g., De Groot, 1969; Elmes et al., 1992; Pedhazur & Schmelkin, 1991), we take the view that definitions are important in science. Therefore, in this article, we have proposed a definition of “cognitive training” (see “Defining Terms” section above), which we have used consistently in our research.

Current state of knowledge and meta-analyses

A sound discussion of methodology in a field depends on the current state of knowledge in this field. Whereas Green et al. (2019) used information gleaned from previous and current cognitive-training research to recommend best practices (e.g., use of previous studies to estimate the sample size needed for well-powered experiments), they also explicitly stated that they would not discuss previous controversies. We believe that this is a mistake because, as just noted, the choice of methods is conditional on the current state of knowledge. In our case, a crucial ingredient of this state is whether cognitive-training interventions are successful—specifically, whether they lead to far transfer. One of the main “controversies” precisely concerns this question, and thus it is unwise to ignore it.

Green et al. (2019) were critical of meta-analyses and argued that studies cannot be compared:

For example, on the basic research side, the absence of clear methodological standards has made it difficult-to-impossible to easily and directly compare results across studies (either via side-by-side contrasts or in broader meta-analyses). This limits the field’s ability to determine what techniques or approaches have shown positive outcomes, as well as to delineate the exact nature of any positive effects – e.g., training effects, transfer effects, retention of learning, etc. (p. 3)

These comments wholly underestimate what can be concluded from meta-analyses. Like many other researchers in the field, Green et al. (2019) assumed that (a) the literature is mixed and, consequently, (b) the inconsistent results depend on differences in methodologies between researchers. However, assuming that there is some between-studies inconsistency and speculating on where this inconsistency stems from is not scientifically apposite (see “The Importance of Sampling Error and Other Artifacts” section above). Rather, quantifying the between-studies true variance (τ²) should be the first step to take.
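As an illustration of what this first step looks like in practice, here is a minimal sketch (with invented study numbers, not data from any of the meta-analyses discussed) of the standard DerSimonian-Laird estimate of τ²; an estimate near zero means that the apparent between-studies inconsistency is compatible with sampling error alone.

```python
# Minimal sketch (hypothetical numbers): estimating between-studies true
# variance (tau^2) with the DerSimonian-Laird method.
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Return the DL estimate of tau^2 for a random-effects meta-analysis."""
    y = np.asarray(effects, dtype=float)    # study effect sizes (e.g., Hedges' g)
    v = np.asarray(variances, dtype=float)  # their sampling variances
    w = 1.0 / v                             # fixed-effect weights
    y_bar = np.sum(w * y) / np.sum(w)       # weighted mean effect
    Q = np.sum(w * (y - y_bar) ** 2)        # heterogeneity statistic
    df = len(y) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - df) / C)           # truncated at zero

# Hypothetical studies: small positive and negative effects with plausible variances.
effects = [0.12, -0.05, 0.20, 0.02, -0.10]
variances = [0.04, 0.03, 0.05, 0.02, 0.03]
print(f"tau^2 = {dersimonian_laird_tau2(effects, variances):.3f}")
```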

Using latent factors

In the section “Future Issues to Consider With Regard to Assessments,” Green et al. (2019, pp. 16–17) raised several issues with using multiple measures for a given construct such as WM. This practice has been recommended by authors such as Engle et al. (1999) to reduce measurement error. Several of Green et al.’s arguments merit discussion.

A first argument is that using latent factors—as in confirmatory factor analysis—might hinder the analysis of more specific effects. This argument is incorrect because the relevant information is still available to researchers (see Kline, 2016; Loehlin, 2004; Tabachnick & Fidell, 1996). By inspecting factor loadings, one can examine whether the preassessment/postassessment changes (if any) affect the latent factor or only specific tests (this is a longitudinal-measurement-invariance problem). Green et al. (2019) seemed to equate multi-indicator composites (e.g., summing z scores) with latent factors. Composite measures are the result of averaging or summing across a number of observed variables and cannot tell much about any task-specific effect. A latent factor is a mathematical construct derived from a covariance matrix within a structural model that includes a set of parameters that links the latent factor to the observed variables. That being said, using multi-indicator composites would be an improvement compared with the current standards in the field.
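To make the distinction concrete, the following toy sketch (simulated data and standard Python libraries; nothing here comes from Green et al. or from the studies discussed) contrasts a simple multi-indicator composite with a one-factor measurement model whose loadings show how strongly each task reflects the latent construct.

```python
# Illustrative sketch with simulated data: composite score vs. one-factor model.
import numpy as np
from scipy.stats import zscore
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300
wm = rng.normal(size=n)                        # latent working-memory ability
tasks = np.column_stack([
    0.8 * wm + rng.normal(scale=0.6, size=n),  # hypothetical n-back score
    0.7 * wm + rng.normal(scale=0.7, size=n),  # hypothetical operation-span score
    0.6 * wm + rng.normal(scale=0.8, size=n),  # hypothetical digit-span score
])

# Multi-indicator composite: mean of z scores (loses task-specific structure).
composite = zscore(tasks, axis=0).mean(axis=1)

# One-factor measurement model: loadings relate the latent factor to each task.
fa = FactorAnalysis(n_components=1).fit(tasks)
print("estimated loadings:", fa.components_.ravel().round(2))
# Factor scores and the composite correlate highly here, but only the loadings
# tell us how much each task reflects the construct (sign of the factor is arbitrary).
r = np.corrcoef(composite, fa.transform(tasks).ravel())[0, 1]
print("composite vs. factor score |r|:", round(abs(r), 2))
```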

A second argument is that large batteries of tests induce motivational and/or cognitive fatigue in participants, especially with particular populations. Although this may be true, for example with older participants, large batteries have been used in several cognitive-training studies, and participants were able to undergo a large variety of testing (e.g., Guye & von Bastian, 2017). Nevertheless, instead of assessing many different constructs, it may be preferable to focus on one or two constructs at a time (e.g., fluid intelligence and WM). Such a practice would help reduce the number of tasks and the amount of fatigue.

Another argument concerns carryover and learning effects. The standard solution is to randomize the presentation order of the tasks. This procedure, which ensures that bias gets close to zero as the number of participants increases, is generally efficient if there is no reason to expect an interaction between treatment and order (Elmes et al., 1992). If such an interaction is expected, another approach can be used: counterbalancing the order of the tasks. However, complete counterbalancing is difficult with large numbers of tasks, and in this case, one often has to be content with incomplete counterbalancing using a Latin square (for a detailed discussion, see Winer, 1962).
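A small sketch of what incomplete counterbalancing can look like (the task names are placeholders): a cyclic Latin square guarantees that each task appears once in every ordinal position across the set of orders, without requiring all k! possible orders that complete counterbalancing would demand.

```python
# Cyclic Latin square for incomplete counterbalancing of task order.
def cyclic_latin_square(tasks):
    """Return k orders of k tasks; each task occupies each position exactly once."""
    k = len(tasks)
    return [[tasks[(i + j) % k] for j in range(k)] for i in range(k)]

tasks = ["n-back", "operation span", "Raven", "digit span"]  # placeholder battery
for p, order in enumerate(cyclic_latin_square(tasks), start=1):
    print(f"order {p}: {' -> '.join(order)}")
```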

A final point made by Green et al. (2019) is that using large batteries of tasks increases the rate of Type I errors. Although this point is correct, it is not an argument against multi-indicator latent factors. Rather, it is an argument in their favor, because latent factors do not suffer from this bias. In addition, latent factors aside, there are many methods designed for correcting α (i.e., the significance threshold) for multiple comparisons (e.g., Bonferroni, Holm, false-discovery rate). Increased Type I error rates are a concern only when researchers ignore the problem and do not apply any correction.
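For instance, with a handful of invented p values, the corrections mentioned above can be applied in a few lines using statsmodels (the numbers are purely illustrative):

```python
# Correcting for multiple comparisons across a battery of outcome tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.020, 0.041, 0.150, 0.480]  # hypothetical per-test p values
for method in ("bonferroni", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], list(reject))
```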

One reasonable argument is that latent factor analysis requires large numbers of participants. The solution is offered by multilab trials. The ACTIVE trial—the largest experiment carried out in the field of cognitive training—was, indeed, a multisite study (Rebok et al., 2014). Another multisite cognitive-training experiment is currently ongoing (Mathan, 2018).

To conclude this section, we emphasize two points. First, it is well known that in general, single tests possess low reliability. Second, multiple measures are needed to understand whether improvements occur at the level of the test (e.g., n-back) or at the level of the construct (e.g., WM).
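As a back-of-the-envelope illustration of the reliability point (the 0.60 single-test reliability below is an assumed value, not an estimate from the literature), the Spearman-Brown formula shows how aggregating parallel measures raises the reliability of the score used to detect construct-level change:

```python
# Spearman-Brown prophecy formula: reliability of a composite of k parallel tests.
def spearman_brown(single_test_reliability, k):
    r = single_test_reliability
    return k * r / (1 + (k - 1) * r)

for k in (1, 2, 3, 4):
    print(f"{k} parallel test(s): composite reliability = {spearman_brown(0.60, k):.2f}")
```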

Some methodological recommendations

We are not so naive as to believe that our analysis will deter researchers in the field from carrying out much more research on the putative far-transfer benefits of cognitive training, despite the lack of any empirical evidence. We thus provide some advice about the directions that should be taken so that not all resources are spent in search of a chimera.

Making methods and results accessible, piecemeal publication, and objective report of results

We broadly agree with the methodological recommendations made by Green et al. (2019), such as reporting not only p values but also effect sizes and confidence intervals, and the need for well-powered studies. We add a few important recommendations (for a summary of the recommendations throughout this article, see Table 3). To begin with, it is imperative to put the data, analysis code, and other relevant information online. In addition to providing supplementary backup, this allows other researchers to closely replicate the studies and to carry out additional analyses (including meta-analyses)—important requirements in scientific research. By the same token and in the spirit of Open Science, researchers should reply to requests from meta-analysts asking for summary data and/or the original data. In our experience, the response rate is currently 20% to 30% at best (e.g., Sala et al., 2018). Although we understand that it may be difficult to answer such requests positively when data were collected 20 years or more ago, there is no excuse for data collected more recently.

Table 3. Key Recommendations for Researchers

Just like other questionable research practices, piecemeal publication should be avoided (Hilgard et al., 2019). If dividing the results of a study into several articles cannot be avoided, the articles should clearly and unambiguously indicate the fact that this has been done and should reference the articles sharing the results.

There is one point made by Green et al. (2019) with which we wholeheartedly agree: the necessity of reporting results correctly and objectively without hyperbole and incorrect generalization. The field of cognitive training is littered with exaggerations and overinterpretations of results (see Simons et al., 2016). A fairly common practice is to focus on the odd statistically significant result even though most of the tests turn out nonsignificant. This is obviously capitalizing on chance and should be avoided at all costs.

In a similar vein, there is a tendency to overinterpret results of studies using neuroscience methods. A striking example was recently offered by Schellenberg (2019), who showed that in a sample of 114 journal articles published in the last 20 years on the effects of music training, causal inferences were often made although the data were only correlational; neuroscientists committed this logical fallacy more often than psychologists. There was also a rigid focus on learning and the environment and a concurrent neglect of alternative explanations, such as innate differences. Another example consists in inferring far transfer when neuroimaging effects are found but not behavioral effects. However, such an inference is illegitimate.

The need for detailed analyses and computational models

As a way forward, Green et al. (2019) recommended well-powered studies with large numbers of participants. In a similar vein, and focusing on n-back training, Pergher et al. (2020) proposed large-scale studies isolating promising features. We believe that such an atheoretical approach is unlikely to succeed. There is an indefinite space of possible interventions (e.g., varying the type of training task, the cover story used in a game, the perceptual features of the material, the pace of presentation, ad infinitum), which means that searching this space blindly and nearly randomly would require a prohibitive amount of time. Strong theoretical constraints are needed to narrow down the search space.

There is thus an urgent need to understand which cognitive mechanisms might lead to cognitive transfer. As we showed above in the section on meta-analysis, the available evidence shows that the real effect size of cognitive training on far transfer is zero. Prima facie, this outcome indicates that theories based on general mechanisms, such as brain plasticity (Karbach & Schubert, 2013), primitive elements (Taatgen, 2013), and learning to learn (Bavelier et al., 2012), are incorrect when it comes to far transfer. We reach this conclusion by a simple application of modus tollens: (a) Theories based on general mechanisms such as brain plasticity, primitive elements, and learning to learn predict far transfer. (b) The empirical evidence shows that there is no far transfer. Therefore, (c) theories based on general mechanisms such as brain plasticity, primitive elements, and learning to learn are incorrect.

Thus, if one believes that cognitive training leads to cognitive enhancement—most likely limited to near transfer—one has to come up with other theoretical mechanisms than those currently available in the field. We recommend two approaches to identify such mechanisms, which we believe should be implemented before large-scale randomized controlled trials are carried out.

Fine analyses of the processes in play

The first approach is to use experimental methods enabling the identification of cognitive mechanisms. Cognitive psychology has a long history of refining such methods, and we limit ourselves to just a few pointers. A useful source of information consists in collecting fine-grained data, such as eye movements, response times, and even mouse location and mouse clicks. Together with hypotheses about the processes carried out by participants, these data make it possible to rule out some mechanisms while making others more plausible. Another method is to design experiments that specifically test some theoretical mechanisms. Note that this goes beyond establishing that a cognitive intervention leads to some benefits compared with a control group. In addition, the aim is to understand the specific mechanisms that lead to this superiority.

It is highly likely that the strategies used by the participants play a role in the training, pretests, and posttests used in cognitive-training research (Sala & Gobet, 2019; Shipstead et al., 2012; von Bastian & Oberauer, 2014). It is essential to understand these strategies and the extent to which they differ between participants. Are they linked to a specific task or a family of tasks (near transfer), or are they general across many different tasks (far transfer)? If it turns out that such general strategies exist, can they be taught? What do they tell researchers about brain plasticity and changing basic cognitive abilities such as general intelligence?

Two studies that investigated the effects of strategies are mentioned here. Laine et al. (2018) found that instructing participants to employ a visualization strategy when performing n-back training improved performance. In a replication and extension of this study, Forsberg et al. (2020) found that the taught visualization strategy improved some of the performance measures in novel n-back tasks. However, older adults benefited less, and there was no improvement in WM tasks structurally different from n-back tasks. In the uninstructed participants, n-back performance correlated with the type of spontaneous strategies and their level of detail. The types of strategies also differed as a function of age.

A final useful approach is to carry out a detailed task analysis (e.g., Militello & Hutton, 1998) of the activities involved in a specific regimen of cognitive training and in the pretests and posttests used. What are the overlapping components? What are the critical components and those that are not likely to matter in understanding cognitive training? These components can be related to information about eye movements, response times, and strategies and can be used to inspire new experiments. The study carried out by Baniqued et al. (2013) provides a nice example of this approach. Using task analysis, they categorized 20 web-based casual video games into four groups (WM, reasoning, attention, and perceptual speed). They found that performance in the WM and reasoning games was strongly associated with memory and fluid-intelligence abilities, measured by a battery of cognitive tasks.

Cognitive modeling as a method

The second approach we propose consists of developing computational models of the postulated mechanisms, which of course should be consistent with what is known generally about human cognition (for a similar argument, see Smid et al., 2020). To enable an understanding of the underlying mechanisms and be useful in developing cognitive-training regimens, the models should be in a position to simulate not only the tasks used as pretests and posttests but also the training tasks. This is what Taatgen’s (2013) model does: It first simulates improvement in a complex verbal WM task over 20 training sessions and then simulates how WM training reduces interference in a Stroop task compared with a control group. (We would, of course, query whether this far-transfer effect is genuine.) By contrast, Green, Pouget, & Bavelier’s (2010) neural-network and diffusion-to-bound models simulate the transfer tasks (a visual-motion-direction discrimination task and an auditory-tone-location discrimination task) but do not simulate the training task with action video-game playing. Ideally, a model of the effect of an action video game should simulate actual training (e.g., by playing Call of Duty 2), processing the actual stimuli involved in the game. To our knowledge, no such model exists. Note that given the current developments in technology, modeling such a training task is not unrealistic.

The models should also be able to explain data at a micro level, including eye movements and verbal protocols (to capture strategies). There is also a need for the models to use exactly the same stimuli as those used in the human experiments. For example, the chunk hierarchy and retrieval structures model of chess expertise (De Groot et al., 1996; Gobet & Simon, 2000) receives as learning input the kind of board positions that players are likely to meet in their practice. When simulating experiments, the same stimuli are used as those employed with human players, and close comparison is made between predicted and actual behavior along a number of dimensions, including percentage of correct responses, number and type of errors, and eye movements. In the field of cognitive training, Taatgen’s (2013) model is a good example of the proper level of granularity for understanding far transfer. Note that, ideally, the models should be able to predict possible confounds and how modifications to the design of training would circumvent them. Indeed, we recommend that considerable resources be invested in this direction of research with the aim of testing interventions in silico before testing them in vivo (Gobet, 2005). Only those interventions that lead to benefits in simulations should be tested in trials with human participants. In addition to embodying sound principles of theory development and testing, such an approach would also lead to considerable savings of research money in the medium and long terms.

Searching for small effects

Green et al. (2019, p. 20) recognized that large effects are unlikely and that one should be content with small effects. They were also open to the possibility of exploiting unspecific effects, such as expectation effects. It is known that many educational interventions produce modest effects (Hattie, 2009), and thus the question arises as to whether cognitive-training interventions are more beneficial than alternative ones. We argue that many other interventions are cheaper and/or have specific benefits when they directly match educational goals. For example, games related to mathematics are more likely to improve one’s mathematical knowledge and skills than n-back tasks and can be cheaper and more fun.

If cognitive training leads only to small and unspecific effects, one faces two implications, one practical and one theoretical. Practically, the search for effective training features has to operate blindly, which is very inefficient. This is because current leading theories in the field are incorrect, as noted above, and thus there is no theoretical guidance. Thus, effectiveness studies are unlikely to yield positive results. Theoretically, if the effectiveness of training depends on small details of training and pre/post measures, then the prospects of generalization beyond specific tasks are slim to null. This is unsatisfactory scientifically because science progresses by uncovering general laws and finding order in apparent chaos (e.g., the state of chemistry before and after Mendeleev’s discovery of the periodic table of elements).

A straightforward explanation can be proposed for the pattern of results found in our meta-analyses with respect to far transfer—small to zero effect sizes and low or null true between-studies variance. Positive effect sizes are just what can be expected from chance, features of design (i.e., active vs. passive control groups), regression to the mean, and sometimes publication bias. (If you believe that explanations based on chance are not plausible, consider Galton’s board: It perfectly illustrates how a large number of small effects can lead to a normal distribution. Likewise, in cognitive training, multiple variables and mechanisms lead to some experiments having a positive effect and others a negative effect, with most experiments centered around the mean of the distribution.) Thus, the search for robust and replicable effects is unlikely to be successful.
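The Galton-board intuition is easy to simulate (all numbers below are invented): when each "experiment" sums many small influences with no systematic direction, observed effects scatter symmetrically around a true mean of zero, and a minority of them look impressively large purely by chance.

```python
# Sketch of the Galton-board intuition with invented numbers.
import numpy as np

rng = np.random.default_rng(42)
n_experiments, n_influences = 10_000, 200
# Each influence nudges the observed effect slightly up or down at random.
effects = rng.choice([-0.01, 0.01], size=(n_experiments, n_influences)).sum(axis=1)

print("mean observed effect:", round(effects.mean(), 4))
print("share of 'positive' experiments:", round((effects > 0).mean(), 3))
print("share beyond +/-0.2 (spuriously 'large'):", round((np.abs(effects) > 0.2).mean(), 3))
```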

Note that the issue with cognitive training is not the lack of replications and the lack of reproducibility, which plague large swathes of psychology: The main results have been replicated often and form a highly coherent pattern when results are put together in (meta-)meta-analyses. Pace Pergher et al. (2020), we do not believe that variability of methods is an issue. On the contrary, the main outcomes are robust to experimental variations. Indeed, results obtained with many different training and evaluation methods converge (small-to-zero effect sizes and low true heterogeneity) and thus satisfy a fundamental principle in scientific research: the principle of triangulation (Mathison, 1988).

Funding agencies

Although Green et al.’s (2019) article is explicitly about methodology, it does make recommendations for funding agencies and lobbies for more funding: “We feel strongly that an increase in funding to accommodate best practice studies is of the utmost importance” (p. 17). On the one hand, this move is consistent with the aims of their article in that several of the suggested practices, such as using large samples and performing studies that would last for several years, would require substantial amounts of money to be carried out. On the other hand, lobbying for an increase in funding is made without any reference to results showing that cognitive training might not provide the hoped-for benefits. The authors only briefly discussed the inconsistent evidence for cognitive training, concluding that “our goal here is not to adjudicate between these various positions or to rehash prior debates” (p. 3). However, in general, rational decisions about funding require an objective evaluation of the state of the research. Obviously, if the research is about developing methods for cognitive enhancement, funders must take into consideration the extent to which the empirical evidence supports the hypothesis that the proposed methods provide domain-general cognitive benefits. As we showed in the “Meta-Analytical Evidence” section, there is little to null support for this hypothesis. Thus, our advice for funders is to base their decisions on the available empirical evidence and on the conclusions reached by meta-analyses.

As discussed earlier, our meta-analyses clearly show that cognitive training does not lead to any far transfer in any of the cognitive-training domains that have been studied. In addition, using second-order meta-analysis made it possible to show that the between-meta-analyses true variance is due to second-order sampling error and thus that the lack of far transfer generalizes to different populations and different tasks. Taking a broader view suggests that our conclusions are not surprising and are consistent with previous research. In fact, they were predictable. Over the years, it has been difficult to document far transfer in experiments (Singley & Anderson, 1989; Thorndike & Woodworth, 1901), industrial psychology (Baldwin & Ford, 1988), education (Gurtner et al., 1990), and research on analogy (Gick & Holyoak, 1983), intelligence (Detterman, 1993), and expertise (Bilalić et al., 2009). Indeed, theories of expertise emphasize that learning is domain-specific (Ericsson & Charness, 1994; Gobet & Simon, 1996; Simon & Chase, 1973). When putting this substantial set of empirical evidence together, we believe that it is possible to conclude that the lack of training-induced far transfer is an invariant of human cognition (Sala & Gobet, 2019).

Obviously, this conclusion conflicts with the optimism displayed in the field of cognitive training, as exemplified by Green et al.’s (2019) article discussed above. However, it is in line with skepticism recently expressed about cognitive training (Moreau, 2021; Moreau et al., 2019; Simons et al., 2016). It also raises the following critical epistemological question: Given that the overall evidence in the field of cognitive training strongly suggests that the postulated far-transfer effects do not exist, and thus the probability of finding such effects in future research is very low, should one conclude that the reasonable course of action is to stop performing cognitive-training research on far transfer?

We believe that the answer to this question is “yes.” Given the clear-cut empirical evidence, the discussion about methodological concerns is irrelevant, and the issue becomes searching for other cognitive-enhancement methods. However, although the hope of finding far-transfer effects is tenuous, the available evidence clearly supports the presence of near-transfer effects. In many cases, near-transfer effects are useful (e.g., with respect to older adults’ memory), and developing effective methods for improving near transfer is a valuable—and importantly, realistic—avenue for further research.


Monday, August 8, 2022

Yewtree, Savile, age of consent: The acute problems of proof which stale allegations entail also generate a demand that criminal courts should afford accusers therapy, by giving them ‘a voice’; this function is far removed from the courts’ traditional role, in which the state must prove defendants guilty beyond reasonable doubt

Yewtree is destroying the rule of law. By Barbara Hewson. spiked, May 8 2013.

With its emphasis on outcomes over process, the post-Savile witch-hunting of ageing celebs echoes the Soviet Union.


I do not support the persecution of old men. The manipulation of the rule of law by the Savile Inquisition – otherwise known as Operation Yewtree – and its attendant zealots poses a far graver threat to society than anything Jimmy Savile ever did.

Now even a deputy speaker of the House of Commons is accused of male rape. This is an unfortunate consequence of the present mania for policing all aspects of personal life under the mantra of ‘child protection’.

We have been here before. England has a long history of do-gooders seeking to stamp out their version of sexual misconduct by force of the criminal law. In the eighteenth century, the quaintly named Society for the Reformation of Manners funded prosecutions of brothels, playwrights and gay men.

In the 1880s, the Social Purity movement repeatedly tried to increase the age of consent for girls from 13 to 16, despite parliament’s resistance. At that time, puberty for girls was at age 15 (now it is 10). The movement’s supporters portrayed women as fragile creatures needing protection from men’s animal impulses. Their efforts were finally rewarded after the maverick editor of the Pall Mall Gazette, WT Stead, set up his own secret commission to expose the sins of those in high places.

After procuring a 13-year-old girl, Stead ran a lurid exposé of the sex industry, memorably entitled ‘The Maiden Tribute of Modern Babylon’. His voyeuristic accounts under such titles as ‘Strapping girls down’ and ‘Why the cries of the victims are not heard’ electrified the Victorian public. The ensuing moral panic resulted in the age of consent being raised in 1885, as well as the criminalisation of gross indecency between men.

By contrast, the goings-on at the BBC in past decades are not a patch on what Stead exposed. Taking girls to one’s dressing room, bottom pinching and groping in cars hardly rank in the annals of depravity with flogging and rape in padded rooms. Yet the Victorian narrative of innocents despoiled by nasty men endures.

What is strikingly different today is how Britain’s law-enforcement apparatus has been infiltrated by moral crusaders, like the National Society for the Prevention of Cruelty to Children (NSPCC) and the National Association for People Abused in Childhood (NAPAC). Both groups take part in Operation Yewtree, which looks into alleged offences both by Savile and by others.

These pressure groups have a vested interest in universalising the notion of abuse, making it almost as prevalent as original sin, but with the modern complication that it carries no possibility of redemption, only ‘survival’. The problem with this approach is that it makes abuse banal, and reduces the sympathy that we should feel for victims of really serious assaults (1).

But the most remarkable facet of the Savile scandal is how adult complainants are invited to act like children. Hence we have witnessed the strange spectacle of mature adults calling a children’s charity to complain about the distant past.

The NSPCC and the Metropolitan Police Force produced a joint report into Savile’s alleged offending in January 2013, called Giving Victims a Voice. It states: ‘The volume of the allegations that have been made, most of them dating back many years, has made this an unusual and complex inquiry. On the whole victims are not known to each other and taken together their accounts paint a compelling picture of widespread sexual abuse by a predatory sex offender. We are therefore referring to them as “victims” rather than “complainants” and are not presenting the evidence they have provided as unproven allegations [italics added].’ The report also states that ‘more work still needs to be done to ensure that the vulnerable feel that the scales of justice have been rebalanced’.

Note how the police and NSPCC assume the roles of judge and jury. What neither acknowledges is that this national trawl for historical victims was an open invitation to all manner of folk to reinterpret their experience of the past as one of victimisation (2).

The acute problems of proof which stale allegations entail also generate a demand that criminal courts should afford accusers therapy, by giving them ‘a voice’. This function is far removed from the courts’ traditional role, in which the state must prove defendants guilty beyond reasonable doubt.

What this infantilising of adult complainants ultimately requires is that we re-model our criminal-justice system on child-welfare courts. These courts (as I have written in spiked previously) have for some decades now applied a model of therapeutic jurisprudence, in which ‘the best interests of the child’ are paramount.

It is depressing, but true, that many reforms introduced in the name of child protection involve sweeping attacks on fundamental Anglo-American legal rights and safeguards, such as the presumption of innocence. This has ominous consequences for the rule of law, as US judge Arthur Christean pointed out: ‘Therapeutic jurisprudence marks a major and in many ways a truly radical shift in the historic function of courts of law and the basic purpose for which they have been established under our form of government. It also marks a fundamental shift in judges’ loyalty away from principles of due process and toward particular social policies. These policies are less concerned with judicial impartiality and fair hearings and more concerned with achieving particular results…’

The therapeutic model has certain analogies with a Soviet-style conception of justice, which emphasises outcomes over processes. It’s not difficult, then, to see why some celebrity elderly defendants, thrust into the glare of hostile publicity, including Dalek-style utterances from the police (‘offenders have nowhere to hide’), may conclude that resistance is useless. But the low-level misdemeanours with which Stuart Hall was charged are nothing like serious crime.

Touching a 17-year-old’s breast, kissing a 13-year-old, or putting one’s hand up a 16-year-old’s skirt, are not remotely comparable to the horrors of the Ealing Vicarage assaults and gang rape, or the Fordingbridge gang rape and murders, both dating from 1986. Anyone suggesting otherwise has lost touch with reality.

Ordinarily, Hall’s misdemeanours would not be prosecuted, and certainly not decades after the event. What we have here is the manipulation of the British criminal-justice system to produce scapegoats on demand. It is a grotesque spectacle.

It’s interesting that two complainants who waived anonymity have told how they rebuffed Hall’s advances. That is, they dealt with it at the time. Re-framing such experiences, as one solicitor did, as a ‘horrible personal tragedy’ is ironic, given that tragoidia means the fall of an honourable, worthy and important protagonist.

It’s time to end this prurient charade, which has nothing to do with justice or the public interest. Adults and law-enforcement agencies must stop fetishising victimhood. Instead, we should focus on arming today’s youngsters with the savoir-faire and social skills to avoid drifting into compromising situations, and prosecute modern crime. As for law reform, now regrettably necessary, my recommendations are: remove complainant anonymity; introduce a strict statute of limitations for criminal prosecutions and civil actions; and reduce the age of consent to 13.

Barbara Hewson is a barrister at Hardwicke in London.

---

Notes:

(1) Moral Crusades in an Age of Mistrust, by Frank Furedi, Palgrave Macmillan, 2013, pp. 60-61; ‘No Law in the Arena’, by Camille Paglia, included in Vamps & Tramps: New Essays, Penguin, 1995, pp. 24-25

(2) Moral Crusades in an Age of Mistrust, by Frank Furedi, Palgrave Macmillan, 2013, p. 70


Previous research claims that public awareness of censorship will lead to backlash against the regime; but individuals become desensitized to censorship when the range of censored content expands beyond politically threatening topics

Yang, Tony, Normalization of Censorship: Evidence from China (November 2, 2021). SSRN: http://dx.doi.org/10.2139/ssrn.3835217

Abstract: Previous research claims that public awareness of censorship will lead to backlash against the regime. However, surveys consistently find that Chinese citizens are apathetic toward or even supportive of government censorship. To explain this puzzle, I argue that citizens are subject to a process of normalization. Specifically, individuals become desensitized to censorship when the range of censored content expands beyond politically threatening topics like government criticism and collective action to other seemingly harmless non-political issues. Using a dataset of 15,872 censored articles on WeChat and two original survey experiments in China, I show that (1) a majority of censored articles are unrelated to politically threatening topics, and (2) respondents exposed to the censorship of non-political content display less backlash toward the regime and its censorship apparatus. My findings highlight how normalization of repressive policies contributes to authoritarian control.


Keywords: Censorship, China, Normalization, Desensitization, Backlash, Authoritarian Control


Female fruit flies copy the acceptance, but not the rejection, of a mate

Female fruit flies copy the acceptance, but not the rejection, of a mate. Sabine Nöbel, Magdalena Monier, Laura Fargeot, Guillaume Lespagnol, Etienne Danchin, Guillaume Isabel. Behavioral Ecology, arac071, Aug 8 2022, https://doi.org/10.1093/beheco/arac071

Abstract: Acceptance and avoidance can be socially transmitted, especially in the case of mate choice. When a Drosophila melanogaster female observes a conspecific female (called demonstrator female) choosing to mate with one of two males, the former female (called observer female) can memorize and copy the latter female’s choice. Traditionally in mate-copying experiments, demonstrations provide two types of information to observer females, namely, the acceptance (positive) of one male and the rejection of the other male (negative). To disentangle the respective roles of positive and negative information in Drosophila mate copying, we performed experiments in which demonstrations provided only one type of information at a time. We found that positive information alone is sufficient to trigger mate copying. Observer females preferred males of phenotype A after watching a female mating with a male of phenotype A in the absence of any other male. Contrastingly, negative information alone (provided by a demonstrator female actively rejecting a male of phenotype B) did not affect future observer females’ mate choice. These results suggest that the informative part of demonstrations in Drosophila mate-copying experiments lies mainly, if not exclusively, in the positive information provided by the copulation with a given male. We discuss the reasons for such a result and suggest that Drosophila females learn to prefer the successful males, implying that the underlying learning mechanisms may be shared with those of appetitive memory in non-social associative learning.

DISCUSSION

Our goal was to disentangle the role of positive and negative information during the observation of binary mate-choice decisions in D. melanogaster in order to evaluate its ecological relevance. We found that females that received positive information only, or positive and negative information at the same time, learned and copied the choice of the demonstrator females, as in previous studies (Dagaeff et al. 2016; Danchin et al. 2018; Nöbel et al. 2018; Monier et al. 2019). We further found no significant difference in the learning capacities of females of these two treatments. In contrast, females receiving only negative information did not significantly avoid the color they saw being rejected, which differs from a previous study in fish (Witte and Ueding 2003). Thus, positive information appears sufficient to elicit mate copying after one demonstration in fruit flies.

The absence of mate copying in the rejection treatment suggests that one demonstration containing rejection(s) of a male is not sufficient to elicit avoidance behavior in the observer females. This may be because a female can reject a male for reasons that are independent of his quality, such as the female being non-receptive (Connolly and Cook 1973; Neckameyer 1998), as is the case in our study. Alternatively, it may be that observer females were less interested in negative demonstrations because they did not involve copulation, in which case the negative result would simply reflect a lack of interest in the demonstrations. Or it could be that the solitary male and the rejected male were evaluated in the same way, and thus no preference developed.

A recent study of aversive olfactory memory in Drosophila showed that an initially neutral stimulus can become attractive to fruit flies under some circumstances—the “safety memory” (Jacob and Waddell 2020). Briefly, after multiple spaced training sessions in which a conditioned stimulus (CS+) was presented simultaneously with an aversive cue and followed by another CS without reinforcement (CS−), Jacob and Waddell concluded that individuals display both CS+ avoidance and an approach movement towards the CS− when later given the choice between the CS+ and CS− odors. Thus, in our design, a sequence of several rejections (showing first a male of phenotype A rejected by a female and then a single male of another phenotype B, repeated several times) might elicit aversive learning for phenotype A, leading to a choice of the male of phenotype B. Interestingly, in the fruit fly larva, appetitive but not aversive olfactory stimuli support associative gustatory learning (Hendel et al. 2005). In contrast to what we observe in fruit fly females, female sailfin mollies (Poecilia latipinna) copy the rejection of a male (Witte and Ueding 2003). However, the setup used in that study was quite different from ours: the rejection demonstration consisted of a sequence of four 12-min videos of four different females escaping from a courting male, so the rejection cue was much stronger than in the present study, which involved only a single demonstrator female. Similarly, in humans, women, but not men, decrease their interest in a relationship with a demonstrator after watching a speed-dating video in which the demonstrator and a potential partner showed mutual lack of interest (Place et al. 2010). This may indicate that, beyond the effect of the experimental conditions, different species use different social cues for mate copying. However, the motivations to reject a partner are far less studied than those for building specific mating preferences.

A last alternative is that in nature newly emerged females do not see older females choosing between only two males, but rather see females choosing among many males to copulate with one of them. The fact that a female chooses that specific male is informative in itself, but the fact that she rejected all the other potential males does not reveal much information about any one of the non-selected males. This purely statistical fact may explain the absence of an effect of seeing only a rejection.

Finally, our results suggest that in the classical Drosophila mate-copying design, the rejected male shown in the demonstration may not constitute the prominent cue triggering learning in the observer female. Moreover, the presentation of a male of the opposite color together with the copulating pair in the classical demonstration might even constitute a distracting stimulus, as indirectly suggested by Germain et al. (2016, experiment 3). In nature, females may observe copulations for longer than rejections, as copulations likely last for more than 30 min (Markow 2000), while rejections are brief and thus far less prominent (Gromko and Markow 1993). It is thus possible that our result is explained by the fact that D. melanogaster females evolved an ability to gather social information from the most easily detectable and reliable social cues. Alternatively, females might pay attention to rejection events too but might have difficulties in interpreting them or distinguishing them from other neutral information, such as solitary males.

Our finding that the acceptance of a male by the demonstrator female is the most relevant cue to elicit full mate copying by the observer female suggests that it involves networks of appetitive learning neurons and mechanisms rather than the aversive pathway. Several authors have suggested that social learning in many contexts can have an associative explanation (e.g., Munger et al. 2010; Avarguès-Weber et al. 2015; Heyes and Pearce 2015; Leadbeater and Dawson 2017). For mate copying, this has yet to be proven. At the moment, asocial learning, such as olfactory associative direct learning, is far better understood. There, the pairing between a conditioned stimulus (CS; for instance, odor A) and an appetitive US (sucrose) leads flies to prefer odor A over B even in the absence of any reward (Tempel et al. 1983) through the association of odor A with the reward (Schultz et al. 1997). In our social learning paradigm, we can speculate that the relevant cues eliciting learning are the color of the copulating male in association with the successful mating. Hence, the copulating pair would mediate the appetitive US, while male color would constitute the CS (Avarguès-Weber et al. 2015). Under this hypothesis, it would be interesting to study whether mate-copying mechanisms resemble those of visual, appetitive, associative learning, given that its neural bases are now well understood (Vogt et al. 2014, 2016).

More generally, understanding how social learning works can only help sharpen our view on the evolution of the different types of learning, opening the way to new theories about the evolution of behavior, cognition, and culture in invertebrates.

The population is widely exposed to online false news; however, echo chambers are minimal, and the most avid readers of false news content regularly expose themselves to mainstream news sources

Zhang, Jiding and Moon, Ken and Veeraraghavan, Senthil K., Does Fake News Create Echo Chambers? (June 23, 2022). SSRN: http://dx.doi.org/10.2139/ssrn.4144897

Abstract: Platforms have come under criticism from regulatory agencies, policymakers, and media scholars for the unfettered spread of fake news online. A key concern is that, as fake news becomes prevalent, individuals may fall into online "echo chambers" that predominantly expose them only to fake news. Using a dataset reporting 30,995 individual households’ online activity, we empirically examine the reach of false news content and whether echo chambers exist. We find that the population is widely exposed to online false news. However, echo chambers are minimal, and the most avid readers of false news content regularly expose themselves to mainstream news sources. Using a natural experiment occurring on a major social media platform, we find that being exposed to false news content causes households to increase their exposure to countervailing mainstream news (by 9.1% in the experiment). Hence, a naive intervention that reduces the supply of false news sources on a platform also reduces the overall consumption of news. Based on a structural model of household decisions whether to diversify their online news sources, we prescribe how platforms should moderate false news content. We find that platforms can further reduce the size of echo chambers (by 12-18%) by focusing their content moderation efforts on the households that are most susceptible to consuming predominantly false news, instead of the households most deeply exposed to false news.

Keywords: Digital operations, Echo chambers, Marketplace for news, Natural experiment, Platforms, Structural estimation

Check also other literature with references: Politically partisan left-right online news echo chambers are real, but only a minority of approximately 5% of internet news users inhabit them; the continued popularity of mainstream outlets often preclude the formation of large partisan echo chambers

Nearshoring in Mexico: In 2018-2021 the proportion of manufactured goods imported into the US from Mexico barely changed; Asian countries other than China increased their share of US manufactured goods imports from 12.6 pct to 17.4 pct

Why Mexico is missing its chance to profit from US-China decoupling. Michael Stott and Christine Murray. Financial Times Jul 3 2022. https://www.ft.com/content/7fc2adf0-0577-4e13-b9a3-218dda2ddd5b


A predicted economic boom from American companies relocating closer to home has not arrived. Many blame the president



When Donald Trump started a trade war with China in 2018, Mexico looked well placed to benefit.


For American manufacturers scrambling to dodge newly imposed tariffs on Chinese imports, the attraction of moving production to their southern neighbour seemed clear. Mexico offered a skilled workforce, good road and rail connections, an established export industry and privileged trade access.


The stage appeared to be set for a boom in “nearshoring” — relocating production closer to home. A bonanza beckoned, perhaps rivalling the one Mexico enjoyed in 1994 after the signing of the North American Free Trade Agreement.


It didn’t happen. Between 2018 and 2021 the proportion of manufactured goods imported into the US from Mexico barely changed, according to data compiled by Kearney, the consultancy. Instead the rewards of the China boycott were reaped by low-cost Asian competitors including Vietnam and Taiwan. Asian countries other than China increased their share of US manufactured goods imports from 12.6 per cent to 17.4 per cent over the period.


The rapid growth in total US goods imports from Mexico that might have been expected had nearshoring taken off was also missing. It rose by just 11.8 per cent over three years to $384.6bn in 2021, according to the US Census Bureau — after allowing for inflation the total increase was just under 4 per cent.


“Most of the gains have gone to Asean, India and Korea,” said UBS in a recent report examining nearshoring in Mexico. “At least for now, the US import penetration data does not support the view that Mexico has been a net beneficiary of nearshoring.”


There have been some signs of increased activity. Mexico attracted $34.9bn in foreign direct investment in the year to the end of March, up from $26.1bn a year earlier — although that figure includes large one-off transactions outside the manufacturing sector. Industrial parks in the north of the country are full and some international companies have relocated there. But despite this, Mexico’s overall economic growth over the past three years has been among the weakest of Latin America’s larger economies.


“This should be the golden era for investment in Mexico,” says Mauricio Claver-Carone, president of the Inter-American Development Bank and a big supporter of nearshoring. Calculations by the IDB suggest Mexico has the potential to deliver almost half of the $78bn in additional annual exports from nearshoring that the bank estimates Latin America could generate in the medium term.


Claver-Carone says there is plenty of interest from executives in moving to Mexico: “Not a day goes by without a major company calling me up and saying, ‘Hey, we want to invest [in moving production], can you help us in Mexico?’”


Yet the interest has not yet translated into measurable economic gains, says Ernesto Revilla, head of Latin America economics at Citi and a former Mexican finance ministry official. While nearshoring has become a buzzword in discussions about the future of the Mexican economy, he says, “nobody knows how to continue the conversation”.


The ‘moral economy’

Much of the blame for Mexico’s lacklustre economic performance has fallen on the shoulders of President Andrés Manuel López Obrador. Business leaders, diplomats and investors say he has been hostile to some foreign companies and complain that his capricious decision-making and authoritarian tendencies are scaring off investment.


López Obrador came to power in 2018 on a leftist, nationalist platform. He dreams of restoring Mexico’s economy to the oil-powered, state-dominated days of the 1970s. His quixotic pledge of a “fourth transformation” of the country — a change he puts on a par with Mexico gaining independence from Spain — promises to eliminate corruption and speed up growth. He wants to create a “moral economy” that puts the poor first, but his regular naming and shaming of multinationals at daily news conferences does little to instil confidence in foreign businesses contemplating forays into Mexico.


Despite his heavy criticism of low growth under previous “neoliberal” governments, in the first three full years of his own administration Mexico’s gross domestic product contracted overall. It is the only major Latin American economy whose output will still be below pre-pandemic levels by the end of this year, according to estimates from JPMorgan.


Mexico’s poor performance is “a direct consequence of . . . Amlo-nomics, which is extremely tight macroeconomic policy coupled with very bad microeconomics,” says Citi’s Revilla, using the acronym that has become the president’s nickname. “The result is not surprising: it’s very low growth.”


Andrés Rozental, a former deputy foreign minister who now works as a consultant, agrees. “We had everything to gain from the global geopolitical situation,” he says. “But it’s all been squandered because of López Obrador’s anti-private sector policies.”


The president’s obsession with “republican austerity” — he flies economy class and took a large personal pay cut — has meant salary reductions for top officials. This in turn has led to a brain drain, budget cuts at government agencies and sharply reduced spending on infrastructure.


Mexico has the lowest public investment among OECD countries, spending just 1.3 per cent of GDP in 2019, the first year of López Obrador’s government. Much of what remains is channelled into a handful of grandiose projects championed by the president. The most prominent is a new oil refinery in his home state of Tabasco, whose cost has spiralled to between $16bn and $18bn, according to Bloomberg.


López Obrador has repeatedly attacked Mexico’s autonomous regulatory agencies, criticising their decisions, cutting budgets and suggesting that they collude corruptly with business.


“Mexico has a big comparative advantage in farming but there are problems in [agricultural health agency] Senasica,” says Luis de la Calle, an economist who runs a consulting firm in Mexico City. “We are a big exporter of fish and seafood, but the government took away funding from [fishing council] Conapesca. We are a big exporter of medical equipment but they cut money for the certifying body Cofepris. It’s madness.”




One of López Obrador’s first decisions as president was to shut the government agency ProMéxico, which worked to promote investment in Mexico and had 51 offices overseas. The president said they were “supposedly dedicated to promoting the country, which is ridiculous because there are no ProGermany, ProFrance or ProCanada offices”. In fact, most countries have government agencies to promote foreign investment.


“Deep down,” de la Calle says, “López Obrador believes that economic success is not possible by itself. It is always the result of luck or corruption.”


Foreign targets

Despite the downbeat mood, the government and some experts insist Mexico could still take advantage of supply chain disruptions caused by Covid-19, higher shipping costs and Ukraine invasion-related fuel price surges that make the economics of moving production to Mexico more compelling.


Tatiana Clouthier, Mexico’s economy minister, argues the country is “doing well for investment” in nearshoring. “It always could be [more] . . . There always could be better circumstances for everything,” she says.


For many years, Clouthier says, Mexico has suffered from “an imbalance, where the thinking was about how to strengthen investment and the social part was ignored”. Now, she says, “the idea is to try to compensate for that”. In practice, the shift in policy has brought decisions that upset foreign companies. Those from the US, Mexico’s biggest foreign investor, have been particularly exposed.


Last month the government forced Vulcan Materials, the biggest American producer of aggregates used in US construction, to halt quarrying in the southeastern state of Quintana Roo, with López Obrador warning that an “ecological catastrophe” was taking place. Vulcan, which has operated in the area for 30 years, has described the shutdown as “arbitrary and illegal” and has sought arbitration under the US-Mexico-Canada Agreement, the successor trade pact to Nafta.


The dispute prompted a letter from a group of US senators last month to President Joe Biden, calling on him to take immediate action to stop Mexico’s “aggression towards US companies”. “If these violations are allowed to continue, they will . . . encourage businesses to seek more predictable and suitable markets elsewhere,” the letter said.



In 2020, Constellation Brands, a large American drinks company that makes Corona beer for the US market, cancelled a $1.4bn factory being built in the northern city of Mexicali after the government revoked its construction permit. The company has since agreed to build a facility in Veracruz but work has yet to start.


Companies from Spain, the former colonial power, have also been targeted. That Spain is Mexico’s second biggest foreign investor after the US cuts little ice. Spanish companies “abused our country and our peoples”, López Obrador told a news conference in February. In a reference to the post-Nafta decades, he added: “During the neoliberal period, Spanish companies supported by the political establishment saw us as a land to be conquered.”


A particular target of the president’s ire has been Iberdrola, the Spanish power company that owns a string of electricity generating plants in Mexico. He has accused it of corrupt deals, something Iberdrola denies. Iberdrola had announced plans to invest $5bn in renewable energy projects in Mexico during López Obrador’s term but has now abandoned almost all of its Mexican investment and is fighting the government in the courts.


The attacks on Iberdrola are part of López Obrador’s crusade to restore the state to pride of place in Mexico’s energy sector. Previous governments had chipped away at the state’s historic monopoly over energy, allowing the private sector to operate electricity generating plants serving industrial customers in the wake of Nafta.


A landmark 2013 constitutional reform opened up the oil, gas and electricity sectors more widely but, since he took power, López Obrador has opposed these reforms and launched a wave of initiatives to roll them back. These include a law changing Mexico’s electricity grid rules to favour the fossil fuel-heavy state power generator, CFE, at the expense of private companies.


As foreign power generators fight the government in the courts, companies that need power for new plants in Mexico are struggling to secure adequate supplies. Worse, the CFE’s reliance on CO₂-emitting oil and gas plants rules it out for multinationals which are committed to reaching net zero carbon emissions targets.


Alberto de la Fuente, president of Mexico’s Executive Council of Global Companies, which represents 57 multinationals accounting for 40 per cent of foreign direct investment, has warned that if Mexico cannot fulfil its clean energy goals, companies “will simply leave”.


Behind closed doors

While foreign companies have borne the brunt of López Obrador’s attacks, the handful of big Mexican businesses that control large parts of the economy have been less affected.


When the president wanted to tackle inflation, his government invited Mexican business leaders for private conversations to agree an informal pact limiting price rises on basic groceries. “It wasn’t a big sacrifice,” noted the owner of one large Mexican group.


Mexico’s oligarchs have reinforced the impression of a cosy relationship with the president by making supportive statements in public and confining any criticism to conversations behind closed doors. “All the Mexican business leaders complain about Amlo,” says the chief executive of one big foreign company. “But when they meet him, they all appear afterwards in public saying how wonderful he is . . . It’s a circle of collusion.”


López Obrador has also made life difficult for international businesses in Mexico in less direct ways. Shortly after taking office, he cancelled a $13bn new airport for Mexico City that would have replaced the congested Benito Juárez facility, claiming the partly-executed project was too extravagant, even though cancelling it cost billions.


“The airport decision was 100 per cent political,” says one leading Mexican businessman. “It was the worst economic decision this government has made.”


Instead, López Obrador ordered the army to remodel a nearby military air base and turn it into an additional airport for the capital. The new Felipe Ángeles facility opened in March at a cost that former finance minister Carlos Urzúa estimates at $5.7bn. Most airlines have shunned it because of its poor road and rail connections (the government is working on new transport links). Its only regular international flight goes to Venezuela, a nation under US sanctions. Flights to Cuba, another nation under US sanctions, are due to start in July.


The new airport is only 45km from Benito Juárez, a proximity that has forced a controversial redesign of Mexico City’s airspace. This triggered what the International Air Transport Association called a “very worrying” increase in alerts over flights at risk of collision. Mexico’s airlines are currently unable to expand flights to the US because the Federal Aviation Administration downgraded its air safety rating last year.


“We are losing out on the potential we have as a country and as a city” because of the airport situation, says the chief executive of the foreign firm. “There may come a moment where you have to fly to Monterrey to take a plane to Barcelona.”


Despite this, Omar Troncoso, a nearshoring expert at Kearney in Mexico, sees some reason for optimism in the recent geopolitical shifts. He said that as recently as last year, “Mexico was still more expensive than many [Asian] low-cost countries” when total costs of getting the product to the customer were factored in. Then “we had a massive disruption in [the] supply chain and . . . the price of a container being brought from China to the US skyrocketed . . . It [is] now cheaper to produce in Mexico,” he says.


Troncoso believes that the nearshoring aimed at supplying the US market currently under way will take another two or three years to show up in the data. “If you . . . look for a space in some of the border cities . . . the real estate agents will tell you that you’re going to have to wait until 2025 — everything . . . is already sold out.”


Sergio Argüelles González, head of Mexico’s industrial parks association, says 2021 was a bumper year with “spectacular demand” and predicts this will continue if there are sufficient power supplies.


“In spite of Amlo, something is happening,” says Citi’s Revilla. “The [momentum] for nearshoring is big and hopefully will outlast Amlo and help Mexico in the medium term.” De la Calle, the consultant, voices a similar view: “Nearshoring is happening,” he says. “But . . . if we did things properly, it could be three times more than it is now . . . The opportunity cost of López Obrador is very big.” 


Sunday, August 7, 2022

Parishes affected by the Dissolution of the English monasteries, 1535, experienced a rise of the gentry & had more innovation, higher yield in agriculture, more population working outside of agriculture, & ultimately higher levels of industrialization

The Long-Run Impact of the Dissolution of the English Monasteries. Leander Heldring, James A Robinson, Sebastian Vollmer. The Quarterly Journal of Economics, Volume 136, Issue 4, November 2021, Pages 2093–2145, https://doi.org/10.1093/qje/qjab030

Abstract: We use the effect of the Dissolution of the English Monasteries after 1535 to test the commercialization hypothesis about the roots of long-run English economic development. Before the Dissolution, monastic lands were relatively unencumbered by inefficient feudal land tenure but could not be sold. The Dissolution created a market for formerly monastic lands, which could now be more effectively commercialized relative to nonmonastic lands, where feudal tenure persisted until the twentieth century. We show that parishes affected by the Dissolution subsequently experienced a rise of the gentry and had more innovation and higher yield in agriculture, a greater share of the population working outside of agriculture, and ultimately higher levels of industrialization. Our results are consistent with explanations of the Agricultural and Industrial Revolutions which emphasize the commercialization of society as a key precondition for taking advantage of technological change and new economic opportunities.

JEL codes: N43, N63, N93 (Europe: Pre-1913); O14 (Industrialization; Manufacturing and Service Industries; Choice of Technology); Q15 (Land Ownership and Tenure; Land Reform; Land Use; Irrigation; Agriculture and Environment)


6 Conclusions

In this paper we conducted what to our knowledge is the first empirical investigation of one aspect of the salient commercialization thesis about the causes of industrialization and the industrial revolution in England. Though we cannot test the idea that it was commercialization that caused the industrial revolution, we used the impact of the Dissolution of the monasteries in England between 1536 and 1540 as a source of variation in the extent of commercialization within England. Tawney (1941a,b) first proposed that the Dissolution and subsequent sell-off of church land, representing around one-third of agricultural land in England, created a huge shock to the land market with profound consequences. We argued that this can be viewed as a natural experiment in the modernization of economic institutions and hypothesized that the subsequent thickening of the land market would have had a major positive impact on resource allocation and incentives. This was particularly because monastic lands were relatively free of customary perpetual copyhold tenancies, which were a direct legacy of feudalism. To investigate this, we digitized the 1535 Valor Ecclesiasticus, the census that Henry VIII commissioned on monastic incomes.


Using the presence of monastically owned land at the parish level as our main explanatory variable, we showed that the Dissolution had significant positive effects on industrialization, which we measured using data from the 1838 Mill Census, the first time the British government collected systematic data on this driving sector of the Industrial Revolution.
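As a rough illustration of this kind of parish-level estimation, the sketch below regresses a hypothetical mill count on an indicator for monastic land; the data, variable names and specification are invented for illustration and are much simpler than the paper's actual analysis.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical parish-level data: monastic = 1 if the parish held monastic
    # land in the 1535 Valor Ecclesiasticus; mills_1838 = mills recorded in the
    # 1838 Mill Census; ln_area is a stand-in control variable.
    parishes = pd.DataFrame({
        "monastic":   [1, 0, 1, 0, 1, 0, 1, 0],
        "mills_1838": [3, 1, 2, 0, 4, 1, 2, 0],
        "ln_area":    [2.1, 2.0, 2.3, 1.9, 2.4, 2.2, 2.0, 1.8],
    })

    # Cross-sectional OLS of later industrialization on monastic land presence.
    model = smf.ols("mills_1838 ~ monastic + ln_area", data=parishes).fit()
    print(model.params["monastic"])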


We also showed the Dissolution was associated with structural change, specifically the movement of labor out of agriculture and into more industrialized sectors of the economy.


We then examined several channels that might link the Dissolution to these long-run outcomes. We showed that the Dissolution was associated, as Tawney hypothesized, with social change and the rise of a new class of commercially minded farmer. It was also associated with faster conversion from Catholicism, another factor plausibly linked to better economic performance.


We further found the Dissolution to be associated with greater agricultural investment, measured by patenting and land enclosures, and higher wheat yields. All in all, our findings support a quite traditional theory of the industrial, and perhaps the agricultural, revolution: that it was at least partially caused by the increasing commercialization of the economy, which had a series of institutional, social and economic effects.

Probability of males to outlive females: an international comparison from 1751 to 2020

Probability of males to outlive females: an international comparison from 1751 to 2020. Marie-Pier Bergeron-Boucher et al. BMJ Open, Volume 12, Issue 8, Aug 2022. https://bmjopen.bmj.com/content/12/8/e059964

Abstract

Objective To measure sex differences in lifespan based on the probability of males to outlive females.

Design International comparison of national and regional sex-specific life tables from the Human Mortality Database and the World Population Prospects.

Setting 199 populations spanning all continents, between 1751 and 2020.

Primary outcome measure We used the outsurvival statistic (φ) to measure inequality in lifespan between sexes, which is interpreted here as the probability of males to outlive females.

Results In random pairs of one male and one female at age 0, the probability of the male outliving the female varies between 25% and 50% for life tables in almost all years since 1751 and across almost all populations. We show that φ is negatively correlated with sex differences in life expectancy and positively correlated with the level of lifespan variation. The important reduction of lifespan inequality observed in recent years has made it less likely for a male to outlive a female.

Conclusions Although male life expectancy is generally lower than female life expectancy, and male death rates are usually higher at all ages, males have a substantial chance of outliving females. These findings challenge the general impression that ‘men do not live as long as women’ and reveal a more nuanced inequality in lifespans between females and males.


Discussion

Our study reveals a nuanced inequality in lifespan between females and males, with between one and two men out of four outliving a randomly paired woman at almost all points in time across 199 populations. These results complement the picture given by the comparisons based on life expectancy, which is a summary measure with no information on variation. A blind interpretation of life expectancy differences can sometimes lead to a distorted perception of the actual inequalities. Not all females outlive males, even if a majority do, and the minority that do not is not small. For example, a sex difference in life expectancy at birth of 10 years can be associated with a probability of males outliving females as high as 40%, indicating that 40% of males have a longer lifespan than that of a randomly paired female. Not all males have a disadvantage of 10 years, a fact that is overlooked when comparisons rely solely on life expectancy. Instead, a small number of males live very short lives, and it is this group that produces much of the difference; more baby boys die than baby girls in most countries, for example.
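For intuition, here is a minimal sketch of how an outsurvival probability of this kind can be computed from two discretized age-at-death distributions, assuming independence between the randomly paired individuals; the toy distributions below are illustrative and are not taken from the paper.

    import numpy as np

    # Illustrative age-at-death distributions by single year of age (0 to 110).
    ages = np.arange(0, 111)
    f_male = np.exp(-0.5 * ((ages - 76) / 12) ** 2)    # toy male distribution
    f_female = np.exp(-0.5 * ((ages - 82) / 11) ** 2)  # toy female distribution
    f_male /= f_male.sum()
    f_female /= f_female.sum()

    # phi = P(male lifespan > female lifespan) for an independent random pair;
    # ties (deaths at the same age) are split evenly between the two sexes.
    p_male_dies_later = 1 - np.cumsum(f_male)           # P(male dies after age x)
    phi = np.sum(f_female * p_male_dies_later) + 0.5 * np.sum(f_female * f_male)
    print(round(phi, 3))   # roughly 0.36 with these toy inputs, within the 25-50% range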

The length of the lifespan of an individual results from a complex combination of biological, environmental and behavioural factors. Being male or female does impact lifespan, but it is not the only determinant contributing to inequalities. Lifespan has been shown to be influenced by marital status, income, education, race/ethnicity, urban/rural residence, etc.33 As we only disaggregated the population by sex and because of this complex interaction, lifespan distributions of females and males overlap. This nuance is captured by the φ metric. Males with a lower education level or who are unmarried have a particularly low chance of outliving a female. But males with a university degree or who are married have a higher chance of outliving females, in particular females with a lower education level and who are single.

As previously discussed, the φ metric expresses the probability of males to outlive females among randomly paired individuals, assuming independence between populations. However, males and females in a population are generally not random pairs but often couples, whose health and mortality have been found to be positively correlated due to a strong effect of social ties on health and longevity.34 Coupled individuals also influence each other’s health,35 and this is particularly true for males, who benefit more than females from being in a stable relationship.36 The datasets used for the analysis do not permit the estimation of the probability of males outliving females for non-randomly paired individuals. However, the outsurvival statistic relates to the probability of husbands outliving their wives, and even though such a measure accounts for the difference in age between husband and wife, it has been shown generally to be between 30% and 40%,37–39 values that are quite close to φ.

Other measures of overlap and distance between distributions could have been used. In the online supplemental materials, we compare the outsurvival statistic with a stratification index used by Shi and colleagues20 and the KL divergence. We found that all three indicators are strongly correlated and using any one of these would not have changed the general conclusions from this article. However, unlike the other indicators, φ directly indicates when males live longer than females, which we found in a few instances.

Trends over time in φ are consistent with the reversed trends in sex differences in life expectancy40: in developed countries, the probability of males outliving females decreased until the 1970s, after which it gradually increased in all populations. Studies showed that the increase in sex differences in mortality emerged in cohorts born after 1880,10 41 which is consistent with our analysis of φ (see online supplemental materials). The increase and decrease in sex differences in life expectancy were mainly attributed to the smoking epidemic and other behavioural differences between sexes.7 13 42

The φ values are generally higher in low/middle-income countries. However, this should not be interpreted as a sign of greater gender equality in survival. Southern Asian countries had very high φ values, above 50% in the 1950s and 1960s. Studies for India showed that mortality below age 5 was higher for females than males and remained higher for females in recent years.43 44 However, females had a growing mortality advantage above age 15 years since the 1980s, ‘balancing out’ the disadvantage at younger ages. The reasons for the higher φ and decreasing trends in developing regions vary across countries. It is outside the scope of this study to provide a detailed explanation for the trends in each country.

The outsurvival statistic can be informative for public health interventions.21 Governments develop public health programmes to reduce lifespan inequalities at different levels (eg, socioeconomic status, race, sex, etc). It would be misleading to say that half of the population is disadvantaged by sex differences in lifespan. The inequalities are more nuanced. If 40% of males live longer than females, it could be argued that if a policy aiming at reducing inequalities between sexes targeted the full male population, some of the efforts and investments would be misallocated. Such a policy could be more efficient if φ approaches 0, indicating that sex would explain a large part of the lifespan inequalities within the population, whereas a φ closer to 0.5 indicates that other characteristics (eg, socioeconomic and marital statuses) are involved in creating inequalities. We showed that some subpopulations of males have a high probability (above 50%) of outliving some subpopulations of females. Males who are married or have a university degree tend to outlive females who are unmarried or do not have a high school diploma. Inequalities in lifespan between sexes are attributable to some individuals within each population and not to the whole population. Indeed, Luy and Gast12 found that male excess mortality is mainly caused by some specific subpopulations of males with particularly high mortality. Being able to better identify the characteristics of the short-lived men could more efficiently help tackle male–female inequality.

An important result of our analysis is that the smaller the SD in the age at death, the smaller the φ. The reduction of lifespan inequality observed over time has thus made it less likely for males to outlive females. This is partly explained by the fact that the reduction in lifespan variation has been driven by mortality declines at younger ages.45 When looking at the lifespan distribution (as in figure 1, scenario D), survival improvements at younger ages narrowed the left tails of the distribution for both sexes. By reducing the left tail of the female distribution, without increasing the right tail of the male distribution, the overlapping area is reduced. In other words, the number of females with shorter lifespans, who are easier to outlive, decreased over time. Indeed, it has been shown that mortality declined at a faster pace for females than males below age 50, especially in the first half of the 20th century.46 47 This finding implies that, for the same difference in life expectancy, more effort is required today than in the past to reduce these inequalities. While inequalities were mainly attributable to infant and child mortality, they are today increasingly attributable to older and broader age groups. Men maintained their disadvantage at younger ages, but also faced an increasing disadvantage at older ages. Men are more prone to accidents and homicides in their 20s and 30s than females, and they tend to smoke and drink more, leading to higher cancer prevalence and death in their 60s. At the same time, women benefited from reduced maternal mortality and recorded faster mortality decline at older ages. Efforts in reducing lifespan inequalities must thus target diverse factors, causes and ages.13 46 48

A decrease of φ might indicate a discrepancy in the causes of death that affect males and females. External mortality due to accidents and suicide has become more relevant in shaping sex differences in survival in recent years in high-income populations.12 Another example is observed in Latin American populations, where homicides and violent deaths have had an increased burden among males in comparison with females since the 1990s.49 50 In Mexico, for example, the increase in homicide mortality, especially among men between 20 and 40 years, contributed to increasing the gap in mortality between females and males.51 This phenomenon is reflected in the decrease over time in the overlapping of lifespan distributions, directly informing healthcare systems of emerging inequalities.

However, one might ask whether a wider overlap is necessarily better for healthcare systems. On the one hand, a larger overlap means less inequality between sexes, but on its own it does not ensure that there is more ‘health justice’. For example, if the overlapping areas are large, this still shows a situation of great uncertainty in lifespan for both groups. A health policy evaluator could even prefer a situation where there is a small gap between groups but less inequality within the groups. In the case of sex differences, there might always be between-group differences due to biological factors,2 52 but more health equity could be reached by reducing within-group inequalities. We argue that the outsurvival statistic is a new tool for evaluating health inequalities between groups within a population by uncovering underlying dynamics that are otherwise hidden when looking only at conventional indicators. Therefore, it can inform healthcare systems of the directions to take to reach the preferred goal.


 

If children with low socio-economic status (SES) parents were to grow up in counties with economic connectedness comparable to that of the average child with high-SES parents, their incomes in adulthood would increase by 20% on average

Social Capital I: Measurement and Associations with Economic Mobility. Raj Chetty, Matthew O. Jackson, Theresa Kuchler, Johannes Stroebel, Nathaniel Hendren, Robert B. Fluegge, Sara Gong, Federico González. NBER Working Paper 30313. July 2022. DOI 10.3386/w30313

Abstract: In this paper—the first in a series of two papers that use data on 21 billion friendships from Facebook to study social capital—we measure and analyze three types of social capital by ZIP code in the United States: (i) connectedness between different types of people, such as those with low vs. high socioeconomic status (SES); (ii) social cohesion, such as the extent of cliques in friendship networks; and (iii) civic engagement, such as rates of volunteering. These measures vary substantially across areas, but are not highly correlated with each other. We demonstrate the importance of distinguishing these forms of social capital by analyzing their associations with economic mobility across areas. The fraction of high-SES friends among low-SES individuals—which we term economic connectedness—is among the strongest predictors of upward income mobility identified to date, whereas other social capital measures are not strongly associated with economic mobility. If children with low-SES parents were to grow up in counties with economic connectedness comparable to that of the average child with high-SES parents, their incomes in adulthood would increase by 20% on average. Differences in economic connectedness can explain well-known relationships between upward income mobility and racial segregation, poverty rates, and inequality. To support further research and policy interventions, we publicly release privacy-protected statistics on social capital by ZIP code at www.socialcapital.org.
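A minimal sketch of how a connectedness measure of this kind could be computed from a friendship edge list; the data and column names are hypothetical, and the papers' actual construction (SES ranking, weighting and privacy protection) is far more involved.

    import pandas as pd

    # Hypothetical individuals with an SES class and ZIP code.
    people = pd.DataFrame({
        "id":  [1, 2, 3, 4, 5, 6],
        "ses": ["low", "high", "low", "high", "low", "high"],
        "zip": ["10001", "10001", "10001", "10002", "10002", "10002"],
    })
    # Hypothetical undirected friendships between the ids above.
    edges = pd.DataFrame({"a": [1, 1, 3, 5, 5], "b": [2, 3, 4, 6, 4]})

    # One row per (person, friend) pair in both directions, with attributes attached.
    pairs = pd.concat([edges, edges.rename(columns={"a": "b", "b": "a"})])
    pairs = (pairs.merge(people, left_on="a", right_on="id")
                  .merge(people, left_on="b", right_on="id", suffixes=("", "_friend")))

    # Economic connectedness, up to scaling: among low-SES people, the average
    # share of friends who are high-SES, aggregated by the person's ZIP code.
    low = pairs[pairs["ses"] == "low"].assign(high_friend=lambda d: d["ses_friend"] == "high")
    ec_by_zip = low.groupby(["zip", "a"])["high_friend"].mean().groupby("zip").mean()
    print(ec_by_zip)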


Social Capital II: Determinants of Economic Connectedness. Raj Chetty, Matthew O. Jackson, Theresa Kuchler, Johannes Stroebel, Nathaniel Hendren, Robert B. Fluegge, Sara Gong, Federico Gonzalez. NBER Working Paper 30314. July 2022, revised August 2022. DOI 10.3386/w30314

Abstract: Low levels of social interaction across class lines have generated widespread concern and are associated with worse outcomes, such as lower rates of upward income mobility. Here, we analyze the determinants of cross-class interaction using data from Facebook, building upon the analysis in the first paper in this series. We show that about half of the social disconnection across socioeconomic lines—measured as the difference in the share of high-socioeconomic status (SES) friends between low- and high-SES people—is explained by differences in exposure to high- SES people in groups such as schools and religious organizations. The other half is explained by friending bias—the tendency for low-SES people to befriend high-SES people at lower rates even conditional on exposure. Friending bias is shaped by the structure of the groups in which people interact. For example, friending bias is higher in larger and more diverse groups and lower in religious organizations than in schools and workplaces. Distinguishing exposure from friending bias is helpful for identifying interventions to increase cross-SES friendships (economic connectedness). Using fluctuations in the share of high-SES students across high school cohorts, we show that increases in high-SES exposure lead low-SES people to form more friendships with high-SES people in schools that exhibit low levels of friending bias. Hence, socioeconomic integration can increase economic connectedness in communities where friending bias is low. In contrast, when friending bias is high, increasing cross-SES interaction among existing members may be necessary to increase economic connectedness. To support such efforts, we release privacy-protected statistics on economic connectedness, exposure, and friending bias for each ZIP code, high school, and college in the U.S. at www.socialcapital.org.
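As a rough sketch of the exposure versus friending-bias distinction described here (the numbers are invented, and the exact formula the papers use may differ in detail):

    # Illustrative: in a hypothetical high school, 40% of students are high-SES,
    # but low-SES students' friendships within the school are only 30% high-SES.
    exposure = 0.40            # share of high-SES people in the group
    share_high_friends = 0.30  # share of high-SES friends among low-SES members

    # Friending bias: how far realized friending falls short of what exposure
    # alone would predict (0 = no bias; larger = more under-friending).
    friending_bias = 1 - share_high_friends / exposure
    print(friending_bias)      # 0.25 in this illustration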


How do early menopause and menopause symptoms affect women’s careers? The conservative estimate of the cost of early menopause for a woman is £20k, while the cost of suffering an average level of menopause symptoms is £10k

The consequences of early menopause and menopause symptoms for labour market participation. Alex Bryson et al. Social Science & Medicine, Volume 293, January 2022, 114676. https://doi.org/10.1016/j.socscimed.2021.114676

Highlights

• Early natural menopause substantially reduces employment rates among women.

• The number of menopause symptoms women face is also associated with lower employment rates.

• These effects are larger for symptoms which women say “bother me a lot”.

• Psychological problems due to menopause are associated with the biggest employment effects.

Popular version: How does early menopause and menopause symptoms affect women’s careers? Mar 8 2022. https://blog.ukdataservice.ac.uk/how-does-early-menopause-and-menopause-symptoms-affect-womens-careers

Abstract: Using a difference-in-difference estimator we identify the causal impact of early menopause and menopause symptoms on the time women spend in employment through to their mid-50s. We find the onset of early natural menopause (before age 45) reduces months spent in employment by 9 percentage points once women enter their 50s compared with women who do not experience early menopause. Early menopause is not associated with a difference in full-time employment rates. The number of menopause symptoms women face at age 50 is associated with lower employment rates: each additional symptom lowers employment rates and full-time employment rates by around half a percentage point. But not all symptoms have the same effects. Vasomotor symptoms tend not to be associated with lower employment rates, whereas the employment of women who suffer psychological problems due to menopause is adversely affected. Every additional psychological problem associated with menopause reduces employment and full-time employment rates by 1–2 percentage points, rising to 2–4 percentage points when those symptoms are reported as particularly bothersome.

Keywords: Menopause, Early menopause, Menopausal symptoms, Vasomotor symptoms, Employment, Full-time employment, Birth cohort

5. Conclusions

Our paper is the first to estimate the effects of early menopause and menopausal symptoms on employment and full-time employment rates among women. We exploit prospective birth cohort data for all women born in a particular week in 1958 to estimate the causal effects of menopause on employment rates using a difference-in-difference strategy. This technique compares the gap in employment rates during their 20s and early 30s with the employment gap in their 50s for women who went on to experience early menopause versus those who did not. We make similar comparisons between women according to the intensity with which they experienced menopausal symptoms when aged 50. In doing so, we control for a rich array of variables collected at birth, in childhood, and in early adulthood that can affect employment prospects and experiences of menopause. We show that employment and full-time employment trends during their 20s and early 30s did not differ significantly between the ‘treated’ group – those who went on to experience early menopause or more menopausal symptoms – and their ‘control’ groups who did not experience early menopause or did not suffer many menopausal symptoms. This provides some assurance that their employment rates may have trended in similar fashion later in their lives had they not experienced menopause differently.
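A minimal sketch of a difference-in-difference regression of this general form; the data, variable names and two-period setup are hypothetical and far simpler than the paper's actual specification and controls.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical woman-by-period data: employed = employment rate in the period,
    # early_menopause = 1 for the treated group, post = 1 for observations taken
    # in the woman's 50s (0 for her 20s and early 30s).
    panel = pd.DataFrame({
        "employed":        [0.8, 0.7, 0.8, 0.6, 0.9, 0.8, 0.7, 0.5],
        "early_menopause": [0,   0,   1,   1,   0,   0,   1,   1],
        "post":            [0,   1,   0,   1,   0,   1,   0,   1],
    })

    # The coefficient on the interaction term is the difference-in-difference
    # estimate of the employment effect of early menopause in women's 50s.
    did = smf.ols("employed ~ early_menopause * post", data=panel).fit()
    print(did.params["early_menopause:post"])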

We find that women’s employment rates and full-time employment rates fall as the number of menopausal symptoms they report rises. Effects are larger for symptoms that are reported as ‘bothersome’. The effects are quantitatively large. For instance, a woman who experiences the mean number of menopausal symptoms at age 50 can expect to have an employment rate in her 50s that is 4 percentage points lower than a woman who has no menopausal symptoms.

Different types of menopause symptom have different employment effects. For instance, vasomotor symptoms do not affect full-time employment rates, and they only affect employment rates where they are considered ‘bothersome’. In contrast, psychological health problems associated with menopause significantly lower employment and full-time employment rates, and effects are much larger when those symptoms are ‘bothersome’.

Early menopause is associated with a very large (9 percentage point) reduction in employment rates once women reach their 50s, yet it has no statistically significant effect on women's full-time employment rates. It is unclear why early menopause should affect employment rates, but not full-time employment rates. This issue is worthy of further investigation.

It is striking that the inclusion and exclusion of potential confounders make very little difference to the impact of early menopause and menopause symptoms on employment and full-time employment rates. Even though their inclusion increases the variance in employment rates explained by our models (as indicated by the adjusted R-squared), the coefficient and statistical significance of the interaction capturing the impact of menopause are nearly identical in all cases. Following Oster (2019), we take coefficient stability in the face of adjustments to conditioning covariates as an indication that results are unlikely to be biased by omitted variables.

There are some limitations to this study. First, although women are asked specifically to identify health-related symptoms due to the menopause, in some cases those symptoms may be due to other changes women are going through at the same time which are not directly linked to the menopause. Second, our data only collect information on symptoms related to menopause in the year leading up to the survey interview at age 50. Some women may have experienced symptoms earlier which did not persist to age 50, leading to some error in our ability to accurately capture symptoms related to menopause. Some women who experienced symptoms, but not at age 50, will be misclassified as having no symptoms. However, assuming symptoms experienced earlier than age 50 also have a detrimental impact on employment, this will mean our estimates of symptoms’ effects on employment are downwardly biased. Third, it is worth recalling that the Great Recession hit when the women in the study reached age 50. This was a very severe recession creating what were, at the time, unprecedented labour market problems for many. It would be valuable to see whether our results are replicated in more benign labour market conditions.

These negative employment effects of early menopause and menopausal symptoms are cause for some concern, not only because the size of the effects is large, but also because so many women suffer these problems. As we have shown, the mean number of menopausal symptoms experienced by women in this birth cohort when aged 50 was 8, including 2 particularly ‘bothersome’ symptoms. Five percent of women in the estimation sample had experienced early menopause.

These employment effects of early menopause and menopause symptoms add to the personal costs they have for women suffering from them in terms of their physical and mental health, and potentially their effects on women's private lives, although we do not quantify them here. They also have costs for society, in terms of the health care costs of treating women's symptoms, potential productivity losses from women's lost hours of work and ability to work productively. It is conceivable that they will also affect women's retirement decisions and thus pension entitlements.

Having identified the size and extent of the problem, government and employers should consider steps that could be taken to ameliorate the problems women face in their working lives due to the menopause. That said, this is the first study of its kind, so there is value in seeking to replicate and extend research investigating the impact of early menopause and menopausal symptoms on labour market outcomes. First, it would be valuable to know whether the effects we identify might vary for other cohorts of women, including more recent entrants to the labour market. Second, there would be value in exploring the heterogeneity of menopausal effects and whether there are aspects of women's experiences that may ameliorate the effects of menopause. For instance, it may be that women are better able to manage menopause symptoms where they have greater opportunities to manage working patterns or working hours, as might be the case among self-employed women or employees in workplaces with policies and practices expressly intended to assist women affected by menopause. Third, we know very little about the effects of early menopause and menopausal symptoms on other aspects of women's labour market experiences. We would have a better picture if studies were undertaken to investigate the impacts of menopause on women's wellbeing at work, their job satisfaction and their earnings. Finally, we know of no studies piloting policies or practices in the workplace that might assist women in raising health-related problems they may have during menopause, nor in coping with those problems. These evaluations are needed to provide the evidence base employers and government need so they know what actions to take to improve women's working lives.