Tweets We Like Aren’t Alike: Time of Day Affects Engagement with Vice and Virtue Tweets. Ozum Zor, Kihyun Hannah Kim, Ashwani Monga. Journal of Consumer Research, Volume 49, Issue 3, October 2022, Pages 473–495, https://doi.org/10.1093/jcr/ucab072
Abstract: Consumers are increasingly engaging with content on social media platforms, such as by “following” Twitter accounts and “liking” tweets. How does their engagement change through the day for vice content offering immediate gratification versus virtue content offering long-term knowledge benefits? Examining when (morning vs. evening) engagement happens with which content (vice vs. virtue), the current research reveals a time-of-day asymmetry. As morning turns to evening, engagement shifts away from virtue and toward vice content. This asymmetry is documented in three studies using actual Twitter data—millions of data points collected every 30 minutes over long periods of time—and one study using an experimental setting. Consistent with a process of self-control failure, one of the Twitter data studies shows a theory-driven moderation of the asymmetry, and the experiment shows mediation via self-control. However, multiple processes are likely at play, as time does not unfold in isolation during a day, but co-occurs with the unfolding of multiple events. These results provide new insights into social media engagement and guide practitioners on when to post which content.
Keywords: time of day, vice, virtue, content engagement, self-control failure, Twitter
Tuesday, September 27, 2022
As morning turns to evening, engagement on Twitter shifts away from virtue and toward vice content (celebrity gossip, food, etc.)
Monday, September 26, 2022
Setting ambitious goals is a proven strategy for improving performance, but those with highly ambitious goals (and those with unambitious goals) were seen as less warm and as offering less relationship potential
Interpersonal consequences of conveying goal ambition. Sara Wingrove, GrĂ¡inne M. Fitzsimons. Organizational Behavior and Human Decision Processes, Volume 172, September 2022, 104182. https://doi.org/10.1016/j.obhdp.2022.104182
Highlights
• Ambition influences interpersonal expectations of warmth and relationship potential.
• Interpersonal expectations are driven by perceived goal supportiveness.
• Unambitious and highly ambitious goals both signal lower goal supportiveness.
• Moderately ambitious goals are evaluated the best interpersonally.
Abstract: Setting ambitious goals is a proven strategy for improving performance, but we suggest it may have interpersonal costs. We predict that relative to those with moderately ambitious goals, those with highly ambitious goals (and those with unambitious goals) will receive more negative interpersonal evaluations, being seen as less warm and as offering less relationship potential. Thirteen studies including nine preregistered experiments, three preregistered replications, and one archival analysis of graduate school applications (total N = 6,620) test these hypotheses. Across career, diet, fitness, savings, and academic goals, we found a robust effect of ambition on judgments, such that moderately ambitious goals led to the most consistently positive interpersonal expectations. To understand this phenomenon, we consider how ambition influences judgments of investment in one’s own goals as opposed to supportiveness for other people’s goals and explore expectations about goal supportiveness as one mechanism through which ambition may influence interpersonal judgments.
Keywords: Goals, Ambition, Interpersonal perception, Attributions
Sunday, September 25, 2022
Sexual Repertoire, Duration of Partnered Sex, Sexual Pleasure, and Orgasm: A US Nationally Representative Survey of Adults shows that while women and men reported a similar actual duration of sex, men wished it to last longer
Sexual Repertoire, Duration of Partnered Sex, Sexual Pleasure, and Orgasm: Findings from a US Nationally Representative Survey of Adults. Debby Herbenick, Tsung-chieh Fu & Callie Patterson. Journal of Sex & Marital Therapy, Sep 23 2022. https://doi.org/10.1080/0092623X.2022.2126417
Abstract: In a confidential U.S. nationally representative survey of 2,525 adults (1300 women, 1225 men), we examined participants’ event-level sexual behaviors, predictors of pleasure and orgasm, and perceived actual and ideal duration of sex, by gender and age. Event-level kissing, cuddling, vaginal intercourse, and oral sex were prevalent. Sexual choking was more prevalent among adults under 40. While women and men reported a similar actual duration of sex, men reported a longer ideal duration. Participants with same-sex partners reported a longer ideal duration than those with other-sex partners. Finally, findings show that gendered sexual inequities related to pleasure and orgasm persist.
Credence to assign to philosophical claims that were formed without any knowledge of the current philosophical debate & little or no knowledge of the relevant empirical or scientific data
The end of history. Hanno Sauer. Inquiry, Sep 19 2022. https://doi.org/10.1080/0020174X.2022.2124542
Abstract: What credence should we assign to philosophical claims that were formed without any knowledge of the current state of the art of the philosophical debate and little or no knowledge of the relevant empirical or scientific data? Very little or none. Yet when we engage with the history of philosophy, this is often exactly what we do [sic; i.e., we do give such claims credence]. In this paper, I argue that studying the history of philosophy is philosophically unhelpful. The epistemic aims of philosophy, if there are any, are frustrated by engaging with the history of philosophy, because we have little reason to think that the claims made by history’s great philosophers would survive closer scrutiny today. First, I review the case for philosophical historiography and show how it falls short. I then present several arguments for skepticism about the philosophical value of engaging with the history of philosophy and offer an explanation for why philosophical historiography would seem to make sense even if it didn’t.
Keywords: History of philosophy, metaphilosophy, philosophical methodology, social epistemology, epistemic peerhood
Consider Plato’s or Rousseau’s evaluation of the virtues and vices of democracy. Here is a (non-exhaustive) list of evidence and theories that were unavailable to them at the time:
Historical experiences with developed democracies
Empirical evidence regarding democratic movements in developing countries
Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet jury theorem, Arrow’s impossibility results, the Hong–Page theorem, the median voter theorem, the miracle of aggregation, etc. (a toy simulation of the first follows this list)
Existing studies on voter behavior, polarization, deliberation, information
Public choice economics, incl. rational irrationality, democratic realism
The whole subsequent debate on their own arguments
[…]
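To make the first item on that list concrete, here is a minimal Monte Carlo sketch of the Condorcet jury theorem. The competence value p = 0.6, the jury sizes, and the trial count are illustrative assumptions, not parameters from any study.

```python
import random

def majority_correct_prob(n_voters, p=0.6, trials=2_000):
    """Estimate the probability that a simple majority of n_voters,
    each independently correct with probability p, picks the right option."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101, 1001):  # odd sizes avoid ties
    print(n, round(majority_correct_prob(n), 3))
# As the theorem predicts for p > 0.5, majority accuracy rises toward 1.
```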
When it comes to people currently alive, we would steeply discount the merits of the contribution of any philosopher whose work was utterly uninformed by the concepts, theories and evidence just mentioned (and whatever other items belong on this list). It is not clear why the great philosophers of the past should not be subjected to the same standard. (Bear in mind that time and attention are severely limited resources. Therefore, every decision we make about whose work to dedicate our time and attention to faces important trade-offs.)
The nature/nurture debate in moral psychology illustrates the same point. Philosophers have long discussed whether there is an innate moral faculty, and what its content may consist in. Now consider which theories and evidence were unavailable to historical authors such as Hume or Kant when they developed their views on the topic, and compare this to a recent contribution to the debate (Nichols et al. 2016):
Linguistic corpus data
Evolutionary psychology
Universal moral grammar theory
Sophisticated statistical methods
Bayesian formal modeling
250 years of the nature/nurture debate
250 years of subsequent debates on Hume or Kant
[…]
Finally, consider Hobbes’ justification of political authority in terms of how it allows us to avoid the unpleasantness of the state of nature. Here are some concepts and theories that were not available to him when he devised his arguments (a game-theoretic sketch follows the list):
Utility functions
Nash equilibria
Dominant strategy
Backward induction
Behavioral economics
Experimental game theory
Biological evidence on the adaptivity of cooperation
Empirical evidence regarding life in hunter/gatherer societies
Cross-cultural data regarding life in contemporary tribal societies
[…]
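As a concrete example of the machinery Hobbes lacked, here is a minimal sketch of a two-player stag hunt, a standard stylization of the state of nature, checked for Nash equilibria by brute force. The payoff numbers are illustrative assumptions.

```python
from itertools import product

ACTIONS = ("cooperate", "defect")  # hunt stag together vs. go it alone

# Illustrative stag-hunt payoffs: mutual cooperation beats mutual
# defection, but cooperating alone leaves you worse off than defecting.
PAYOFFS = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"): (0, 3),
    ("defect", "cooperate"): (3, 0),
    ("defect", "defect"): (3, 3),
}

def is_nash(a1, a2):
    """Neither player can gain by unilaterally switching actions."""
    u1, u2 = PAYOFFS[(a1, a2)]
    return (all(u1 >= PAYOFFS[(d, a2)][0] for d in ACTIONS)
            and all(u2 >= PAYOFFS[(a1, d)][1] for d in ACTIONS))

for profile in product(ACTIONS, repeat=2):
    print(profile, is_nash(*profile))
# Both (cooperate, cooperate) and (defect, defect) are equilibria: the
# "state of nature" can be a stable trap even when cooperation pays more.
```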
Again, when it comes to deciding whose philosophical work to devote our time and attention to, any person that didn’t have any knowledge whatsoever of the above items would be a dubious choice.
A version of this problem that is somewhat more specific to moral philosophy is that in ethics, it is often important not to assign disproportionate testimonial weight to people whom we have good reason to suspect harbored deeply objectionable attitudes or publicly expressed moral beliefs we have reason to deem unjustified and/or morally odious. Personally, I have made a habit of not heeding the ethical advice of Adolf Eichmann, Ted Bundy, and various of my family members. But upon looking at the moral views held by many of the most prominent authors in the history of philosophy, one often cannot help but shudder: Plato advocated abolishing the family, violently if need be; Aristotle defended (a version of) slavery as natural; Locke advocated religious toleration, only to exclude atheists from the social contract; Kant argued that masturbation is one of the gravest moral transgressions there is; Hegel claimed that it is an a priori truth that the death penalty is morally obligatory, and indeed a form of respect towards the executed; the list of historical philosophers who held sexist, racist and other discriminatory views would be too long to recount here.
Saturday, September 24, 2022
Acute climate risks in the financial system: 'top-down' approaches are likely to be flawed when applied at granular spatial and temporal scales, as the Network of Central Banks and Supervisors for Greening the Financial System does
Pitman AJ, Fiedler T, Ranger N, Jakob C, Ridder N, Perkins-Kirkpatrick S, Wood N, Abramowitz G, Aug 2022, 'Acute climate risks in the financial system: examining the utility of climate model projections', Environmental Research: Climate, vol. 1, 025002, http://dx.doi.org/10.1088/2752-5295/ac856f
Abstract: Efforts to assess risks to the financial system associated with climate change are growing. These commonly combine the use of integrated assessment models to obtain possible changes in global mean temperature (GMT) and then use coupled climate models to map those changes onto finer spatial scales to estimate changes in other variables. Other methods use data mined from 'ensembles of opportunity' such as the Coupled Model Intercomparison Project (CMIP). Several challenges with current approaches have been identified. Here, we focus on demonstrating the issues inherent in applying global 'top-down' climate scenarios to explore financial risks at geographical scales of relevance to financial institutions (e.g. city-scale). We use data mined from the CMIP to determine the degree to which estimates of GMT can be used to estimate changes in the annual extremes of temperature and rainfall, two compound events (heatwaves and drought, and extreme rain and strong winds), and whether the emission scenario provides insights into the change in the 20, 50 and 100 year return values for temperature and rainfall. We show that GMT provides little insight on how acute risks likely material to the financial sector ('material extremes') will change at a city-scale. We conclude that 'top-down' approaches are likely to be flawed when applied at a granular scale, and that there are risks in employing the approaches used by, for example, the Network of Central Banks and Supervisors for Greening the Financial System. Most fundamentally, uncertainty associated with projections of future climate extremes must be propagated through to estimating risk. We strongly encourage a review of existing top-down approaches before they develop into de facto standards and note that existing approaches that use a 'bottom-up' strategy (e.g. catastrophe modelling and storylines) are more likely to enable a robust assessment of material risk.
4. Discussion and conclusions
We welcome the initiatives within the global financial system to examine acute risks associated with physical climate change and we strongly concur that acute risks associated with weather and climate threaten elements of the financial system. Using physical climate models to examine large-scale risks or as guides for scenario or storyline planning is useful, and using reduced complexity models such as IAMs to develop large ensembles of how GMT responds to emission scenarios is well established. Our analysis does not examine whether acute risks are material; rather, we examine the assumption, within methodologies including but not limited to NGFS, that large ensembles of GMT can be used to inform acute climate risk at spatial scales well below the sub-regional scale.
The NGFS methodology links large ensembles of GMT, via ISIMIP, to local and regional-scale climate risk. The methods used by NGFS to create large ensembles of GMT are not in question, nor are the climate models used in ISIMIP which have considerable validity for the large-scale assessment of impacts of climate change. The issue is the link implied within the NGFS methodology that translates GMT, through ISIMIP, to a granular level of physical climate risk which, in reality, is generated through climate-induced weather-scales and weather-related extremes. This link depends on the patterns simulated by the ISIMIP models, balancing the thermodynamic and dynamic responses, and their capacity to reflect the correlations between GMT and material extremes at a granular scale.
Our results show that irrespective of the capacity to derive a distribution for possible changes in GMT, and however well this distribution samples uncertainty, the methods used to link GMT to local, i.e. city-scale, annual extremes of rainfall and wind, to the return periods of two compound events, or to the 1-in-20, 1-in-50 and 1-in-100 year rainfall or temperature extremes are deeply uncertain. Whether the ISIMIP models or CMIP models are used, the translation of GMT into spatial expressions of extremes leads to uncertainty not merely in the magnitude of change, but in the sign of many changes. The uncertainty dwarfs any signal from emission scenarios, at least over the next 50 years. There are strategies to reduce the apparent uncertainty in projected extremes by sampling climate models according to skill or independence, but whether this reduces actual uncertainty, thereby enabling more robust decisions on managing risk, is unknown. Before we continue, we emphasise that the conclusion that there is no useful link between GMT and material risks does not mean that climate models have no role to play in assessing the impact of climate change on financial risk.
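For readers unfamiliar with return values, the sketch below shows the standard recipe (fit a generalized extreme value distribution to annual maxima, then invert it) applied to a toy "ensemble". The synthetic data and the size of the inter-member shift are assumptions for illustration, not CMIP output, but they reproduce the qualitative point that member-to-member spread in the change of a 1-in-50-year value can straddle zero.

```python
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, period_years):
    """Fit a GEV to annual maxima and return the level exceeded on
    average once every `period_years` years."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1 - 1 / period_years, shape, loc, scale)

rng = np.random.default_rng(0)
changes = []
for member in range(10):  # toy 10-member ensemble
    baseline = rng.gumbel(loc=50, scale=10, size=30)  # 30 yrs of annual maxima
    # Each member's future shift is drawn at random: a stand-in for
    # inter-model differences at a fixed GMT level.
    future = rng.gumbel(loc=50 + rng.normal(0, 3), scale=10, size=30)
    changes.append(return_level(future, 50) - return_level(baseline, 50))

print([f"{c:+.1f}" for c in changes])  # increases and decreases coexist
```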
One of the advantages of physical climate models, including those used in ISIMIP, is that they provide easily accessible and quantitative information. Within CMIP6 for example, which includes newer models than ISIMIP, a multi-petabyte store of open access climate change information exists. This is obviously very attractive to groups seeking to build approaches, or undertake analyses, that can be applied anywhere in the world. However, there are two fundamental principles to consider in using any physical climate modelling system. First, accuracy and precision are not the same thing; physical climate models are very precise, but not necessarily accurate and may not be accurate for problems they were not designed for. Second, uncertainty cannot be ignored; deep uncertainty exists in climate projections (Lempert et al 2013) and affects both the magnitude and sign of the change in most physical risks and very probably most material risks. This cannot be ignored because the consequences are not easy to predict.

Ranger et al (2022) describe, for example, the stress testing run by the Bank of England (2020), noting that the input data is largely sourced via the NGFS methodology and that no uncertainty information is provided. From a physical climate projections perspective this is simply flawed. Refer to figures 2(d) and 3(a) and take any value of GMT and select the associated wind speed change or return period. Depending on which CMIP6 model and emission scenario is selected, increases, decreases or no change can be obtained. It is deeply misleading to select a single value from the ranges shown in figures 2 and 3 without also accounting for the uncertainty.

Further, despite claims within NGFS (Bertram et al 2021) that the IAM used (MAGICC6) is designed 'to capture the full GMT uncertainty for different emissions scenarios', and accepting MAGICC6 is a legitimate tool to use, it is misleading to suggest it captures the full range of uncertainty. It is not known, and it is probably unknowable, to what degree any IAM captures the full range of uncertainty. The ISIMIP project is not designed to select global models that capture uncertainty, or independence (Abramowitz and Bishop 2014), or particularly good or bad models. It is simply an ensemble of opportunity (Tebaldi and Knutti 2007) with strengths and weaknesses.

The ISIMIP models are legitimate tools to use, but they are quite old model versions, quite coarse in terms of spatial resolution, and only six models complete the ensemble. Referring to the uncertainty bars shown in figures 4–7, selecting six CMIP models would reduce the apparent uncertainty because of the smaller sample size, but it would not reduce the actual uncertainty. It is noteworthy here that even the full CMIP6 ensemble, which now includes over 50 models, samples an unknown fraction of the true uncertainty. We also note that assessing material risks using CMIP6 (with SSPs) is unlikely to lead to more robust conclusions than using CMIP5 (with RCPs). While climate models are improving, at the spatial scales of individual cities and on time scales of decades both CMIP5 and CMIP6 provide projections that cannot be clearly differentiated.
We acknowledge that many of these issues are clearly highlighted in the literature. Bertram et al (2021) notes that 'findings from the Climate Impact Explorer should thus be used to supplement rather than replace national or regional risk assessments'. They further note that 'uncertainty in the climate sensitivity is sampled by considering four different GCMs', and that 'several impact models are used to sample the uncertainty'. Bertram et al (2021) also notes:
Following established approaches in the scientific literature (see e.g. James et al 2017), we assess impact indicators as a function of the GMT level. This means we assume that a given GMT level will on average lead to the same change in that indicator even if it is reached at two different moments in time in two different emission scenarios. This assumption is generally well justified and differences are small compared to the spread across changes projected by different models (Herger et al 2015).
We strongly agree with these statements and emphasize the 'on average' and 'generally well justified'. The problem is, however, that while these approaches are well justified on average, the acute physical risks and the material extremes associated with regional-scale and finer scale climate change are not well described by averages. After all, the financial sector seeks to know which specific regions are most at risk, not that a fraction of the globe is at increased risk. If financial risk is aggregated to a continent, systematic errors associated with these assumptions might be averaged out, but the NGFS methodology is being used at a granularity well below that examined in this paper. This involves very significant uncertainties, and determining whether climate change results in a material extreme is country, economy and business specific. At these scales, and in the context of material extremes associated with climate-induced weather-scale phenomena, the way in which the NGFS methodology is being employed is very likely misleading. There is a key implication here that is deeply concerning:
If all Central Banks (or the over 100 members of NGFS) use a methodology that is systemically biased, this could itself lead to a major systemic risk to the global financial system.
The current NGFS scenarios do not represent the range of plausible climate outcomes possible at a country level—a systematic bias—and most banks, insurers and investors are using these scenarios without fully accounting for uncertainty. Misuse or misunderstanding of what climate models tell us, and assumptions that products like NGFS have utility at sub-national scales could make the risks we are trying to avoid through the NGFS scenarios worse. Rectifying this is important and requires an open collaboration between banks and the scientific community to develop scenarios appropriate for stress testing.
The most fundamental issue with assessing financial risk associated with acute physical risk relates to the acknowledgement that these risks are associated with weather, usually locally, and usually (but not necessarily) statistically extreme. Using global climate models, which do not resolve weather scales, are not appropriate for local scales, and may not capture material extremes, is therefore highly questionable. While using the quantitative information from climate models is tempting and provides a considerable amount of apparently precise information, failure to fully represent uncertainty leads to false confidence. By contrast, there are well-known ways to decouple assessments of acute physical risks from climate model quantitative information. Using climate models to inform scenarios, storylines (Shepherd 2019, Jack et al 2020) and stress testing, or using climate models to modify the statistics represented in current-day catastrophe modelling, can all help break the false assumption that the numerical precision in climate models equates to accuracy at a granular level. In many ways, this echoes guidance from Schinko et al (2017) to consider models as tools to explore a system as distinct from predicting a system, or Saravanan (2022), who explores the need to take climate models seriously, but not literally. Given the material risks from climate change are commonly the tail risks, more use of catastrophe modelling might lead to decision making that builds more resilient systems. However, some material risks are likely associated with long periods of drizzle, or of high cloud cover and still winds. These are events associated with persistence, which climate models are known to capture with relatively low skill (see for example Kumar et al 2013).
The relative ease with which large ensembles using IAMs can be generated and linked to acute risk at sub-regional scales is understandably attractive for large financial institutions, central banks and financial regulators. It is therefore unlikely that these will be wholly replaced by an alternative approach. This relative ease, however, hides immense uncertainty that is likely material, and that risks misleading an institution or regulator, exposing entities to litigation, and directly challenging centuries of accounting and assurance practice. We suggest three immediate actions:
- (a) The NGFS method is likely misleading for determining granular-level acute or material risks to the financial sector; we strongly advise that it be openly critiqued and not allowed to become a de facto standard by default.
- (b) No products or methods should be employed that fail to properly account for uncertainty, and how uncertainty is estimated needs very careful consideration. There is no evidence that merely adding more climate models, or more estimates of GMT, reduces uncertainty.
- (c) There is a rich history of assessing risk at the local scale (Ranger et al 2022). This 'bottom-up' assessment can utilize historical climate data, existing risk estimates, analysis of the vulnerability of an entity to these acute physical risks, stress testing of investment portfolios, and so on. The historical data can be perturbed using expert judgement based on multiple lines of evidence, including climate models (a toy sketch follows this list). A financial institution should confront the 'top-down' methodologies proposed by regulators with bottom-up assessments of its acute physical risks and review how different the resulting estimates are.
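A minimal sketch of the 'bottom-up' perturbation idea in (c): take an entity's historical record, scale it by a range of change factors spanning a multi-line-of-evidence judgement (which could include climate-model information), and see how the risk metric responds. The loss series, threshold, and factors below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 40 years of observed annual losses (e.g. flood claims, $m).
historical_losses = rng.lognormal(mean=1.0, sigma=1.2, size=40)

def exceedance_prob(losses, threshold):
    """Empirical probability that the annual loss exceeds `threshold`."""
    return float(np.mean(losses > threshold))

# Sweep change factors spanning the plausible range from expert judgement.
for factor in (0.9, 1.0, 1.1, 1.3):
    p = exceedance_prob(historical_losses * factor, threshold=20.0)
    print(f"factor {factor}: P(annual loss > $20m) = {p:.2f}")
```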
Perhaps the single most important point here is that while the 'top-down' approach is likely to become the de facto standard for assessing a financial institution's exposure to climate change, this should only be done in conjunction with alternative 'bottom-up' methods.
Finally, we note that climate science and the science of climate projections is evolving rapidly. Further, regulation and disclosure linked with climate risk is developing rapidly. A company with the ability to undertake, at least to some degree, a bottom-up assessment of material risks, and to engage with external parties from a position of understanding, will be well positioned as climate projections change. A company with internal capability will be more able to ask the right questions, avoid buying risk advice that is misleading, and be able to identify opportunities associated with climate change more quickly. While building some internal capability might seem confronting and expensive, building future strategies on information that is not understood and is potentially misleading is likely more so, and quite possibly exposes the global financial system to systemic risks of its own making.
Self-preferencing shouldn't be an antitrust offense
Antitrust Unchained: The EU’s Case Against Self-Preferencing. Giuseppe Colangelo. International Center for Law and Economics Working Paper No. 2022-09-22, Sep 22 2022. https://laweconcenter.org/resource/antitrust-unchained-the-eus-case-against-self-preferencing
Abstract: Whether self-preferencing is inherently anticompetitive has emerged as perhaps the core question in competition policy for digital markets. Large online platforms that act as gatekeepers of their ecosystems and engage in dual-mode intermediation have been accused of taking advantage of this hybrid business model to grant preferential treatment to their own products and services. In Europe, courts and competition authorities have advanced new antitrust theories of harm that target such practices, as have various legislative initiatives around the world. In the aftermath of the European General Court’s decision in Google Shopping, however, it is important to weigh the risk that labeling self-preferencing as per se anticompetitive may merely allow antitrust enforcers to bypass the legal standards and evidentiary burdens typically required to prove anticompetitive behavior. This paper investigates whether and to what extent self-preferencing should be considered a new standalone offense under European competition law.
Envy seems a rather stable disposition (both at the global level and within specific envy domains)
Erz, Elina, and Katrin Rentzsch. 2022. “Stability and Change in Dispositional Envy: Longitudinal Evidence on Envy as a Stable Trait.” PsyArXiv. September 20. doi:10.1177/08902070221128137
Abstract: Dispositional envy has been conceptualized as an emotional trait that varies across comparison domains (e.g., attraction, competence, wealth). Despite its prevalence and potentially detrimental effects, little is known about stability and change in dispositional envy across time due to a lack of longitudinal data. The goal of the present research was to close this gap by investigating stability and developmental change in dispositional envy over time. In a preregistered longitudinal study across 6 years, we analyzed data from N = 1,229 German participants (n = 510-634 per wave) with a mean age of 47.0 years at intake (SD = 12.4, range 18 to 88). Results from latent factor models revealed that both global and domain-specific dispositional envy were stable across 6 years in terms of their rank order and mean levels, with stability coefficients similar to those of other trait measures reported in literature. Moreover, a substantial amount of variance in global and domain-specific dispositional envy was accounted for by a stable trait factor. Results thus provide evidence for a stable disposition toward the experience of envy both at the global level and within specific envy domains. The present findings have important theoretical and practical implications for the stability and development of dispositional envy in adulthood and advance the understanding of emotional traits in general.
Specific cognitive abilities (SCA) are 56% heritable, similar to general intelligence, g; some SCA are significantly more or less heritable than others, 39-64%; SCA do not show the dramatic developmental increase in heritability seen for g
The genetics of specific cognitive abilities. Francesca Procopioa et al. Intelligence, Volume 95, November–December 2022, 101689. https://doi.org/10.1016/j.intell.2022.101689
Highlights
• Specific cognitive abilities (SCA) are 56% heritable, similar to g.
• Some SCA are significantly more heritable than others, 39% to 64%.
• Independent of g (‘SCA.g’), SCA remain substantially heritable (∼50%).
• SCA do not show the dramatic developmental increase in heritability seen for g.
• Genomic research on SCA.g is needed to create profiles of strengths and weaknesses.
Abstract: Most research on individual differences in performance on tests of cognitive ability focuses on general cognitive ability (g), the highest level in the three-level Cattell-Horn-Carroll (CHC) hierarchical model of intelligence. About 50% of the variance of g is due to inherited DNA differences (heritability) which increases across development. Much less is known about the genetics of the middle level of the CHC model, which includes 16 broad factors such as fluid reasoning, processing speed, and quantitative knowledge. We provide a meta-analytic review of 747,567 monozygotic-dizygotic twin comparisons from 77 publications for these middle-level factors, which we refer to as specific cognitive abilities (SCA), even though these factors are not independent of g. Twin comparisons were available for 11 of the 16 CHC domains. The average heritability across all SCA is 56%, similar to that of g. However, there is substantial differential heritability across SCA and SCA do not show the developmental increase in heritability seen for g. We also investigated SCA independent of g (SCA.g). A surprising finding is that SCA.g remain substantially heritable (53% on average), even though 25% of the variance of SCA that covaries with g has been removed. Our review highlights the need for more research on SCA and especially on SCA.g. Despite limitations of SCA research, our review frames expectations for genomic research that will use polygenic scores to predict SCA and SCA.g. Genome-wide association studies of SCA.g are needed to create polygenic scores that can predict SCA profiles of cognitive abilities and disabilities independent of g.
Keywords: Specific cognitive ability, Intelligence, Meta-analysis, Twin study, Heritability
4. Discussion
Although g is one of the most powerful constructs in the behavioural sciences (Jensen, 1998), there is much to learn about the genetics of cognitive abilities beyond g. Our meta-analysis of 747,567 twin comparisons yielded four surprising findings. The first is that the heritability of SCA is similar to that of g. The heritability of g is about 50% (Knopik et al., 2017) and the average heritability of SCA from our meta-analysis is 56%.
We focused on three additional questions: Are some SCA more heritable than others (differential heritability)? Does the heritability of SCA increase during development as it does for g? What is the heritability of SCA independent of g?
4.1. Differential heritability
We conclude that some SCA are more heritable than others. The estimates ranged from 39% for auditory processing (Ga) to 64% for quantitative knowledge (Gq) and processing speed (Gs). Our expectation that domains conceptually closer to g would have higher heritability than ones more conceptually distinct from g led us to be surprised by which SCA were most heritable.
For example, we hypothesised that acquired knowledge would be less heritable than fluid reasoning. This is because acquired knowledge is a function of experience, while fluid reasoning involves the ability to solve novel problems. To the contrary, our results indicate that acquired knowledge is the most heritable grouping of CHC domains, with an average heritability of 58%. In contrast, fluid reasoning has a comparatively low heritability estimate of 40%.
We were also surprised to find significant differences in heritability between SCA within the same functional grouping. For example, processing speed (Gs), one of the most heritable CHC domains, is within the functional grouping of general speed. Processing speed is defined as ‘the ability to automatically and fluently perform relatively easy or over-learned elementary cognitive tasks, especially when high mental efficiency (i.e., attention and focused concentration) is required’ (McGrew, 2009, p. 6). In contrast, reaction and decision speed (Gt), another CHC domain within the functional grouping of general speed for which twin comparisons were available, yielded one of the lowest heritabilities, 42%. It is defined as ‘the ability to make elementary decisions and/or responses (simple reaction time) or one of several elementary decisions and/or responses (complex reaction time) at the onset of simple stimuli’ (McGrew, 2009, p. 6). Why is reaction and decision speed (Gt) so much less heritable than processing speed (Gs) (42% vs 64%)? One possibility is that processing speed picks up ‘extra’ genetic influence because it involves more cognitive processing than reaction time, as suggested by their definitions. Moreover, Schneider and McGrew (2018) propose a hierarchy of speeded abilities (Kaufman, 2018, p. 108) in which Gs (which they call broad cognitive speed) has a higher degree of cognitive complexity than Gt (broad decision speed). But this would not explain why processing speed is more heritable than fluid reasoning (40%), which seems to involve the highest level of cognitive processing, such as problem solving and inductive and deductive reasoning.
One direction for future research is to understand why some SCA are more heritable than others. A first step in this direction is to assess the extent to which differential reliability underlies differential heritability because reliability, especially test-retest reliability rather than internal consistency, creates a ceiling for heritability. For example, the least heritable SCA is short-term memory (Gsm), for which concerns about test-retest reliability have been raised (Waters & Caplan, 2003).
If differential reliability is not a major factor in accounting for differential heritability, a substantive direction for research on SCA is to conduct multivariate genetic analyses investigating the covariance among SCA to explore the genetic architecture of SCA. This would be most profitable if these analyses also included g, as discussed below (SCA.g).
4.2. Developmental changes in SCA heritability
One of the most interesting findings about g is that its heritability increases linearly from 20% in infancy to 40% in childhood to 60% in adulthood. In contrast, SCA show average decreases in heritability from childhood to later life (column 1 in Fig. 4). Although several CHC domains show increases from early childhood (0–6 years) to middle childhood (7–11 years), this seems likely to be due at least in part to difficulties in reliably assessing cognitive abilities in the first few years of life.
It is puzzling that heritability increases developmentally for g but not for SCA because g represents what is in common among SCA. A previous meta-analysis that investigated cognitive aging found that the heritability of verbal ability, spatial ability and perceptual speed decreased after the age of around 60 (Reynolds & Finkel, 2015). While we did not find evidence for this for any of the SCA domains, we did observe the general trend of decreasing heritability for reading and writing (Grw) and visual processing (Gv) from middle childhood onwards.
We hoped to investigate the environmental hypothesis proposed by Kovas et al. (2013) to account for their finding that the heritability of literacy and numeracy SCA was consistent throughout the school years (∼65%), whereas the heritability of g increased from 38% at age 7 to 49% at age 12. They hypothesised that universal education for basic literacy and numeracy skills in the early school years reduces environmental disparities, which leads to higher heritability as compared to g, which is not a skill taught in schools.
We hoped to test this hypothesis by comparing SCA that are central to educational curricula and those that are not. For example, reading and writing (Grw), quantitative knowledge (Gq) and comprehension-knowledge (Gc) are central to all curricula, whereas other SCA are not explicitly taught in schools, such as auditory processing (Ga), fluid reasoning (Gf), processing speed (Gs), short-term memory (Gsm) and reaction and decision speed (Gt). Congruent with the Kovas et al. hypothesis, Grw, Gq and Gc yield high and stable heritabilities of about 60% during the school years. However, too few twin comparisons are available to test whether Ga, Gf, Gs, Gsm and Gt show increasing heritability during the school years.
4.3. SCA independent of g (SCA.g)
Although few SCA.g data are available, they suggest another surprising finding. In these studies, SCA independent of g are substantially heritable, 53%, very similar to the heritability estimate of about 50% for SCA uncorrected for g. This finding is surprising because a quarter of the variance of SCA is lost when SCA are corrected for g. More SCA.g data are needed to assess SCA issues raised in our review about the influence of g in differential heritability and developmental changes in heritability.
Although more data on SCA.g are needed, our preliminary results are encouraging in suggesting that genetic influence on SCA does not merely reflect genetic influence on g. Although g drives much of the predictive power of cognitive abilities, it should not overshadow the potential for SCA to predict profiles of cognitive strengths and weaknesses independent of g.
An exciting aspect of these findings is their implication for research that aims to identify specific inherited DNA differences responsible for the heritability of SCA and especially SCA.g. Genome-wide association (GWA) methods can be used to assess correlations across millions of DNA variants in the genome with any trait and these data can be used to create a polygenic score for the trait that aggregates these weighted associations into a single score for each individual (Plomin, 2018). The most powerful polygenic scores in the behavioural sciences are derived from GWA analyses for the general cognitive traits of g (Savage et al., 2018) and educational attainment (Lee et al., 2018; Okbay et al., 2022). It is possible to use these genomic data for g and educational attainment to explore the extent to which they can predict SCA independent of g and educational attainment even when SCA were not directly measured in GWA analyses, an approach called GWAS-by-subtraction (Demange et al., 2021), which uses genomic structural equation modeling (Grotzinger et al., 2019). We are also employing a simpler approach using polygenic scores for g and educational attainment corrected for g, which we call GPS-by-subtraction (Procopio et al., 2021).
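As I read it, the 'GPS-by-subtraction' idea is ordinary residualization applied to polygenic scores. Below is a minimal sketch on simulated scores; the correlation of 0.6 between the two scores is an assumption for illustration, and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
gps_g = rng.normal(size=n)                       # polygenic score for g
gps_ea = 0.6 * gps_g + 0.8 * rng.normal(size=n)  # EA score, partly overlapping

# Residualize the EA score on the g score; the residual is an
# "EA independent of g" score.
beta = np.cov(gps_ea, gps_g)[0, 1] / np.var(gps_g, ddof=1)
gps_ea_minus_g = gps_ea - beta * gps_g

print(round(float(np.corrcoef(gps_ea_minus_g, gps_g)[0, 1]), 3))  # ~0
```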
Ultimately, we need GWA studies that directly assess SCA and especially SCA.g. Ideally, multiple measures of each SCA domain would be used and a general factor extracted rather than relying on a single test of the domain. The problem is that GWA requires huge samples to detect the minuscule associations between thousands of DNA variants and complex traits known to contribute to their heritabilities. The power of the polygenic scores for g and educational attainment comes from their GWA sample sizes of >250,000 for g and more than three million for educational attainment. It is daunting to think about creating GWA samples of this size for tested SCA as well as g in order to investigate SCA.g. However, a cost-effective solution is to create brief but psychometrically valid measures of SCA that can be administered to the millions of people participating in ongoing biobanks for whom genomic data are available. For example, a gamified 15-min test has been created for this purpose to assess verbal ability, nonverbal ability and g (Malanchini et al., 2021). This approach could be extended to assess other SCA and SCA.g.
We conclude that SCA.g are reasonable targets for genome-wide association studies, which could enable polygenic score predictions of profiles of specific cognitive strengths and weaknesses independent of g (Plomin, 2018). For example, SCA.g polygenic scores could predict, from birth, aptitude for STEM subjects independent of g. Polygenic score profiles for SCA.g could be used to maximise children's cognitive strengths and minimise their weaknesses. Rather than waiting for problems to develop, SCA.g polygenic scores could be used to intervene to attenuate problems before they occur and help children reach their full potential.
4.4. Other issues
An interesting finding from our review is that SCA.g scores in which SCA are corrected phenotypically for g, by creating residualised scores from the regression of SCA on g, yield substantially higher estimates of heritability than SCA.g derived from Cholesky analyses.
We suspect that the difference is that regression-derived SCA.g scores remove phenotypic covariance with g, thus removing environmental as well as genetic variance associated with g. In contrast, Cholesky-derived estimates of the heritability of SCA independent of g are calibrated to the total variance of SCA, not to the phenotypic variance of SCA after g is controlled. Regardless of the reason for the lower Cholesky-derived estimates of the heritability of SCA.g as compared to regression-derived SCA.g scores, regression-derived phenotypic scores of SCA.g are likely the way that phenotypic measures of SCA will be used in phenotypic and genomic analyses. Cholesky models, by contrast, involve latent variables that cannot be converted to phenotypic scores for SCA.g.
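For concreteness, here is what a regression-derived SCA.g score looks like in code: the same residualization logic as in the earlier polygenic-score sketch, now applied to phenotypic scores. The simulated loading of 0.5 is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 1_000
g = rng.normal(size=n)              # phenotypic g score
sca = 0.5 * g + rng.normal(size=n)  # a specific ability loaded on g

# SCA.g: the part of SCA not predictable from g.
fit = LinearRegression().fit(g.reshape(-1, 1), sca)
sca_g = sca - fit.predict(g.reshape(-1, 1))

print(round(float(np.corrcoef(sca_g, g)[0, 1]), 3))  # ~0: g removed
```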
Another finding from our review is that heritability appears to be due to additive genetic factors. The average weighted MZ and DZ correlations across the 11 CHC domains for which twin comparisons were available were 0.72 and 0.44, respectively. This pattern of twin correlations, which is similar to that seen across all SCA as well as g, is consistent with the hypothesis that genetic influence on cognitive abilities is additive (Knopik et al., 2017). Additive genetic variance involves genetic effects that add up according to genetic relationships so that if heritability were 100%, MZ twins would correlate 1.0 and DZ twins would correlate 0.5 as dictated by their genetic relatedness. In contrast, if genetic effects operated in a non-additive way, the correlation between DZ twins would be less than half the correlation between MZ twins. Because MZ twins are identical in their inherited DNA sequence, only MZ twins capture the entirety of non-additive interactions among DNA variants. In other words, the hallmark of non-additive genetic variance for a trait is that the DZ correlation is less than half the MZ correlation. None of the SCA show this pattern of results (Fig. 3), suggesting that genetic effects on SCA are additive.
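The arithmetic behind this paragraph can be checked with Falconer's classic estimators; this is a back-of-envelope check, not the latent-variable models the review actually fits.

```python
# Weighted twin correlations reported in the review.
r_mz, r_dz = 0.72, 0.44

h2 = 2 * (r_mz - r_dz)  # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz    # shared environment
e2 = 1 - r_mz           # non-shared environment and error

print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.56 0.16 0.28
# h2 matches the ~56% average heritability. Non-additivity check:
# r_dz (0.44) exceeds half of r_mz (0.36), so the pattern is consistent
# with additive genetic effects, as the text argues.
```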
Finding that genetic effects on SCA are additive is important for genomic research because GWA models identify the additive effects of each DNA variant and polygenic scores sum these additive effects (Plomin, 2018). If genetic effects were non-additive, it would be much more difficult to detect associations between DNA variants and SCA. The additivity of genetic effects on cognitive abilities is in part responsible for the finding that the strongest polygenic scores in the behavioural sciences are for cognitive abilities (Allegrini et al., 2019; Cheesman et al., 2017; Plomin et al., 2013).
4.5. Limitations
The usual limitations of the twin method apply, although it should be noted that twin results in the cognitive domain are supported by adoption studies (Knopik et al., 2017) and by genomic analyses (Plomin & von Stumm, 2018).
As noted earlier, a general limitation is that some CHC categories have too few studies to include in meta-analyses. This is especially the case in the developmental analyses. Power is diminished by dividing the twin comparisons into five age categories. In addition, different measures are used at different ages; even when measures with the same label are used across ages, they might measure different things. Finally, the developmental results are primarily driven by cross-sectional results from different studies. Nonetheless, longitudinal comparisons within the same study have also found no developmental change in heritability estimates for some SCA (Kovas et al., 2013).
Another limitation of this study is that there might be disagreement concerning the CHC categories to which we assigned tests. We reiterate that we used the CHC model merely as a heuristic to make some sense of the welter of tests that have been used in twin studies, not as a definitive assignment of cognitive tests to CHC categories. We hope that Supplementary Table S-3 with details about the studies and measures will allow researchers to categorise the tests differently or to focus on particular tests. This limitation is also a strength of our review in that it points to SCA for which more twin research is needed. The same could be said for other limitations of SCA twin research such as the use of different measures across studies and the absence of any twin research at some ages.
A specific limitation of SCA.g is that removing all phenotypic covariance with g might remove too much variance of SCA, as mentioned in the Introduction. A case could be made that bi-factor models (Murray & Johnson, 2013) or other multivariate genetic models (Rijsdijk, Vernon, & Boomsma, 2002) would provide a more equitable distribution of variance between SCA and g indexed as a latent variable representing what is in common among SCA. However, the use of bifactor models is not straightforward (Decker, 2021). Moreover, phenotypic and genomic analyses of SCA.g are likely to use regression-derived SCA.g scores because bifactor models, like Cholesky models, involve latent variables that cannot be converted to phenotypic scores for SCA.g.
Finally, in this paper we did not investigate covariates such as average birth year of the cohort, or country of origin, nor did we examine sex differences in differential heritability or in developmental changes in heritability or SCA.g. Opposite-sex DZ twins provide a special opportunity to investigate sex differences. We have investigated these group differences in follow-up analyses (Zhou, Procopio, Rimfeld, Malanchini, & Plomin, 2022).
4.6. Directions for future research
SCA is a rich territory to be explored in future research. At the most general level, no data at all are available for five of the 16 CHC broad categories. Only two of the 16 CHC categories have data across the lifespan.
More specifically, the findings from our review pose key questions for future research. Why are some SCA significantly and substantially more heritable than others? How is it possible that SCA.g are as heritable as SCA? How is it possible that the heritability of g increases linearly across the lifespan, but SCA show no clear developmental trends?
Stepping back from these specific findings, for us the most far-reaching issue is how we can foster GWA studies of SCA.g so that we can eventually have polygenic scores that predict genetic profiles of cognitive abilities and disabilities that can help to foster children's strengths and minimise their weaknesses.
Friday, September 23, 2022
Are people more averse to microbe-sharing contact with ethnic outgroup members? It seems not.
Are people more averse to microbe-sharing contact with ethnic outgroup members? A registered report. Lei Fan, Joshua M. Tybur, Benedict C. Jones. Evolution and Human Behavior, September 22, 2022. https://doi.org/10.1016/j.evolhumbehav.2022.08.007
Abstract: Intergroup biases are widespread across cultures and time. The current study tests an existing hypothesis that has been proposed to explain such biases: the mind has evolved to interpret outgroup membership as a cue to pathogen threat. In this registered report, we test a core feature of this hypothesis. Adapting methods from earlier work, we examine (1) whether people are less comfortable with microbe-sharing contact with an ethnic outgroup member than an ethnic ingroup member, and (2) whether this difference is exacerbated by additional visual cues to a target's infectiousness. Using Chinese (N = 1533) and British (N = 1371) samples recruited from the online platforms WJX and Prolific, we assessed contact comfort with targets who were either East Asian or White and who were either modified to have symptoms of infection or unmodified (or, for exploratory purposes, modified to wear facemasks). Contact comfort was lower for targets modified to have symptoms of infection. However, we detected no differences in contact comfort with ethnic-ingroup targets versus ethnic-outgroup targets. These results do not support the hypothesis that people interpret ethnic outgroup membership alone as a cue to infection risk.
5. Discussion
The current study was designed to improve upon van Leeuwen and Petersen (2018), which tested the outgroup-as-pathogen-cue hypothesis using only a small number of male targets and a two-item assessment of contact comfort via an English-language survey with participants recruited from the U.S. and India. Consistent with van Leeuwen and Petersen, but sampling from different populations, using larger stimulus pools and broader assessments of contact comfort, and presenting materials in participants' native languages, we did not detect effects supportive of the outgroup-as-pathogen-cue hypothesis. Nevertheless, many of our other findings were consistent with those from previous studies in the behavioral immune system literature. For example, contact comfort was negatively related to pathogen disgust sensitivity (Tybur et al., 2020; van Leeuwen & Jaeger, 2022), and was lower for faces manipulated to appear infectious relative to those unmanipulated (e.g., van Leeuwen & Petersen, 2018; van Leeuwen & Jaeger, 2022). Hence, while results indicated that people are more motivated to avoid microbe-sharing contact with individuals possessing symptoms of current infection, they did not reveal evidence that people are motivated to avoid microbe-sharing contact with ethnic-outgroup members more than ethnic-ingroup members.
5.1. Do other findings support the outgroup-as-pathogen-cue hypothesis?
We found that ethnic outgroup targets were rated as slightly more likely to have an infectious disease than were ethnic ingroup targets. However, participants reported no greater discomfort with pathogen-risky contact with outgroup members. This finding complements findings suggesting that people are averse to indirect contact with individuals possessing facial disfigurements known to not be symptoms of infection (Ryan et al., 2012). Here, rather than contact avoidance being higher for targets believed to be non-infectious, contact avoidance was no higher for targets believed to be (slightly) more infectious (cf. Petersen, 2017). Thus, such results did not entirely support the outgroup-as-pathogen-cue hypothesis.
We also detected a small relation between contact comfort and perceptions that a target is similar to individuals in the local community (Bressan, 2021). Although perceived similarity has been interpreted as a continuous measure of outgroupness (Bressan, 2021), it can also reflect myriad factors unrelated to group membership (e.g., facial morphology, eye color, etc.). Further, similarity perceptions could reflect outputs of the behavioral immune system rather than inputs into it, if similarity perceptions partially regulate contact. And, while we also detected a relation between contact comfort and reported frequency of contact with members of the target's ethnic group, the pattern was quadratic. Contact comfort was lowest for participants who reported the least previous contact with people of the target's ethnicity. However, it was lower for participants who reported the most contact frequency than it was for people who reported intermediate contact frequency.
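For readers who want the shape of that quadratic relation in concrete terms, here is a toy fit; the coefficients are invented to mimic the described pattern (lowest comfort at minimal contact, a peak at intermediate contact, a dip at maximal contact), not estimates from the study.

```python
import numpy as np

contact = np.linspace(0, 10, 11)                   # reported contact frequency
comfort = 2.0 + 0.9 * contact - 0.07 * contact**2  # invented inverted-U

coef = np.polyfit(contact, comfort, deg=2)  # fit comfort ~ contact + contact^2
print(np.round(coef, 2))                    # negative leading term: inverted U
print(round(comfort[0], 2), round(comfort[-1], 2), round(comfort.max(), 2))
# low end < high end < intermediate peak, matching the described pattern
```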
5.2. Effects of facemasks
In addition to investigating the effects of group membership and explicit cues of infectious disease on contact comfort, we also tested whether people were more or less comfortable with microbe-sharing contact with targets wearing facemasks. We carried out this latter test because facemasks might be interpreted as indicative of infection risk and/or prosociality, and perhaps differently in a Western versus an East Asian country. Although masked targets were perceived as slightly more likely to be infectious than unmasked targets (and more so among British participants than Chinese participants), we did not detect an effect of facemask wearing on contact comfort. However, the perception of infectiousness of targets wearing a facemask varied across the two samples. As with ethnic outgroups, beliefs about infectiousness in mask wearers might not influence the infection-neutralizing motivations outputted by the behavioral immune system. Alternatively, beliefs about target infectiousness could also be offset by beliefs about the prophylactic effects of facemasks. Future research could distinguish between these possibilities.
5.3. The impact of the COVID-19 pandemic
We collected data in January 2022, when many countries were experiencing a surge in infections caused by the Omicron variants of the SARS-CoV-2 virus. The degree to which pandemic conditions impact the behavioral immune system is an open question (Ackerman, Tybur, & Blackwell, 2021). Nevertheless, this surge – as well as infections over the previous two years – might have led to a general decrease in contact comfort across targets. Even so, any decrease in global contact comfort did not prevent us from observing an effect of infection symptom on contact comfort, nor did it prevent us from observing a relation between pathogen disgust sensitivity and contact comfort (cf. Tybur et al., 2022). Indeed, the relation between pathogen disgust sensitivity and contact comfort observed here (r = −0.28) was nearly identical to that observed in similar studies before the pandemic (e.g., Tybur et al., 2020, r's = −0.22, −0.24, and −0.33 across three studies). The pandemic might have also influenced how masked faces are perceived. Given that wearing a facemask was mandatory in many settings in both the UK and China from 2020 to 2022, the pandemic might have decreased the degree to which a mask is interpreted as providing information regarding infectiousness. Further, the widespread use of facemasks across the world might have also dampened cross-cultural differences in how masks are perceived.
5.4. Limitations and future research
We recruited from the White population in the UK and the East-Asian population in China, and we used White and East-Asian stimuli. Our inferences are thus limited to these two populations, both in terms of targets and perceivers. Some findings suggest that pathogen-avoidance motives only impact antipathy toward members of groups that are sufficiently culturally distant or sufficiently associated with infectious disease (Faulkner et al., 2004; Ji et al., 2019). Even so, UK participants explicitly associated China with infectious disease, as Chinese participants did the UK, perhaps due to the origins of COVID-19 (in the case of China) and the high number of COVID-19 cases and deaths in 2020 and 2021 (in the case of the UK). Further, China and the UK differ markedly along broad cultural variables (Muthukrishna et al., 2020). For these reasons, the UK and China appear suitable for testing even a narrower version of the outgroup-as-pathogen-cue hypothesis that requires additional associations between a target group and cultural differences in pathogens. Nevertheless, future work could certainly test the outgroup-as-pathogen-cue hypothesis using different target groups.
We also used only a single cue to infectiousness – a skin condition intended to mimic the appearance of shingles. Naturally, infectious disease can lead to other symptoms, including other skin changes (e.g., pallor, rashes, jaundice), vocal changes (e.g., hoarseness), and behavioral changes (e.g., lethargy, coughing). Infectiousness and health status can also be detected via other senses, such as olfaction (e.g., body odor; Sarolidou et al., 2020; Zakrzewska et al., 2020) and audition (e.g., voice; Fasoli, Maass, & Sulpizio, 2018). Future studies could test whether the outgroup-as-pathogen-cue hypothesis applies when targets possess different cues to infectiousness.
To date, the literature examining relations between pathogen-avoidance and intergroup biases has largely focused on phenomena such as explicit prejudice (e.g., Huang, Sedlovskaya, Ackerman, & Bargh, 2011; O'Shea et al., 2019) or implicit attitudes (e.g., Faulkner et al., 2004; Klavina et al., 2011). Less work has focused on whether people treat individual outgroup members as if they pose more of a pathogen threat than individual ingroup members. Results reported here and in van Leeuwen and Petersen (2018) cast doubt on the outgroup-as-pathogen-cue interpretation of relations between disgust sensitivity and, for example, anti-immigrant bias. Future work can naturally use approaches apart from contact-comfort ratings to evaluate the outgroup-as-pathogen-cue hypothesis. In the meantime, the field will benefit from generating and testing other hypotheses for explaining why more pathogen-avoidant individuals might feel more negatively toward outgroups.