Monday, June 27, 2022

Access to fast food has often been blamed for the rise in childhood obesity; our econometric evidence suggests that fast food exposure had no effect

Childhood obesity, is fast food exposure a factor? Peter J. Dolton, Wiktoria Tafesse. Economics & Human Biology, June 26, 2022, 101153. https://doi.org/10.1016/j.ehb.2022.101153

Highlights

• Access to fast food has often been blamed for the rise in childhood obesity.

• Possible links to obesity have motivated policies to curb the spread of fast food.

• Spatial and timing data on early fast food outlets in the UK from 1968–86 are used.

• Medically measured data on Body Mass Index for a British Cohort are exploited.

• Our econometric evidence suggests that fast food exposure had no effect.

Abstract: Access to fast food has often been blamed for the rise in obesity, which in turn has motivated policies to curb the spread of fast food. However, robust evidence in this area is scarce, particularly using data from outside the US. It is difficult to estimate a causal effect of fast food given spatial sorting and ever-present exposure. We investigate whether residential access to fast food increased the BMI of adolescents at a time when fast food restaurants started to open in the UK. The time period presents the study with large spatial and temporal differences in exposure as well as plausibly exogenous variation. We merge data on the location and timing of the first openings of all fast food outlets in the UK from 1968 to 1986 with data on objectively measured BMI from the 1970 British Cohort Survey. The relationship between adolescent BMI and the distance from the respondents’ homes to the nearest outlet, as well as the time since opening, is studied using OLS and Instrumental Variables regression. We find that fast food exposure had no effect on BMI. Extensive robustness checks do not change our conclusion.
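A minimal sketch of what the OLS/IV design described above can look like in practice, using Python's linearmodels package. Everything here is illustrative: the column names, the data file, and the instrument (distance to a fast food distribution centre, mentioned in the paper's conclusion) are assumptions, not the authors' code or data.

```python
# Illustrative OLS and IV specifications for a BMI-on-exposure design.
# Hypothetical columns: bmi, dist_ff (km to nearest fast food outlet),
# dist_depot (km to a fast food distribution centre, the instrument),
# female, parental_income.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("bcs_exposure.csv")  # hypothetical analysis file

# OLS benchmark: BMI on distance to the nearest outlet plus controls
# (IV2SLS with no endogenous block reduces to OLS)
ols = IV2SLS.from_formula("bmi ~ 1 + dist_ff + female + parental_income", df).fit()

# IV: instrument outlet distance with distance to a distribution centre
iv = IV2SLS.from_formula(
    "bmi ~ 1 + female + parental_income + [dist_ff ~ dist_depot]", df
).fit(cov_type="robust")

print(ols.summary)
print(iv.summary)
```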


JEL: I120, I190

Keywords: Obesity, Diet

8. Conclusion

This paper studied the relationship between exposure to fast food and adolescent BMI using the BCS and historical data relating to the inception of fast food in Great Britain. The data on the timing of establishment and location of all fast food outlets prior to 1986 allowed us to investigate whether fast food proximity, duration since opening, as well as a generated intensity measure taking into account the proximity and durations of multiple outlets, affect BMI. We do not find any evidence of a positive association between numerous measures of exposure in the home environment and adolescent BMI. This study has filled a gap in the existing literature, which has mostly focused on the distance to ever-present fast food restaurants using American data.

Our results are robust to instrumenting for the distance to one’s closest fast food outlet with the distance to a fast food distribution centre. Additionally, one company, Wimpy, suddenly increased its number of fast food outlets, which did not allow for strategic timing and siting of its outlets. Restricting the analysis to Wimpy outlets confirms the zero results. The lack of a relationship is supported by previous research; see Anderson and Matsa (2011), Fraser et al. (2012), Lee (2012), Dunn et al. (2012), and Asirvatham et al. (2019).

There are several potential explanations for our null findings. Firstly, the effect may be highly context specific (see Dunn, 2010; Anderson and Matsa, 2011; Dunn et al., 2012; Grier and Davis, 2013). Adolescents may have less control over food choices in their home environment compared to their school environment. Our findings might differ from studies using American data, where fast food is eaten more frequently than in the UK (Fraser et al., 2012). In fact, our analysis does not find support for fast food proximity having a meaningful impact on the frequency of takeaway consumption. Alternatively, the time period of our study may not translate into large effects on obesity, as only a small proportion of our sample was exposed to fast food very near their home, particularly at younger ages. Specifically, the lack of weight gain during our study period may be explained by fast food being a novelty, or by it not being as cheap relative to other foods as it is today, which may have produced different consumption patterns (Wiggins et al., 2015). Moreover, it is uncertain how well diets and caloric expenditure during the 1980s compare to current levels and how this may interact with access to fast food establishments of different size and scope. Additionally, the population studied might not have a large propensity to gain weight when exposed to fast food, given that positive effects have been found for specific sub-groups such as ethnic minority urban youth (Currie et al., 2010; Grier and Davis, 2013) and youth living in the poorest and one of the least healthy American states (Alviola et al., 2014). Our reduced sample size does not permit us to conduct a heterogeneity analysis.

We contribute to the existing literature, often based on data for smaller geographical areas, by using nationwide data. Our results are based on the sample of BCS respondents who did not relocate in the last 6 years and for whom anthropometric and postcode information is available. However, it should be noted that this population could differ from the overall nationally representative sample. Various types of food outlets have been shown to cluster together (Hobbs et al., 2019a). Therefore, a limitation is that we are unable to model the direct effects of community determinants of body weight, such as the commercial food environment or access to exercise-inducing spaces, due to limited area-level information in the BCS and the lack of supplementary historical data. Furthermore, our small sample size and rich set of controls do not allow us to control for local area fixed effects.

Keeping these caveats in mind, there is no evidence that the introduction of fast food induced any behavioural change resulting in weight gain amongst adolescents in the UK in the 1980s. Our overall findings are supported by the decrease in total calories purchased since the 1980s (Griffith et al., 2016). Thus, we suggest that it is unlikely that access to fast food caused the British obesity epidemic. Half of local government areas in England have enacted policies to curb takeaway food outlets, which, for example, restrict new outlets from opening in designated exclusion zones around places used by children (Keeble et al., 2019). However, despite such policies being common, there is a scarcity of literature evaluating their effects; an exception is Sturm and Hattori (2015), which showed that zoning interventions do not deliver the expected results (Keeble et al., 2019). Thus, our paper supplements the evidence base regarding the lack of a relationship between changing access to fast food and childhood and adolescent obesity, which suggests that complementary interventions need to be considered.

Sunday, June 26, 2022

Compared to moderate levels of drinking, both abstinence and heavier drinking in late adolescence/early adulthood predicted greater likelihood of lifetime childlessness and fewer children

Alcohol consumption at age 18-25 and number of children at a 33-year follow-up: individual and within-pair analyses of Finnish twins. Richard J. Rose et al. Alcoholism: Clinical and Experimental Research, June 19, 2022. https://doi.org/10.1111/acer.14886

Abstract

Background: Do drinking patterns in late adolescence/early adulthood predict lifetime childlessness and number of children? Past research is but tangentially relevant, inconsistent in results, and compromised in design. Genetic and environmental confounds are poorly controlled; covariate effects of smoking and education are often ignored; males are understudied; population-based sampling is rare; and long-term prospective studies with genetically informative designs are yet to be reported.

Method: In a 33-year follow-up, we linked drinking patterns of >3,500 Finnish twin pairs, assessed at ages 18-25, to registry data on their eventual number of children. Analyses distinguished associations of early drinking patterns with lifetime childlessness from those predictive of family size. Within-twin pair analyses used fixed-effects regression models to account for shared familial confounds and genetic liabilities. Childlessness was analyzed with Cox proportional hazards models and family size with Poisson regression. Analyses within-pairs and of twins as individuals were made before and after adjustment for smoking and education, and for oral contraceptive use in individual-level analyses of female twins.
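A rough sketch of the analysis pipeline this Methods paragraph describes (Cox model for childlessness, Poisson models for family size, and pair fixed effects for the within-pair contrast), using Python's lifelines and statsmodels packages. Column names and the data file are hypothetical; this is not the authors' code.

```python
# Hypothetical columns: years_to_first_child, had_child (1 = first birth observed,
# 0 = censored, i.e., still childless at follow-up), drinking_level, smoker,
# education, n_children, pair_id.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

twins = pd.read_csv("finnish_twins.csv")  # hypothetical analysis file

# Lifetime childlessness: Cox proportional hazards for time to first child
cph = CoxPHFitter()
cph.fit(
    twins[["years_to_first_child", "had_child", "drinking_level", "smoker", "education"]],
    duration_col="years_to_first_child",
    event_col="had_child",
)
cph.print_summary()

# Family size among parents: individual-level Poisson regression
parents = twins[twins["n_children"] > 0]
poisson = smf.poisson("n_children ~ drinking_level + smoker + education", data=parents).fit()
print(poisson.summary())

# Within-pair contrast: twin-pair fixed effects absorb shared familial and
# genetic background, mimicking the fixed-effects logic described above
within = smf.poisson("n_children ~ drinking_level + smoker + C(pair_id)", data=parents).fit()
```

The fixed-effects step is only a stand-in for the paper's within-pair models; with thousands of pairs a conditional or demeaned estimator would be the practical choice.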

Results: Baseline abstinence and heavier drinking significantly predicted lifetime childlessness in individual-level analyses. Few abstinent women used OCs, but they were nonetheless more often eventually childless; adjusting for smoking and education, abstinence-childless associations remained. Excluding childless twins, Poisson models of family size found heavier drinking at 18-25 predictive of fewer children in both men and women. Those associations replicated in within-pair analyses of DZ twins, each level of heavier drinking associated with smaller families. Among MZ twins, associations of drinking with completed family size yielded effects of similar magnitude, reaching significance at highest levels of consumption, ruling out familial confounds.

Conclusions: Compared to moderate levels of drinking, both abstinence and heavier drinking in late adolescence/early adulthood predicted greater likelihood of lifetime childlessness and fewer children. Familial confounds do not fully explain these associations.


Sociodemographic data from an agropastoralist Buddhist population in western China: Religious celibacy brings inclusive fitness benefits

Religious celibacy brings inclusive fitness benefits. Alberto J. C. Micheletti et al. Proceedings of the Royal Society B: Biological Sciences, June 22 2022. https://doi.org/10.1098/rspb.2022.0965

Abstract: The influence of inclusive fitness interests on the evolution of human institutions remains unclear. Religious celibacy constitutes an especially puzzling institution, often deemed maladaptive. Here, we present sociodemographic data from an agropastoralist Buddhist population in western China, where parents sometimes sent a son to the monastery. We find that men with a monk brother father more children, and grandparents with a monk son have more grandchildren, suggesting that the practice is adaptive. We develop a model of celibacy to elucidate the inclusive fitness costs and benefits associated with this behaviour. We show that a minority of sons being celibate can be favoured if this increases their brothers' reproductive success, but only if the decision is under parental, rather than individual, control. These conditions apply to monks in our study site. Inclusive fitness considerations appear to play a key role in shaping parental preferences to adopt this cultural practice.

4. Discussion

Taken together, our analyses show that lifelong celibacy can be adaptive under certain conditions. Men with a monk brother have more children and men who sent one of their sons to the monastery have more grandchildren. These effects are strongly significant despite a three-child policy introduced in this area in the late 1980s. With our inclusive fitness model, we have shown that a substantial minority of men can be favoured by selection to be celibate, when the decision is under parental control and when having monk brothers makes men more competitive, leading to higher reproductive success. These conditions are met in our study population, suggesting that this cultural practice has been shaped heavily by the inclusive fitness interests of the monks' parents.
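The core trade-off in the model can be illustrated with a toy Hamilton's-rule comparison. This is only a caricature of the authors' demographically explicit model, but it captures why celibacy is more easily favoured when parents, rather than the son himself, make the decision: the son discounts his brothers' gains by relatedness, whereas parents are equally related to all of their sons. The numbers below are invented for illustration.

```python
# Toy comparison: a celibate son pays fitness cost c (his own lost reproduction)
# and his brothers gain total benefit b.
def favoured_by_son(b, c, r_sib=0.5):
    # The son weighs his brothers' gain, discounted by relatedness (~0.5),
    # against his own full cost.
    return r_sib * b > c

def favoured_by_parents(b, c):
    # Parents are equally related to the celibate son and to his brothers,
    # so only the raw benefit and cost matter.
    return b > c

b, c = 1.5, 1.0  # illustrative: modest benefit to brothers, full cost of celibacy
print(favoured_by_son(b, c))      # False: the son would not choose celibacy
print(favoured_by_parents(b, c))  # True: parents favour sending a son to the monastery
```

Any benefit in the range c < b < 2c falls into this zone of disagreement, which is the parent-offspring conflict the paper formalises.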

Monks may be enhancing the reproductive success of their brothers in at least two non-mutually exclusive ways. First, as monks do not inherit wealth from their parents [32,38], having a celibate brother might reduce male competition over family resources. We have shown elsewhere [37] that, in this population, men with a monk brother are wealthier than men with a non-celibate brother. Here, we have found that they also have more children, which reveals a key role for brother–brother competition over family wealth. The possibility that monks can also provide material benefits to their brother's family—and thus increase the couple's overall reproductive success—by exploiting their prestigious positions cannot be excluded, as monks command great respect in these communities [38]. We did not find that men with a monk brother have more children than only sons; our analysis of women's age at first birth suggests that their wives might be having children earlier. Further investigation is required to clarify the avenues through which monks benefit their natal families. In other societies, individuals who engage more frequently in religious acts have been shown to have more numerous supportive relations [48,49]. In our case, the fact that only sons and men with a monk brother have the same reproductive success suggests that no such social network effects are present, or that they are unlikely to be important.

Previous research on celibacy suggested that lifelong abstinence could lead to greater lineage survival [5,22–24]. Our sociodemographic analysis has clearly demonstrated that having a celibate child or sibling can be associated with higher reproductive success. Our results help clarify what conditions are necessary for celibacy to appear and be maintained through kin-selected benefits. Genealogical analyses of Medieval and Early Modern European nobility have shown that more children were directed to religious careers in higher social strata [22] and a comparison of two French noble families has suggested that lineages with more celibates were more likely to persist [23]. Both our model and data have shown that celibacy can appear and be maintained in a society without social stratification and hypergamy, two factors that have previously been suggested to be crucial [22,23,50,51]. It has been argued that psychological reinforcement mechanisms and costly ostracism in the case of abandonment of the monastery are key for religious celibacy to appear and be maintained, and they might be used as proximate mechanisms for parents and religious institutions to enforce their own interests [4]. Census data of Catholic priests in nineteenth-century Ireland have shown that families who sent at least one son to the seminary were larger, richer and more likely to own land [24]. By contrast, in the present-day United States, Catholic priests tend to come from larger but poorer families [5]. We did not find an effect of wealth in our population: the number of yaks owned by a household does not seem to mediate the effect of having a monk brother or son on reproductive success. Notice, however, that we used current wealth as a proxy for wealth at the time of the celibacy decision, because our data are not longitudinal.

Our model has shown that selection favours celibacy only if it relaxes competition within the monk's family, not within the wider social group. Just like infanticide by parents did not evolve for population regulation [12,52], committing one's son to religious celibacy cannot be favoured by selection for the ‘good of the group’. Moreover, we have shown formally that parents and offspring are indeed in conflict over religious celibacy, with parents favouring higher levels of altruism, analogously to what has been shown for other behaviours subject to parent–offspring conflict [46,47,53–55]. By developing a demographically explicit model employing the latest inclusive fitness methodologies [25,27–31], we have also clarified that dispersal rates influence celibacy decisions when under individual control (and, since only the celibate's brothers benefit from his altruism, celibacy can be favoured even when males never disperse, cf. [25,45,56,57]; see electronic supplementary material for details). On the other hand, dispersal rates have no effect in the more likely scenario when parents decide. In this case, only costs and benefits matter because parents are equally related to all sons, analogously to what has been shown in models of mother–offspring conflict over offspring size [55].

By elucidating the inclusive fitness costs and benefits associated with celibacy, our analysis has highlighted that parent–offspring conflict over the decision is substantial and that only when parents win that conflict would a reasonable proportion of the population become monks. In our population—and in the context of several other religions worldwide [4]—parents induced sons to become monks at a young age, configuring this behaviour as a form of parental manipulation [12]. Several studies have revealed discriminative parental solicitude in a range of contexts that may share similar patterns to the case studied here. Parents often penalize later born sons in regard to care and other investments, including wealth inheritance [14–18,58–61]. For example, Gibson & Gurmu [61] have shown that, in Ethiopia, competition between brothers has adverse effects on later born sons when land is inherited from fathers, but not when it is assigned by the government. In our population, as new and more remunerative job opportunities become available in nearby towns and cities, competition between brothers over family resources may be declining, the incentives for celibacy thus decreasing and parent–offspring conflict gradually disappearing—contributing to a gradual abandonment of the practice.

We have shown that religious celibacy can be adaptive: so why is this practice not more widespread? Two non-mutually exclusive reasons exist. First, as we have discussed above, the conditions for it to be adaptive are not met everywhere or—as could be the case for Europe—were once met but are not any longer. The Tibetan plateau is a harsh environment where competition between siblings for parental resources is likely to be high. Second, religious celibacy as a culturally recognized option needs to be available to a population for it to be adopted as a parental discrimination strategy. In our population, Tibetan Buddhism affords this opportunity. In Medieval and Early Modern Europe, Catholic Christianity also offered this way for parents to suppress their children's reproduction [22,23]. Practices are adopted when they are in line with an individual's interests and, when they are not, they are either abandoned or altered. In this regard, the cases of religions that dropped celibacy requirements for their practitioners—like Protestant Christianity or Japanese Zen Buddhism—are a promising avenue for future research.

Much of the current literature on the evolution of cultural phenomena focuses on transmission biases [62,63] as potential proximate mechanisms for cultural change. However, that framework does not have the power or generality of inclusive fitness theory to help us understand the design and diversity of cultural phenotypes along ecological lines. Humans are strategic in terms of the design or acceptance of cultural traits, adopting those that satisfy preferences that are beneficial to their fitness, as also suggested by other recent work [64–66]. So inclusive fitness remains a framework with potential predictive power with respect to the design of cultural phenotypes. Behavioural ecology models have long been used to increase our understanding of the diversity of human behaviour, including cultural behaviour [42], and here we have shown that inclusive fitness interests appear to play a role in shaping both parental preferences and the design of a costly religious institution. Inclusive fitness can help us to make predictions about the phenotypes of cultural institutions that develop in human populations [42].

Evidence that accounts for misreporting: Our preferred estimate suggests reduced smoking accounts for 6% of the concurrent rise in obesity

Does quitting smoking increase obesity? Evidence that accounts for misreporting. Rusty Tchernis, Keith Teltser, Arjun Teotia. Southern Economic Journal, June 25 2022. https://doi.org/10.1002/soej.12591

Abstract: Studying the relationship between smoking and obesity, the leading causes of preventable deaths in the U.S., helps assess the potential unintended consequences of policy efforts to reduce smoking. Because existing literature is mixed among studies using experimental and observational data, we investigate the role of misreporting in observational data. We use the Behavioral Risk Factor Surveillance System, cigarette taxes to instrument for smoking, and survey completion to instrument for misreporting. Starting with the seminal two-stage least squares (2SLS) approach, our estimates similarly suggest quitting smoking substantially reduces body mass index (BMI). However, the relationship between cigarette taxes and BMI has shrunk over time, and the 2SLS estimates are sensitive to specification, functional form, and misreporting. Accounting for misreporting using the 2-step estimator from Nguimkeu et al. (2019) yields estimates consistent with the experimental literature; quitting smoking modestly increases BMI. Our preferred estimate suggests reduced smoking accounts for 6% of the concurrent rise in obesity.
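A hedged sketch of the 2SLS component this abstract refers to, again with the linearmodels package; the misreporting correction (the Nguimkeu et al. 2-step estimator) is not shown, and all column names are assumed placeholders for a BRFSS-style extract rather than the authors' code.

```python
# Hypothetical columns: bmi, smoker (self-reported), cig_tax (state cigarette tax,
# the instrument), age, income, year, state.
import pandas as pd
from linearmodels.iv import IV2SLS

brfss = pd.read_csv("brfss_extract.csv")  # hypothetical analysis file

# Instrument self-reported smoking with the state cigarette tax,
# clustering standard errors by state
iv = IV2SLS.from_formula(
    "bmi ~ 1 + age + income + C(year) + [smoker ~ cig_tax]", brfss
).fit(cov_type="clustered", clusters=brfss["state"])
print(iv.summary)
```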


Saturday, June 25, 2022

About 11pct of participants evaluated their conscientiousness as adequate, 75pct thought it was "too high" (?!)

Too little, just right or too much? Assessing how people evaluate their conscientiousness levels. Sofie Dupré et al. Personality and Individual Differences, Volume 197, October 2022, 111789. https://doi.org/10.1016/j.paid.2022.111789

Abstract: Recent advances in personality theory and research have led to the introduction of the “Too-much-of-a-good-thing-effect” in the relationship between conscientiousness and desirable outcomes, challenging the “more is better” idea that has been dominating research on this trait for a long time. Thus, the question arises as to how people evaluate their conscientiousness levels themselves, more specifically, whether they regard their trait levels as “too little”, “the right amount”, or “too much”. The current study describes how an existing personality inventory can be adjusted to explore such evaluations of conscientiousness levels by incorporating a too little/too much response format. The structural characteristics of this new assessment approach are examined and compared against responses that are collected using a traditional Likert rating format asking people to describe themselves. Results show that – in this sample (N = 367) – about 11 % of participants evaluated their conscientiousness as adequate, whereas the majority (75 %) indicated it to be too high. Further, the “right amount” of conscientiousness was most frequently associated with a 7 on a 9-point Likert scale, while very high Likert-scale ratings of 9 were regarded as “too much” in over three-fourths of the ratings. Implications and directions for future research are discussed.

Introduction

High conscientiousness is a reliable correlate of multiple desirable outcomes in different areas of people's lives, such as physical and psychological well-being (Friedman, Kern, & Reynolds, 2010), academic success (Mammadov, 2021), and work performance (Wilmot & Ones, 2019). Although these conscientiousness-outcome relations have been deemed positive and linear in most research (i.e., “more is better”), contradictory evidence from recent lines of theoretical and empirical work indicates that there may be such a thing as being too conscientious (e.g., Carter et al., 2014; Carter, Guan, Maples, Williamson, & Miller, 2016; Denissen et al., 2018; Le et al., 2011). This phenomenon – also referred to as the “Too-much-of-a-good-thing-effect” (TMGT effect; Pierce & Aguinis, 2013) – raises the question of how people feel about or evaluate their conscientiousness levels. While current personality assessment instruments provide valuable insights into people's positions on the underlying trait dimension, they cannot capture such evaluations (i.e., whether people regard their trait levels as “too little”, “the right amount”, or “too much”). The current study is a first effort to explore how people evaluate their conscientiousness levels by introducing the too little/too much rating scale (Kaiser & Kaplan, 2005) in personality assessment.

Although the TMGT-effect manifests in different life domains (Pierce & Aguinis, 2013), evidence for this effect regarding conscientiousness has mainly been reported in the work setting, where curvilinear relationships have been demonstrated with various indicators of job performance. That is, several studies demonstrate that employees with moderate conscientiousness levels tend to perform better than those with very high levels (Carter et al., 2014; Denissen et al., 2018; Le et al., 2011). Highly conscientious employees may be so perfectionist that they waste time on unimportant details. Importantly, Le et al. (2011) found that higher levels of conscientiousness only become detrimental to performance in low-complexity jobs, which demonstrates the importance of the work context in explaining the nature of the curvilinear relationship. Similarly, Denissen et al. (2018) showed that high conscientiousness in employees has a detrimental effect on performance when conscientiousness exceeds the level their job demands, which provides further evidence for the significance of a good match between personality and the job context.

While the issue of curvilinearity has been mainly addressed in research on the conscientiousness-performance relationship, it is not exclusive to the work context. For example, Carter et al. (2016) found a similar trend in the relation between conscientiousness and psychological well-being. Taken together, these findings raise the possibility that people who report higher conscientiousness levels potentially experience these trait levels as a hindrance in life, and may feel that lower levels are more desirable.

For decades, psychologists have been in pursuit of valid and reliable ways to assess people's personality, but despite the emergence of a multitude of instruments, none have aimed to capture the way people evaluate their reported trait levels. While Likert scales are designed to grasp the person's actual position on the underlying trait, they are not intended to reveal discrepancies between actual trait levels and desired trait levels. For example, people who rate themselves as highly conscientious on a Likert scale may regard these levels as either adequate or potentially “too high”. Similarly, people with lower Likert-scale rated levels are not necessarily dissatisfied with these levels. Previous efforts have been made toward better operationalizing potential adaptive and maladaptive functioning at both poles of a trait dimension, such as the Five Factor Form (FFF) and the Sliderbar Inventory (SI) (Rojas & Widiger, 2018). However, like the Likert scale, these scales are still aimed at – more accurately – describing people's trait levels.

Similar remarks were made previously by Kaiser and Kaplan (2005), who aimed to assess deficiency and/or excess in leadership behavior. In their study, they suggested a new rating scale format: the too little/too much rating scale, ranging from −4 (too little) to 0 (the right amount) and +4 (too much) (Kaiser & Kaplan, 2005). The usefulness of this scale lies in its ability to tap into perceived overdoing or underdoing, which are areas of functioning not captured by the Likert scale rating format. In the current study, our goal is to introduce this scale in personality assessment to explore employees' evaluations of their conscientiousness levels.

The current study presents the development and validation of a personality instrument that focuses on mapping people's evaluations of their conscientiousness levels (“too much”, “too little”, or “the right amount”) rather than the actual conscientiousness levels (1 = very low; 9 = very high). The characteristics of this instrument will be investigated in two ways. First, the factor structure of the TLTM-rated version will be compared to that of the traditional Likert-rated version of the instrument. Second, associations between ratings on TLTM- and Likert- versions of the instrument will be investigated to build an understanding of how people evaluate different conscientiousness trait levels.
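As a concrete (hypothetical) illustration of the second comparison, the cross-tabulation below shows how one might relate Likert-rated levels to too little/too much evaluations; the column names and data file are assumptions, not the study's materials.

```python
# Each respondent/item rated once on a 1-9 Likert scale (likert) and once on the
# -4 (too little) ... 0 (right amount) ... +4 (too much) scale (tltm).
import pandas as pd

ratings = pd.read_csv("conscientiousness_ratings.csv")  # hypothetical file

# Collapse the TLTM scale into its three evaluative regions
ratings["evaluation"] = pd.cut(
    ratings["tltm"],
    bins=[-4.5, -0.5, 0.5, 4.5],
    labels=["too little", "right amount", "too much"],
)

# Which Likert levels are most often judged "the right amount"?
print(pd.crosstab(ratings["likert"], ratings["evaluation"], normalize="index"))
```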

There is no empirical justification for declaring believers in the paranormal to be crazy

Variations in Well-Being as a Function of Paranormal Belief and Psychopathological Symptoms: A Latent Profile Analysis. Neil Dagnall et al. Front. Psychol., June 24, 2022. https://doi.org/10.3389/fpsyg.2022.886369

Abstract: This study examined variations in well-being as a function of the interaction between paranormal belief and psychopathology-related constructs. A United Kingdom-based, general sample of 4,402 respondents completed self-report measures assessing paranormal belief, psychopathology (schizotypy, depression, manic experience, and depressive experience), and well-being (perceived stress, somatic complaints, and life satisfaction). Latent profile analysis identified four distinct sub-groups: Profile 1, high Paranormal Belief and Psychopathology (n = 688); Profile 2, high Paranormal Belief and Unusual Experiences; moderate Psychopathology (n = 800); Profile 3, moderate Paranormal Belief and Psychopathology (n = 846); and Profile 4, low Paranormal Belief and Psychopathology (n = 2070). Multivariate analysis of variance (MANOVA) found that sub-groups with higher psychopathology scores (Profiles 1 and 3) reported lower well-being. Higher Paranormal Belief, however, was not necessarily associated with lower psychological adjustment and reduced well-being (Profile 2). These outcomes indicated that belief in the paranormal is not necessarily non-adaptive, and that further research is required to identify the conditions under which belief in the paranormal is maladaptive.
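For readers unfamiliar with latent profile analysis, the sketch below shows the general workflow (standardise indicators, compare solutions with different numbers of profiles by an information criterion, assign respondents to profiles) using scikit-learn's Gaussian mixture model as a stand-in for dedicated LPA software. Column names and the data file are hypothetical; this is not the authors' code.

```python
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

cols = ["paranormal_belief", "unusual_experiences", "depression", "mania"]  # assumed indicators
df = pd.read_csv("survey.csv")  # hypothetical analysis file
X = StandardScaler().fit_transform(df[cols])

# Compare 1-6 profile solutions by BIC (lower is better)
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))

# Retain the chosen solution (four profiles in the study) and assign respondents,
# e.g., for the follow-up MANOVA on well-being measures
best = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X)
df["profile"] = best.predict(X)
```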

Discussion

Emergent subgroups reflected subtle variations in paranormal belief and psychopathology, which were associated with differences on well-being measures. Specifically, Profile 1 (high Paranormal Belief and Psychopathology) indexed lower well-being in comparison with the other profiles (Profiles 2–4). Contrastingly, Profile 4 (low Paranormal Belief and Psychopathology) evidenced greater well-being vs. the other profiles (Profiles 1–3). Profile 3 (moderate Paranormal Belief and Psychopathology) indexed lower well-being than Profile 2 (high Paranormal Belief and Unusual Experiences; moderate Psychopathology), suggesting that belief in the paranormal does not necessarily undermine psychological adjustment. Additionally, results indicated that believers are a heterogeneous rather than homogeneous population.

Zero-order correlations were consistent with preceding research. Paranormal Belief demonstrated a similar pattern of associations with the short-form O-LIFE subscales to those reported by Dagnall et al. (2016c). Particularly, Paranormal Belief was most strongly related to Unusual Experiences, correlated with Cognitive Disorganisation and Impulsive Non-conformity, but was not significantly associated with Introvertive Anhedonia. These outcomes correspond to general dimensional models of schizotypy (Kwapil et al., 2008). For instance, they are consistent with the distinction between positive (i.e., unusual experiences, perceptions, beliefs, and magical thinking) and negative (i.e., withdrawal and attenuated ability to experience pleasure) factors.

The absence of an association between Introvertive Anhedonia and Paranormal Belief is explained by the fact that negative features reflect the tendency to gain less satisfaction from engaging in effortful and deliberative thought (Broyd et al., 2019). Thus, in comparison to positive schizotypy, which is associated with the production of unusual experiences, perceptions, beliefs and magical thinking, negative schizotypy is less cognitive (Broyd et al., 2019). This, in part, explains why positive characteristics are conducive to the generation and maintenance of paranormal beliefs, whereas negative features are unlikely to directly influence supernatural credence. Future research is required to assess the extent to which differences in cognitive engagement influence belief in the paranormal.

Examination of profiles indicated that belief and psychopathological factors interacted in complex ways. Respondents high in Paranormal Belief were differentiated by elevated global (Profile 1) vs. specific (Unusual Experiences) (Profile 2) Psychopathology scores. The presence of a profile characterised by high Unusual Experiences aligns with Loughland and Williams (1997). The Unusual Experiences subscale reflects mainly positive schizotypal characteristics such as perceptual distortions and magical thinking, which align with the reality distortion syndrome of positive schizophrenic symptoms (Liddle, 1987; Loughland and Williams, 1997). Perceptual distortions represent an attenuated form of hallucination, and magical thinking signifies a weaker form of delusional thinking.

In the present study, Profile 2 attributes were associated with higher levels of well-being than the global high (Profile 1) and moderate psychological adjustment (Profile 3) subgroups. This suggests that high Paranormal Belief is not necessarily concomitant with lower psychological adjustment and reduced well-being. Caution is required, however, when drawing comparisons with Loughland and Williams (1997), since they used agglomerative hierarchical clustering rather than LPA, and their analysis considered only schizotypy.

Despite this caveat, the presence of differing high-belief profiles has important implications for subsequent research, as they are differentially associated with well-being. The presence of a Paranormal sub-group with relatively low Psychopathology scores is consistent with the high levels of supernatural endorsement observed in general populations. It also aligns with the notion that paranormal beliefs in non-clinical samples represent non-psychotic delusions (Irwin et al., 2012a,b). In this context, beliefs often arise from reality testing deficits where individuals fail to adequately assess the validity of propositions and the evidence from which they derive (Dagnall et al., 2015; Drinkwater et al., 2020). Thus, beliefs alone reflect thinking style preferences rather than variations in psychopathology.

Using LPA to study paranormal belief and psychopathology is conceptually significant because the method recognises that individuals, as a result of their life histories, vary on both constructs. This is important as paranormal belief and psychopathology may concurrently influence psychological adjustment and well-being. Hence, identifying differing profiles advances knowledge by clarifying how specific combinations of paranormal belief and psychopathology relate to well-being. In this instance, the profiles demonstrate that although higher Paranormal Belief and psychopathology generally relate to lower well-being, high Paranormal Belief is not inevitably accompanied by poorer psychological functioning and lower well-being.

This conclusion is consistent with related work postulating the existence of happy or benign schizotypes: individuals who experience psychotic-like experiences as rewarding and enhancing, and who (relative to the population means) score extremely high on the positive characteristics but below average on the negative and cognitive/disorganised factors (see Claridge, 2018; Grant and Hennig, 2020).

Limitations

A limitation concerns the relative distributions of Paranormal Belief and psychopathology-related scores. Explicitly, Paranormal Belief exhibited greater variation compared with psychopathology measures such as schizotypy. Though schizotypy sum totals were analogous to established norms (Mason et al., 2005), range restriction existed because participants came from a non-clinical, general population. In addition, differences existed as a function of the number of items per measure (e.g., Paranormal Belief 26-items vs. Unusual Experiences 12-items). While scaled means were utilised to minimise this (which is advocated with LPA; Uckelstam et al., 2019), high scores on variables should be interpreted as relative rather than absolute.

Moreover, recoding continuous data to create meaningful profiles can lead to information loss (Lanza and Rhoades, 2013). The profiles in this study were statistically and conceptually meaningful; however, it is necessary to guard against reification. Particularly, LPA profiles relate to heterogeneity across a model’s variables, not subtypes of individuals in the population (Lanza and Rhoades, 2013). Too few or too many profiles can be identified through LPA, and it would be valuable for subsequent research to corroborate the current findings by replication and cross-validation (Collins et al., 1994).


Low fertility has persisted in Japan for decades; inactive sexual lives within intimate and committed relationships may be linked to sexual activity outside such relationships (casual sex)

Casual Sex and Sexlessness in Japan: A Cross-Sectional Study. Shoko Konishi et al. Sexes 2022, 3(2), 254-266; May 1 2022. https://doi.org/10.3390/sexes3020020


Abstract: Low fertility has persisted in Japan for decades. Sexless marriages may indirectly contribute to low fertility. Inactive sexual lives within intimate and committed relationships may be linked to sexual activity outside such relationships, called “casual sex”. This study aimed to explore the correlates of casual sex and sexlessness. A web-based questionnaire survey was conducted among married and single men (n = 4000) aged 20–54 years in Japan. Sexlessness was reported by 56% of men, whereas 11% had had casual sex and 31% had had non-casual sex (with spouse, fiancé, or girlfriends/boyfriends) in the last month. Among married men, higher income and long working hours were positively associated with casual sex. Regarding never-married men: those with lower educational status and without full-time jobs were more likely to report casual sex, those in rural areas were more likely to be sexless than those in urban and suburban areas, and those with depression were more likely to be sexless than those without depression. Matching app use was strongly associated with casual sex among married and never-married men, suggesting that such tools may facilitate sexual activity outside committed and intimate relationships. Sexual behavior is closely linked to one’s social and economic environment and health status.


Keywords: Asia; geographic factors; health; Japan; male; marriage; sexual behavior; social networking; socioeconomic status


Friday, June 24, 2022

From 2020... All over the world, people overestimate how negatively their political opponents view their side

Ruggeri, Kai, Bojana Većkalov, Lana Bojanić, Thomas L. Andersen, Sarah Ashcroft-Jones, Nélida Ayacaxli, Paula Barea Arroyo, et al. 2020. “The General Fault in Our Fault Lines.” OSF Preprints. September 8. doi:10.31219/osf.io/xvksa

Abstract: A pervading global narrative suggests that political polarisation is increasing in the US and around the world. Beliefs in increased polarisation impact individual and group behaviours regardless of whether they are accurate or not. One driver of polarisation is beliefs about how members of the out-group perceive us, known as group meta-perceptions. A 2020 study by Lees and Cikara in US samples suggests that not only are out-group meta-perceptions highly inaccurate, but informing people of this inaccuracy reduces negative beliefs about the out-group. Given the importance of these findings for understanding and mitigating polarisation, it is essential to test to what extent they generalise to other countries. We assess that generalisability by replicating two of the original experiments in 10,207 participants from 26 countries in the first experiment and 10 countries in the second. We do this by studying local group divisions, which we refer to as fault lines. In line with our hypotheses, results show that the pattern found in the US broadly generalises, with greater heterogeneity explained by specific policies rather than by between-country differences. The replication of a simple disclosure intervention in the second experiment yielded a modest reduction in negative motive attributions to the out-group, similar to the original study. These findings indicate, first, that inaccurate and negative group meta-perceptions are exhibited in a large number of countries, not only the US, and, second, that informing individuals of their misperceptions can yield positive benefits for intergroup relations. The generalisability of these findings highlights a robust phenomenon with major implications for political discourse worldwide.


Men tended to report more problematic pornography use than women; sexual minority men and women tend to report more PPU than heterosexual men and women

Understanding Differences in Problematic Pornography Use: Considerations for Gender and Sexual Orientation. Nicholas C. Borgogna et al. The Journal of Sexual Medicine, June 23 2022. https://doi.org/10.1016/j.jsxm.2022.05.144

Abstract

Background: While preliminary research suggests non-heterosexual men and women view more pornography than their heterosexual counterparts, few studies have examined how problematic use differs across sexual and gender identity groups.


Aim: We sought to test measurement invariance across popular measures of problematic pornography use (PPU) and examine mean PPU differences across heterosexual men, non-heterosexual men, heterosexual women, and non-heterosexual women.


Methods: We used 3 large archival datasets to examine psychometrics/group differences on the Brief Pornography Screen (BPS; N = 1,439), Problematic Pornography Use Scale (PPUS; N = 5,859), and Cyber Pornography Use Inventory-4 (CPUI-4; N = 893).


Outcomes: Most PPU scales/subscales demonstrated acceptable fit, and non-heterosexual men and women tended to report more PPU than heterosexual men and women (though exceptions were evident).


Results: Confirmatory factor analyses revealed good fit across each group and instrument, with the exception of sexual minority women on the CPUI-4. Each instrument demonstrated at least metric invariance between groups, with the exception of one item between heterosexual and sexual minority men on the CPUI-4. Mean differences suggested that sexual minority men and women tend to report more PPU than heterosexual men and women, though several exceptions were evident depending on the PPU dimension. Men tended to report more PPU than women, though exceptions were also evident. Effect sizes ranged from large to non-significant depending on the PPU dimension.
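A simplified sketch of the first (configural) step behind results like these: fit the same one-factor CFA separately in each group and inspect fit, here with the Python semopy package and hypothetical item names. Full metric and scalar invariance testing constrains loadings and intercepts across groups and is better done in dedicated SEM software; this is not the authors' code.

```python
import pandas as pd
import semopy

desc = "PPU =~ item1 + item2 + item3 + item4 + item5"  # assumed item names
data = pd.read_csv("ppu_items.csv")  # hypothetical item-level data with a 'group' column

for label, grp in data.groupby("group"):  # e.g., heterosexual men, sexual minority men, ...
    model = semopy.Model(desc)
    model.fit(grp[["item1", "item2", "item3", "item4", "item5"]])
    print(label)
    print(semopy.calc_stats(model).T)  # chi-square, CFI, RMSEA, etc. per group
```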


Clinical Implications: Researchers and clinicians should consider sexual orientation, gender, and PPU dimension when addressing PPU concerns.


Strengths & Limitations: A primary strength of this study is the use of multiple large samples, meaning our results are likely highly generalizable. However, this study is limited in that it only examined sexual orientation groups broadly and did not account for non-cisgender identities.


Conclusions: The BPS, PPUS, and CPUI-4 are all appropriate tools to measure PPU depending on researcher and clinician needs.


Key Words: Problematic Pornography Use; Sexual Orientation; Gender; Measurement; Compulsive Sexual Behavior Disorder


Check also Are Playboy (and girl) Norms Behind the Relationship Problems Associated with Pornography Viewing in Men and Women? Nicholas C. Borgogna, Tracey Smith, Ryon C. McDermott & Matthew Whatley. Journal of Sex & Marital Therapy, May 7 2020. https://www.bipartisanalliance.com/2020/05/are-playboy-and-girl-norms-behind.html


Overall, rates of mental disorders within clinical, counseling, and school-psychology faculty and trainees were similar to or greater than those observed in the general population

Only Human: Mental-Health Difficulties Among Clinical, Counseling, and School Psychology Faculty and Trainees. Sarah E. Victor et al. Perspectives on Psychological Science, June 22, 2022. https://doi.org/10.1177/17456916211071079

Abstract: How common are mental-health difficulties among applied psychologists? This question is paradoxically neglected, perhaps because disclosure and discussion of these experiences remain taboo within the field. This study documented high rates of mental-health difficulties (both diagnosed and undiagnosed) among faculty, graduate students, and others affiliated with accredited doctoral and internship programs in clinical, counseling, and school psychology. More than 80% of respondents (n = 1,395 of 1,692) reported a lifetime history of mental-health difficulties, and nearly half (48%) reported a diagnosed mental disorder. Among those with diagnosed and undiagnosed mental-health difficulties, the most common reported concerns were depression, generalized anxiety disorder, and suicidal thoughts or behaviors. Participants who reported diagnosed mental disorders endorsed, on average, more specific mental-health difficulties and were more likely to report current difficulties than were undiagnosed participants. Graduate students were more likely to endorse both diagnosed and undiagnosed mental-health difficulties than were faculty, and they were more likely to report ongoing difficulties. Overall, rates of mental disorders within clinical, counseling, and school-psychology faculty and trainees were similar to or greater than those observed in the general population. We discuss the implications of these results and suggest specific directions for future research on this heretofore neglected topic.

Keywords: clinical psychology, mental health, psychopathology, mental illness, prevalence


Thursday, June 23, 2022

Memories with a blind mind: Remembering the past and imagining the future with aphantasia

Memories with a blind mind: Remembering the past and imagining the future with aphantasia. Alexei J. Dawes et al. Cognition, Volume 227, October 2022, 105192, https://doi.org/10.1016/j.cognition.2022.105192

Abstract: Our capacity to re-experience the past and simulate the future is thought to depend heavily on visual imagery, which allows us to construct complex sensory representations in the absence of sensory stimulation. There are large individual differences in visual imagery ability, but their impact on autobiographical memory and future prospection remains poorly understood. Research in this field assumes the normative use of visual imagery as a cognitive tool to simulate the past and future, however some individuals lack the ability to visualise altogether (a condition termed “aphantasia”). Aphantasia represents a rare and naturally occurring knock-out model for examining the role of visual imagery in episodic memory recall. Here, we assessed individuals with aphantasia on an adapted form of the Autobiographical Interview, a behavioural measure of the specificity and richness of episodic details underpinning the memory of events. Aphantasic participants generated significantly fewer episodic details than controls for both past and future events. This effect was most pronounced for novel future events, driven by selective reductions in visual detail retrieval, accompanied by comparatively reduced ratings of the phenomenological richness of simulated events, and paralleled by quantitative linguistic markers of reduced perceptual language use in aphantasic participants compared to those with visual imagery. Our findings represent the first systematic evidence (using combined objective and subjective data streams) that aphantasia is associated with a diminished ability to re-experience the past and simulate the future, indicating that visual imagery is an important cognitive tool for the dynamic retrieval and recombination of episodic details during mental simulation.

Keywords: Aphantasia; Visual imagery; Memory; Imagination; Episodic simulation


Rolf Degen summarizing... Around the age of 5 or 6, it dawns on children that others sometimes consider themselves better than they are, and it displeases them

The better to fool you with: Deception and self-deception. Jade Butterworth, Robert Trivers, William von Hippel. Current Opinion in Psychology, June 9 2022, 101385. https://doi.org/10.1016/j.copsyc.2022.101385

Abstract: Deception is used by plants, animals, and humans to increase their fitness by persuading others of false beliefs that benefit the self, thereby creating evolutionary pressure to detect deception and avoid providing such unearned benefits to others. Self-deception can disrupt detection efforts by eliminating cognitive load and idiosyncratic deceptive cues, raising the possibility that persuading others of a false belief might be more achievable after first persuading oneself. If people self-deceive in service of their persuasive goals, self-deception should emerge whenever persuasion is paramount and hence should be evident in information sharing, generalized beliefs about the self, and intergroup relations. The mechanism, costs, and benefits of self-deceptive biases are explored from this evolutionary perspective.


Wednesday, June 22, 2022

Identity fusion is traditionally conceptualized as innately parochial, with fused actors motivated to commit acts of violence on out-groups, largely conditional on threat perception

The Fusion-Secure Base Hypothesis. Jack W. Klein, Brock Bastian. Personality and Social Psychology Review, June 16, 2022. https://doi.org/10.1177/10888683221100883

Abstract: Identity fusion is traditionally conceptualized as innately parochial, with fused actors motivated to commit acts of violence on out-groups. However, fusion’s aggressive outcomes are largely conditional on threat perception, with its effect on benign intergroup relationships underexplored. The present article outlines the fusion-secure base hypothesis, which argues that fusion may engender cooperative relationships with out-groups in the absence of out-group threat. Fusion is characterized by four principles, each of which allows a fused group to function as a secure base in which in-group members feel safe, agentic, and supported. This elicits a secure base schema, which increases the likelihood of fused actors interacting with out-groups and forming cooperative, reciprocal relationships. Out-group threat remains an important moderator, with its presence “flipping the switch” in fused actors and promoting a willingness to violently protect the group even at significant personal cost. Suggestions for future research are explored, including pathways to intergroup fusion.

Keywords: social identity, identity fusion, intergroup relations, group attachment, secure base


In male–female pairs more men walk to the right of a female, possibly because men prefer to occupy the optimal “fight ready” side

Who goes where in couples and pairs? Effects of sex and handedness on side preferences in human dyads. Paul Rodway & Astrid Schepman. Laterality, Jun 21 2022. https://doi.org/10.1080/1357650X.2022.2090573

Abstract: There is increasing evidence that inter-individual interaction among conspecifics can cause population-level lateralization. Male–female and mother–infant dyads of several non-human species show lateralised position preferences, but such preferences have rarely been examined in humans. We observed 430 male–female human pairs and found a significant bias for males to walk on the right side of the pair. A survey measured side preferences in 93 left-handed and 92 right-handed women, and 96 left-handed and 99 right-handed men. When walking, and when sitting on a bench, males showed a significant side preference determined by their handedness, with left-handed men preferring to be on their partner’s left side and right-handed men preferring to be on their partner’s right side. Women did not show significant side preferences. When men are with their partner they show a preference for the side that facilitates the use of their dominant hand. We discuss possible reasons for the side preference, including males preferring to occupy the optimal “fight ready” side, and the influence of sex and handedness on the strength and direction of emotion lateralization.

Keywords: Evolution; fighting hypothesis; behavioural asymmetry; aggression; leftward gaze

Discussion

In the observational study it was found that in 57% of pairs, men walked on the right side of the pair, and women in 43%. The proportion of men walking on the right was robustly above the chance value of 50%, based on Bonferroni-corrected frequentist inferencing and on Bayesian credible intervals and the Bayes Factor. This shows that humans, like many other species (Regaiolli, Spiezio, Ottolini, Sandri, & Vallortigara, 2021; Zaynagutdinova et al., 2021), exhibit lateralized position preferences when in a pair. In addition, the finding adds to the examples of other human social behaviours, such as kissing, cradling and embracing, which show patterns of lateralization. The greater number of men on the right side is compatible with the readiness to fight hypothesis which predicted that men would want to walk with their dominant hand on the outside of the formation, and because right-handed men are more frequent in a population, there would be more men on the right side of a pair. We evaluate this interpretation in more depth shortly.

For the survey data, the prediction from the readiness to fight hypothesis that men, but not women, would show a preference to have their dominant hand on the outside of the formation, was supported for walking and sitting on a bench. In contrast, there was no evidence for an overall preference to be on the right of a pair, and thus the data were not compatible with a right-hemisphere hypothesis that applies universally to all. The dominant-hand hypothesis also did not receive clear support because the preference to be on the right, when walking and sitting, applied to men but not women. Despite this, the dominant-hand hypothesis could account for the observational and survey data if it is assumed that men get to be on their preferred side more often, perhaps by having a stronger side preference and / or being more assertive. For example, manipulating objects might be more important to men than women, and so men might have a stronger preference to have their dominant hand on the outside of a pair, so it is free to manipulate objects (we are grateful to Dr Tucker Gilman for this suggestion). The stronger effect of handedness for side preferences in bench sitting compared to walking could be related to this. Further research is required to examine these possibilities. Taken together, without including additional moderating influences, the observational data and survey data are more compatible with a readiness to fight hypothesis than with a universal right-hemisphere hypothesis, or the dominant-hand hypothesis.

In the side of the bed preferences, the survey data showed that participants largely avoided “both sides equally” responses. Individual preferences for the side of the bed were not associated with sex or handedness. The bed question is helpful in the interpretation of the data. First, it showed that participants were not responding in a mindless or heuristic way, simply selecting the same response for each question. This provides confidence in the walking and bench data, because it suggests that participants expressed genuine preferences. Second, it showed that the typically habitual, long-lasting, and daily arrangement of a couple in bed did not determine the preferred side when walking and sitting on a bench. Third, although this is debatable, lying in bed is not usually a situation in which a man is required to fight a foe, and therefore this question served as a useful control condition. The observed pattern is compatible with the predictions from a fighting readiness hypothesis, if fighting-related behaviours are context-specific and only instantiated in relevant settings where fighting may be required. However, the bed data may contain some noise, because people tend not to sleep on their backs, and rotating into other positions can reverse the left and right sides, so some caution is required in the interpretation of the bed data.

Some researchers have discussed whether a fighting drive is still relevant in modern Western societies. For example, Faurie and Raymond (2013) suggest that, rather than fighting leading to survival of the individual involved in the fight, in modern non-violent societies, fighting, along with sporting achievements, may instead serve as a ritualized display aimed at attracting mates and promoting procreation. However, evolutionary theorizing accommodates vestigial behaviours that may have conferred Darwinian fitness in the past but that are no longer adaptive (see e.g., Rognini, 2018). A drive to be prepared to fight, even if it is unlikely to be necessary, may still be present in men as a behavioural trait at an implicit level, even if, in most situations, it is not necessary to exhibit the fighting for which the man readies himself.

Other potential explanations of these findings cannot be discounted. One interpretation is that they were caused by sex and handedness differences in emotional lateralization, which caused a stronger side preference in right and left-handed males. There is evidence that males are more strongly lateralized than females (Hirnstein, Hugdahl, & Hausmann, 2019), and stronger leftward perceptual asymmetries in males have been reported with line bisection (Jewell & McCourt, 2000) and the chimeric faces task (Innes, Burt, Birch, & Hausmann, 2016). There is also evidence for sex differences in the lateralization of emotion perception (Bourne, 2005; Burton & Levy, 1989; Rodway et al., 2003; Van Strien & Van Beek, 2000; but see Borod et al., 2001), and in social behaviours involving emotional connections, such as cradling (Packheiser, Schmitz, et al., 2019). It is possible that stronger emotional asymmetries in men may predispose them to more strongly prefer to occupy one side of the pair, because it aids social interaction or the monitoring threats from other males. In relation to this, Marzoli, Prete, and Tommasi (2014) propose that the leftward gaze bias could facilitate the monitoring of the dominant hand of other people, either for aiding communication or for monitoring potentially aggressive acts. A stronger leftward gaze to the hands than to other body parts, when looking at angry bodily postures, is in accord with this suggestion (Calbi, Langiulli, Siri, Umiltà, & Gallese, 2021; see also Lucafò et al., 2021). In addition, men have been found to be more strongly lateralized than women when looking at facial emotions of threat in male faces (Rahman & Anchassi, 2012). However, for stronger emotional lateralization in males to account for the current findings, it would have to be assumed that left-handed males have opposite emotional asymmetries to right-handed males. Some research has found this reversal in left-handers (Willems et al., 2010) while other evidence indicates weaker right hemisphere emotional lateralization, but not a reversal (Elias et al., 1998). In sum, it is apparent that an explanation of the side preference in terms of differences in lateralized emotion processing, due to sex and handedness, is consistent with the current findings and other research.

More research is needed to determine which of these alternative explanations is supported by the evidence. The fighting readiness explanation predicts that men will show a stronger side preference in more threatening environments, such as when it is dark, or when walking through a crowd of unfamiliar people. In contrast, a sex and handedness difference in lateralized emotion processing would not predict a stronger side preference in men under more threatening circumstances, if the asymmetry was operating to facilitate social communication with a partner. However, if it was functioning to facilitate the monitoring of the environment for potential threats from aggressive conspecifics, then a stronger side preference could be expected. In this case, the men’s preference for the side that facilitates the use of their dominant hand, and the monitoring of threats in the environment, could be the manifestation of a general behaviour to be “fight ready”.

Small effect sizes for some of our data could be said to be a limitation of the findings. For example, in the observational study, we observed a Cohen’s h of .13 which is a very small effect size. However, the preference was statistically robust, and the proportion of men on the right (.57) was meaningfully higher than chance (.50). The small effect size is most probably a reason why this subtle bias had not yet been discovered, because it is not noticeable via everyday non-systematic observation. If the effect had been larger, it is likely that people would have noticed the bias routinely in their daily lives. The survey data showed a medium effect size for the key interaction between Sex and Handedness for the walking and bench data (OR = 6.388). This may be because in the survey, people could express their preferences, which were not diluted to the same extent as the observational data, where additional random effects and conflicting inter-individual preferences could interfere with individual preferences. In this respect, the observational data and survey data are most usefully considered together because they complement each other’s limitations.
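A back-of-the-envelope check of the observational figures is easy to reproduce; the snippet below uses the rounded 57% value, so it will not exactly match the paper's reported h of .13, and it is only an illustration of the calculations, not the authors' analysis.

```python
from math import asin, sqrt
from scipy.stats import binomtest

n_pairs = 430
men_on_right = round(0.57 * n_pairs)  # ~245 pairs with the man on the right

# Exact binomial test against the chance value of 50%
print(binomtest(men_on_right, n_pairs, 0.5).pvalue)

# Cohen's h for the difference between two proportions
def cohens_h(p1, p2):
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

print(cohens_h(men_on_right / n_pairs, 0.5))  # roughly 0.14 with these rounded inputs
```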

There are other limitations with the present research. First, it is unclear whether the side preferences in the survey reflect actual preferences, or the side that a person typically occupies, reflecting a memory rather than a preference, or a mixture of these influences. Clarifying this will be relevant to understanding the cause of the side bias. Second, the observational study was conducted in a particular location in the UK. While we have no reason to believe that the findings do not generalize to other locations where male–female pairs are able to walk freely, this requires testing. However, it can be noted that the survey data were from participants from throughout the UK, and the side preferences complemented those obtained in the observational study. This provides confidence in the generalizability of the observational data. Finally, it would have been desirable for all survey images to be matched for the presence of stick figures, so that all conditions were matched along that dimension.

In sum, our observations show that in male–female pairs more men walk to the right of a female than to the left from the pair’s perspective. Unlike women, men report significant side preferences when walking or sitting with a female partner, and this preference is dependent on their handedness. The side that men prefer, when with a partner, facilitates the use of their dominant hand and this might be because men want to be in the best position to fight effectively. There are other plausible explanations of the side preference exhibited by men, including an effect of sex and handedness on the strength and direction of emotional lateralization, or a stronger desire in males to be able use their dominant hand. Further research is desirable to clarify the cause of the side preference in human dyads.