Sunday, May 9, 2021

People who are involved in politics tend "to be attracted by its intellectual challenge," and tend "not to be easily shamed, nor to be particularly humble"

Bromme, Laurits, and Tobias Rothmund. 2021. “Mapping Political Trust and Involvement in the Personality Space – A Systematic Review and Analysis.” PsyArXiv. May 2. doi:10.31234/osf.io/hrk8f

Abstract: Individual differences in political trust and involvement in politics have been linked to Big Five personality dispositions. However, inconsistent correlational patterns have been reported. As a systematic review is still missing, the present paper provides an overview of the current state of the empirical literature. A systematic review of 43 publications (N = 215,323 participants) confirmed substantial inconsistency in the correlational patterns and corroborated a suspicion that the frequent use of low-bandwidth personality short scales might be responsible, among other reasons. In a second step, we conducted two empirical studies (N1 = 988 and N2 = 795), estimating latent correlations between the Big Five and political trust and involvement at different hierarchical levels. We found that personality relations were consistent across different subdimensions of trust (e.g., trust in politicians, institutional trust) and involvement (e.g., political interest, political self-efficacy, participation propensity) and are therefore best estimated at aggregated levels (i.e., general political trust and involvement). Meanwhile, correlational patterns differed substantially between Big Five facets, confirming that previous inconsistencies can be partly attributed to a misbalanced representation of facets in Big Five short scales and indicating that associations should be estimated at lower levels of the personality hierarchy.


Rolf Degen summarizing... Is everybody's sin nobody's sin? The prevalence of behaviors has only very small effects on their evaluation as morally good or bad

Do behavioral base rates impact associated moral judgments? Carl P. Jago. Journal of Experimental Social Psychology, Volume 95, July 2021, 104145. https://doi.org/10.1016/j.jesp.2021.104145

Abstract: In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

Keywords: Morality; Moral judgment; Blame; Character; Harm; Norm; Base rate; Normality; Descriptive norm; Injunctive norm; Social influence; Corrupt; Conformity; Influence; Moral norm


Check also: Making moral principles suit yourself. Matthew L. Stanley, Paul Henne, Laura Niemi, Walter Sinnott-Armstrong & Felipe De Brigard. Psychonomic Bulletin & Review, May 4 2021. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations

Up to the magical number seven (the maximum number of items in short-term memory): Physical limitations? An evolutionarily optimal number?

Up to the magical number seven: An evolutionary perspective on the capacity of short term memory. Majid Manoochehri. Heliyon, Volume 7, Issue 5, May 2021, e06955. https://doi.org/10.1016/j.heliyon.2021.e06955

Abstract: Working memory and its components are among the most determinant factors in human cognition. However, in spite of their critical importance, many aspects of their evolution remain underinvestigated. The present study is devoted to reviewing the literature of memory studies from an evolutionary, comparative perspective, focusing particularly on short term memory capacity. The findings suggest the limited capacity to be the common attribute of different species of birds and mammals. Moreover, the results imply an increasing trend of capacity from our non-human ancestors to modern humans. The present evidence shows that non-human mammals and birds, regardless of their limitations, are capable of performing memory strategies, although there seem to be some differences between their ability and that of humans in terms of flexibility and efficiency. These findings have several implications relevant to the psychology of memory and cognition, and are likely to explain differences between higher cognitive abilities of humans and non-humans. The adaptive benefits of the limited capacity and the reasons for the growing trend found in the present study are broadly discussed.

Keywords: Working memory; Short term memory; Evolution of memory; Evolution of cognitive system

7. Hypotheses concerning the capacity

Why does memory span have a limited capacity, and why is there an increasing trend of capacity toward humans? In the first place, I will discuss the potential reasons for the limited capacity. In order to provide a more explicit discussion, the relevant studies are divided into two groups: those that based their discussion on a capacity of about seven items or a temporary, passive storage (i.e., STM) and those that based their discussion on a capacity of about three to four items or the focus of attention (i.e., WM).

7.1. Hypotheses of the limited capacity

7.1.1. STM hypotheses of the limited capacity

To begin with, some previous studies have suggested that “short-term memory limitations do not have a rational explanation” (Anderson, 1990, pp. 91/92) or that larger capacities are biologically expensive or impossible. For instance, it has been postulated that a greater STM size may have required additional tissue, which increases body mass and energetic expenditure, and is therefore impossible given the biological characteristics of humans (e.g., Dukas, 1999). Other researchers rejected both of these assumptions (Todd et al., 2005). Moreover, the second assumption (i.e., that larger capacities are biologically expensive or impossible) does not seem reasonable considering the diversity of extraordinary physiological and behavioral characteristics of different animal species. Also, if any of these suggestions were correct, we should perhaps be able to find various capacities of STM in different animals, which the present study does not indicate.

One of the studies concerning the capacity of STM has been conducted by MacGregor (1987). Using a mathematical model, he highlighted the importance of efficient retrieval for STM. According to him, the limited capacity of STM could be the consequence of an efficiency of design. He argued that chunking facilitates retrieval when there are seven or five items in an unorganized memory. In a memory system evolved for efficiency, there is an upper effective limit to STM and a capacity beyond this limit would not be required.

In another study, Saaty and Ozdemir (2003) argued that in making preference judgments on pairs of elements in a group, the number of elements in the group should be no more than seven. The mind is sufficiently sensitive to improve large inconsistencies but not small ones and the most inconsistent judgment is easily determined. When the number of elements is seven or less, the inconsistency measurement is relatively large with respect to the number of elements involved. As the number of elements being compared is increased, the measure of inconsistency decreases slowly. Therefore, in order to serve both consistency and redundancy, it is best to keep the number of elements seven or less. When the number of elements increases past seven, the resulting increase in inconsistency is too small for the mind to single out the element that causes the greatest inconsistency to scrutinize and correct its relation to the other elements.
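Saaty's argument rests on a consistency measure for pairwise-comparison matrices. As a minimal sketch (not from the paper), the consistency index CI = (λmax − n)/(n − 1) can be computed for a hypothetical 3 × 3 comparison matrix; the matrix values below are illustrative assumptions, chosen to be perfectly consistent so that CI comes out near zero:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for n = 3 elements, built from
# illustrative weights (4, 2, 1), so entry (i, j) = w_i / w_j and the
# judgments are perfectly consistent.
A = np.array([
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
])
n = A.shape[0]

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1), where
# lambda_max is the principal eigenvalue; for a consistent matrix it equals n.
lambda_max = max(np.linalg.eigvals(A).real)
CI = (lambda_max - n) / (n - 1)
```

For inconsistent judgments, λmax exceeds n and CI grows; Saaty's point above is that with more than about seven elements the per-element contribution to this measure becomes too small for the mind to pinpoint and correct.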

In a series of studies, Kareev has proposed that capacity limitation maximizes the chances for the early detection of strong and useful relations (Kareev, 1995, 2000; Kareev et al., 1997; for a controversial discussion of this hypothesis see Anderson et al., 2005; Juslin and Olsson, 2005; Kareev, 2005). From his standpoint, an STM capacity of size seven, which characterizes human adults, is of particular value in detecting imperfect correlations between features in the environment. The limited capacity may serve as an amplifier, strengthening signals which may otherwise be too weak to be noticed. He argued that, because correlations underlie all learning, their early detection is of great importance for the functioning and well-being of organisms. Therefore, the cognitive system might have evolved so as to increase the chances of early detection of strong correlations. In addition to this theoretical contribution, Kareev and colleagues found in an experimental study that people with smaller STMs are more likely to perceive a correlation than people with larger STMs (Kareev et al., 1997).
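Kareev's amplifier idea can be sketched with a toy simulation (not from his studies): small samples of a modestly correlated pair of variables cross a "detection" threshold far more often than large samples do. The true correlation of 0.4, the threshold of 0.6, and the sample sizes are illustrative assumptions:

```python
import random

random.seed(1)

def sample_r(n, rho=0.4):
    """Pearson r of one sample of size n from a bivariate normal with correlation rho."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Fraction of samples whose observed correlation exceeds a notional
# detection threshold of 0.6, for a "seven-item window" vs. a large sample.
trials = 2000
strong_small = sum(sample_r(7) > 0.6 for _ in range(trials)) / trials
strong_large = sum(sample_r(50) > 0.6 for _ in range(trials)) / trials
```

The sampling variability of the small window inflates observed correlations, so a weak true relation is "noticed" (crosses the threshold) more often, which is the amplification described above.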

Some of the suggestions for the reason behind the limited capacity can be found in studies of decision-making cognition. Here, it has been shown that people tend to rely on relatively small samples from payoff distributions (Hertwig and Pleskac, 2010). The size of these samples is often considered related to the capacity of STM (Hahn, 2014; Hertwig et al., 2004; Hertwig and Pleskac, 2010). In this context, a capacity-limited STM has been proposed as a possible cause (Hahn, 2014; Hertwig et al., 2004; Hertwig and Pleskac, 2010; Todd et al., 2005) or a requirement (Plonsky et al., 2015) for relying on small samples. More relevant to the present discussion, Todd et al. (2005) suggested that the benefits of using small samples, or the costs of using too much information, resulted in selective pressures that have produced particular patterns of forgetting in LTM and limits of capacity in STM (see also Hahn, 2014). So, what are these costs and benefits? Limited information use can lead simple heuristics to make more robust generalizations in new environments (Todd et al., 2005). Small samples amplify the difference between the expected earnings associated with the payoff distributions, thus making the options more distinct and choice easier (Hertwig and Pleskac, 2010). Relying on small samples has also been suggested to save time and energy (Plonsky et al., 2015; Todd et al., 2005). Even if we assume that there is no cost (energy or time) for gathering information, by considering too much information we are likely to add noise to our decision process and consequently make worse decisions (Martignon and Hoffrage, 2002; Todd et al., 2005). Among these, the benefit that is perhaps associated with the strongest selective forces is saving time. There are many occasions on which timely decisions play a vital role in the life of animals, but perhaps the most important is the hunting situation. Encounters between prey and predators were an integral part of the daily life of our ancestors through deep evolutionary time. It is also clear that the penalties for any kind of inefficiency in such encounters are immediate and fatal, which results in intense selection for particular cognitive abilities and predation avoidance mechanisms (see Mathis and Unger, 2012; Rosier and Langkilde, 2011; Whitford et al., 2019). For instance, any prey that is attacked by several predators and cannot quickly decide which one to avoid first, or which way and which method to choose for escaping or perhaps defending itself, will be eliminated at once. A similar discussion can be developed for predators (see Lemasson et al., 2009).

Another line of studies has stressed the importance of the limited capacity for foraging activities (e.g., Bélisle and Cresswell, 1997; Real, 1991, 1992; Thuijsman et al., 1995). According to this view, the limited capacity may result in an overall optimization of food search behaviors. Similarly, Murray et al. (2017) have contended that the memory systems of anthropoids evolved primarily to reduce foraging errors. Foraging activities, however, do not appear to be the underlying reason for the capacity-limited STM. This is because, if foraging were the fundamental reason, then there would be remarkable sex differences in memory span, similar to those observed, for instance, in spatial abilities (Ecuyer-Dab and Robert, 2007; Voyer, Postma, Brake and Imperato-McGinley, 2007). According to the division of labor in ancestral hunter-gatherer societies, men were predominantly hunters and women were gatherers (Ecuyer-Dab and Robert, 2007; Marlowe, 2007), and it is likely that each of these activities demands a different memory span: a hunter has to focus on prey and ignore distracting information, while a successful gatherer can, or should, simultaneously consider many stationary targets (e.g., seeds, fruits, etc.). Contrary to this, many studies of sex differences in memory span show no significant difference (Grégoire and Van der Linden, 1997; Monaco et al., 2013; Orsini et al., 1986; Peña-Casanova et al., 2009). Foraging activities, if they were the underlying reason, could also result in remarkable differences among different species. The present study, however, does not indicate such differences. Therefore, although the limited capacity may have provided benefits for foraging activities, it seems reasonable to propose that foraging, after all, is not the main and direct reason for the limited memory space.

Among the hypotheses reviewed here, Kareev's suggestion (i.e., early detection of useful relations) is among the ones that have received relatively more attention. His assumption also seems reasonable in a comparative context and appears consistent with the findings of the present review. But of more importance is the fact that a memory system with the ability to detect useful relations early is likely to produce higher performance in associative learning and to save time in decision making. In the case of learning, Kareev himself noted: “Because correlations underlie all learning, their early detection and, subsequently, accurate assessment are of great importance for the functioning and well-being of organisms” (Kareev, 2000, p. 398). Learning, certainly, is one of the first and main challenges of any cognitive system. Besides, there are broad similarities in basic forms of learning in different species (Dugatkin, 2013). It is also certain that through deep evolutionary time there has been intense selection for individuals with higher performance in learning. In this regard, Dugatkin (2013) stated: “The ability to learn should be under strong selection pressure, such that individuals that learn appropriate cues that are useful in their particular environment should be strongly favored by natural selection” (p. 141). In summary, these considerations motivate the idea that associative learning and saving time in decision making are most likely the underlying reasons for the emergence and maintenance of the limited capacity.

7.1.2. WM hypotheses of the limited capacity

There are, on the other hand, some other studies of the limited capacity that based their analyses on a capacity of about three to four chunks or the focus of attention (i.e., WM). Some of them will be briefly reviewed here. Sweller (2003), for instance, proposed that no more than two or three elements can be handled in WM, because any more elements would result in more potential combinations than could be tested realistically. According to him, as the number of elements in WM increases, the number of permutations rapidly becomes very large (e.g., 5! = 120). With random choice, the greater the number of alternatives from which to choose while problem solving, the less likely it is that an appropriate choice will be made.
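Sweller's arithmetic is easy to make concrete: the number of orderings of the items held in WM grows factorially with the element count, so the search space explodes well before seven items. A tiny illustration (the range shown is an arbitrary choice):

```python
import math

# Number of possible orderings (permutations) of k elements held in
# working memory: k! grows explosively, e.g. 5! = 120, 7! = 5040.
orderings = {k: math.factorial(k) for k in range(2, 8)}
for k, count in orderings.items():
    print(k, count)
```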

Many other possibilities have been discussed by Cowan (2001, 2005, 2010). For instance, based on the notion that it is biologically impossible for the brain to have a larger capacity, he suggested that the representation of a larger number of items could fail because together they would take too long to be activated in turn (Cowan, 2010). Another of Cowan's arguments is that the WM capacity limit is the necessary price of avoiding too much interference (Cowan, 2005). According to him, activation of the memory system would go out of control if WM capacity were not limited to about four items at once. A relatively small central WM may allow all concurrently active concepts to become associated with one another without causing confusion or distraction (Cowan, 2010). Oberauer and Kliegl (2006) similarly stated that:

The capacity of working memory is limited by mutual interference between the items held available simultaneously. Interference arises from interactions between features of item representations, which lead to partially degraded memory traces. The degradation of representations in turn leads to slower processing and to retrieval errors. In addition, other items in working memory compete with the target item for recall, and that competition becomes larger as more items are held in working memory and as they are more similar to each other. (p. 624).

7.2. The increasing trend of capacity

Archaeological evidence of an enhancement in the WM system has been presented by Coolidge and Wynn (2005) (see also Coolidge et al., 2013; Wynn and Coolidge, 2006). The core idea of their hypothesis is a genetic mutation that affected neural networks approximately 60,000 to 130,000 years ago and increased the capacity of general WM and phonological storage. In the case of phonological storage, which is of more interest in the present review, they stipulated that “A relatively simple mutation that increased the length of phonological storage would ultimately affect general working-memory capacity and language” (Coolidge and Wynn, 2005, p. 14). They proposed that the enhancement of WM capacities was the final piece in the evolution of human executive reasoning ability, language, and culture. From their point of view, the larger capacity is a necessary precondition for symbolic thought, whose growth was driven by selective pressures. They noted that an increase in the WM capacities of pre-modern H. sapiens would have allowed greater articulatory rehearsal, consequently allowing for automatic long-term storage and the beginnings of introspection, self-reflection, and consciousness. In line with Coolidge and colleagues’ hypothesis, Aboitiz et al. (2010) proposed that a development of the phonological loop occurred during the course of human evolution. They maintained that this development produced a significant increase in STM capacity and subsequently resulted in the evolution of language.

Many researchers, at least in the field of archeology, tend to agree with the idea of enhanced WM (e.g., Aboitiz et al., 2010; Haidle, 2010; Lombard and Wadley, 2016; Nowell, 2010; Putt, 2016; for a review of criticisms, see Welshon, 2010), though there seems to be disagreement on its timing. Almost all, however, suggest a time in the Pleistocene around or after the appearance of the genus Homo (Aboitiz et al., 2010; Coolidge and Wynn, 2005; Haidle, 2010; Putt, 2016). Some also suggest a gradual development (Haidle, 2010).

Once we accept the idea of the enhanced WM, important questions arise as to the cause and the process of this phenomenon. The enhancement of WM has been argued to be a prerequisite for the evolution of some complex cognitive abilities of humans, such as language (Aboitiz et al., 2010; Coolidge and Wynn, 2005) and tool use (Haidle, 2010; Lombard and Wadley, 2016). For instance, Aboitiz et al. (2010) pointed out the existence of selective benefits for individuals with larger phonological capacities, who, in their view, were linguistically more apt. From their standpoint:

The development of the phonological loop produced a significant increase in short-term memory capacity for voluntary vocalizations, which facilitated learning of complex utterances that allowed the establishment of stronger social bonds and facilitated the communication of increasingly complex messages, eventually entailing external meaning and generating a syntactically ordered language. (p. 55).

In the case of tool use, Haidle (2010) argued that a basic trait of all object behaviors is the increased distance between problems and solutions; more complex object behaviors involve longer distances. According to her, during the process of tool use the immediate desire (e.g., getting the kernel of a nut) must be set aside and replaced by one or several intermediate objectives, such as finding or producing an appropriate tool. Thus, thinking must depart from the immediate problem and shift to abstract conceptualizations of potential solutions, which results in sequences of physical actions with objects appropriate to achieve a solution in the near future (see also Lombard and Wadley, 2016). Given her discussion, it is clear how individuals or populations with an enhanced WM system, which provides the possibility of maintaining and manipulating more information, could take advantage of their superiority to surpass others in tool-use performance and, consequently, to win competitions.

Arguably, if we assume the enhancement of WM to be a gradual process that started long before our common ancestor with chimpanzees (as found by the present study), neither tool use nor language can be considered the primary reason for it. But complex problem solving, because of its commonness, can be nominated as the primary cause, later supported by tool use and language (see also Putt, 2016). This assumption is aided by evidence indicating the critical role of an elaborate WM system in problem-solving tasks (Logie et al., 1994; Zheng et al., 2011).

After all, there are many obscure aspects regarding the evolution of WM. Needless to say, deep disagreements in the related fields and issues, such as the process and timeline of language evolution (Progovac, 2019), make the puzzle more difficult to solve. It goes beyond the limits of the present article to pursue this further, but perhaps a possible way to settle this problem is looking for advantages and disadvantages of high and low STM/WM capacities (Engle, 2010). Such findings from experimental psychology in conjunction with archaeological and comparative evidence can shed light on the evolution of the WM system.

8. Conclusion

The first and obvious implication of the present findings is that the limited capacity is a common attribute of different species of birds and mammals. The present results also indicate an increasing trend of capacity from our non-human ancestors to modern humans. Among the potential explanations of the limited capacity, associative learning and saving time in decision making seem most likely, particularly because of the strong selective forces associated with them and their vital importance for different species. On the other hand, the enhancement of the WM system appears to be a prerequisite for the evolution of some higher cognitive abilities of modern humans, such as language, tool use, and complex problem solving.

A question yet to be answered is whether the current size of STM/WM in humans is the end of the line. The current size has been considered by some to be the end point (e.g., MacGregor, 1987). In contrast, Cowan declared that it is possible to imagine that larger capacities would have been preferable or doable, but still did not happen; our current capacity may therefore reflect our place in the middle of an ongoing evolutionary process, not an end point. If this is the case, one might expect the present capacity to expand in the future, assuming that it offers a sufficient survival advantage (Cowan, 2005). However, considering the resistance of memory span scores to the Flynn effect, the current review suggests it is difficult to expect substantial changes over short periods of time.

All in all, many of us live not in wild nature but in artificial and unnatural environments. Are these unnatural environments, along with their overwhelming and escalating complex problem-solving tasks, imperceptibly pushing us toward a WM system with an even larger capacity and, if so, at what price? Here, the most important point to stress is that evolution does not drive towards perfection. However, thanks to our elaborate information processing system and consciousness, we now have the ability to purposefully plan for the future of our own evolution.

The evidence reviewed in this paper shows that many species of birds and mammals are capable of performing memory strategies, although there seem to be some differences between humans and non-humans in terms of flexibility and efficiency. An enhancement in the capacity of the WM system might be the reason, or part of the reason, for the emergence of superior memory strategies in humans.

Striking similarities in the primacy and recency effects, in conjunction with other evidence such as similarities in the size of STM and in performing memory strategies, suggest a similar memory structure in different species of birds and mammals. This is in accordance with Wright's inference that there is a qualitative similarity in memory processing across mammals and birds. The present findings have several implications relevant to the psychology of memory and cognition. For instance, the differences found in the ability to perform memory strategies and in the size of STM may provide an explanation for some of the differences between the cognitive abilities of humans and non-humans.

People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations

Making moral principles suit yourself. Matthew L. Stanley, Paul Henne, Laura Niemi, Walter Sinnott-Armstrong & Felipe De Brigard. Psychonomic Bulletin & Review, May 4 2021. https://link.springer.com/article/10.3758/s13423-021-01935-8

Abstract: Normative ethical theories and religious traditions offer general moral principles for people to follow. These moral principles are typically meant to be fixed and rigid, offering reliable guides for moral judgment and decision-making. In two preregistered studies, we found consistent evidence that agreement with general moral principles shifted depending upon events recently accessed in memory. After recalling their own personal violations of moral principles, participants agreed less strongly with those very principles—relative to participants who recalled events in which other people violated the principles. This shift in agreement was explained, in part, by people’s willingness to excuse their own moral transgressions, but not the transgressions of others. These results have important implications for understanding the roles of memory and personal identity in moral judgment. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations.


Negative attitudes toward cosmetic surgery prevail, partly because it seems to give people an unfair advantage, similar to performance enhancing drugs in sports

The cosmetic surgery paradox: Toward a contemporary understanding of cosmetic surgery popularisation and attitudes. Sarah Bonell, Fiona Kate Barlow, Scott Griffiths. Body Image, Volume 38, September 2021, Pages 230-240. https://doi.org/10.1016/j.bodyim.2021.04.010

Highlights

• Negative attitudes toward cosmetic surgery prevail despite its surging popularity.

• Idealistic female beauty standards encourage the popularisation of cosmetic surgery.

• These same beauty standards inform negative attitudes toward cosmetic surgery.

• These co-occurring phenomena are described as the cosmetic surgery paradox.

Abstract: Modern women feel compelled to meet near-impossible standards of beauty. For many, this pursuit ultimately culminates in cosmetic surgery – a radical form of beautification that is rapidly becoming popular worldwide. Paradoxically, while prevalent, artificial beauty remains widely unaccepted in contemporary society. This narrative review synthesizes feminist dialogue, recent research, and real-world case studies to argue that female beauty standards account for both the growing popularity of cosmetic surgery and its lack of mainstream acceptance. First, we implicate unrealistic beauty standards and the medicalization of appearance in popularizing cosmetic surgery. Second, we analyze how negative attitudes toward cosmetic surgery are also motivated by unrealistic beauty standards. Finally, we generate a synthesized model of the processes outlined in this review and provide testable predictions for future studies based on this model. Our review is the first to integrate theoretical and empirical evidence into a cohesive narrative that explains the cosmetic surgery paradox; that is, how cosmetic surgery remains secretive, stigmatized, and moralized despite its surging popularity.

Keywords: Cosmetic surgery; Plastic surgery; Physical appearance; Feminism; Body image; Feminine beauty


Changes in age-at-marriage laws rarely achieve the desired outcome; for this change to be effective, better laws must be accompanied by better enforcement & monitoring to delay marriage & protect the rights of women & girls

Trends in child marriage and new evidence on the selective impact of changes in age-at-marriage laws on early marriage. Ewa Batyra, Luca Maria Pesando. SSM - Population Health, May 4 2021, 100811. https://doi.org/10.1016/j.ssmph.2021.100811

Highlights

• Changes in age-at-marriage laws have only a limited impact on delaying marriage age.

• In Benin, Mauritania, Kazakhstan and Bhutan, laws did not curb early marriage.

• In Tajikistan and Nepal, results depend on model specification.

• Better enforcement must accompany the implementation of the age-at-marriage laws.

Abstract: This study adopts a cohort perspective to explore trends in child marriage – defined as the proportion of girls who entered first union before the age of 18 – and the effectiveness of policy changes aimed at curbing child marriage by increasing the minimum legal age of marriage. We adopt a cross-national perspective comparing six low- and middle-income countries (LMICs) that introduced changes in the minimum age at marriage over the past two decades. These countries belong to three broad regions: Sub-Saharan Africa (Benin, Mauritania), Central Asia (Tajikistan, Kazakhstan), and South Asia (Nepal, Bhutan). We combine individual-level data from Demographic and Health Surveys and Multiple Indicator Cluster Surveys with longitudinal information on policy changes from the PROSPERED (Policy-Relevant Observational Studies for Population Health Equity and Responsible Development) project. We adopt data visualization techniques and a regression discontinuity design to obtain causal estimates of the effect of changes in age-at-marriage laws on early marriage. Our results suggest that changes in minimum-age-at-marriage laws were not effective in curbing early marriage in Benin, Mauritania, Kazakhstan, and Bhutan, where child marriage did not show evidence of decline across cohorts. Significant reductions in early marriage following law implementations were observed in Tajikistan and Nepal, yet their effectiveness depended on the specification and threshold adopted, thus making them hardly effective as policies to shape girls' later life trajectories. Our findings align with existing evidence from other countries suggesting that changes in age-at-marriage laws rarely achieve the desired outcome. In order for changes in laws to be effective, better laws must be accompanied by better enforcement and monitoring to delay marriage and protect the rights of women and girls. 
Alternative policies need to be devised to ensure that girls’ later-life outcomes, including their participation in higher education and society, are ensured, encouraged, and protected.

Keywords: Early marriage; Laws; Policy changes; Age at marriage; Women's status; LMICs

Conclusions and discussion

This study has adopted a cohort perspective to explore the extent to which changes in age-at-marriage laws were effective in curbing early marriage across six LMICs which introduced a policy change regarding the minimum age at marriage. We tackled our research question using survey data from countries located in different regions of the world combined with novel longitudinal information on policy changes. We adopted simple causal inference techniques to obtain estimates of the causal effect of changes in age-at-marriage laws on early marriage, in line with the existing literature on the topic relying primarily on difference-in-differences strategies or regression discontinuity designs (Bellés-Obrero & Lombardi, 2020; Collin & Talbot, 2018; Dahl, 2010; McGavock, 2021). In so doing, we reached two different sets of findings. The first set of results suggests that only in Tajikistan and Nepal have there been noticeable declines in child marriage across cohorts. Despite these declines observed in more recent cohorts, the majority of the countries under investigation – particularly Nepal, Bhutan, Mauritania, and Benin – are far from eradicating early marriage, as even among the youngest cohorts around one third of women marry before the age of 18.
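The regression-discontinuity logic the authors describe can be illustrated with a minimal sketch: compare child-marriage rates for birth cohorts just below and just above the first cohort bound by a new minimum-age law, fitting local linear trends on each side. All numbers, the 1980 cutoff, and the `rd_estimate` helper below are illustrative assumptions, not the authors' data or code.

```python
# Hypothetical sketch of a cohort-based regression discontinuity design.
# Simulated data only: cutoff year, trend, and jump are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

cohort = np.arange(1960, 2000)    # birth-cohort years
exposed = cohort >= 1980          # law binds from the 1980 cohort onward
# Simulated share married before 18: slow secular decline + a drop at the cutoff
rate = (0.45 - 0.002 * (cohort - 1960)
        - 0.05 * exposed
        + rng.normal(0, 0.01, cohort.size))

def rd_estimate(x, y, cutoff, window):
    """Local linear RD: fit a separate line on each side of the cutoff
    within +/- window cohorts and return the jump in fitted values at the cutoff."""
    left = (x >= cutoff - window) & (x < cutoff)
    right = (x >= cutoff) & (x < cutoff + window)
    coef_left = np.polyfit(x[left], y[left], 1)
    coef_right = np.polyfit(x[right], y[right], 1)
    return np.polyval(coef_right, cutoff) - np.polyval(coef_left, cutoff)

# Re-estimating under different windows echoes the paper's point that
# results depend on the specification and threshold (window) adopted.
for w in (5, 10, 15):
    print(f"window ±{w}: estimated jump = {rd_estimate(cohort, rate, 1980, w):+.3f}")
```

A negative jump at the cutoff is the signature of an effective law; if the sign or size of the jump moves around as the window changes, the estimate is fragile, which is the sensitivity the authors report for Tajikistan and Nepal.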

Echoing much of the literature (Collin & Talbot, 2018; Kalamar et al., 2016; Koski et al., 2017), the second set of findings suggests that even in these previously unexplored countries, changes in minimum-age-at-marriage laws were not effective in curbing early marriage in Benin, Mauritania, Kazakhstan, and Bhutan, where child marriage showed little evidence of decline across cohorts to start with. Significant reductions in early marriage following law implementations were instead observed in Tajikistan and Nepal – showing, respectively, a decreasing trend over time after the 1975–79 and 1980–84 cohorts – yet their effectiveness depends on the specification and threshold (window) adopted, thus making them hardly effective as policies to shape girls' later life trajectories. As the literature tends to focus on Sub-Saharan Africa, India, Bangladesh, and Mexico, we believe our results for Nepal and Tajikistan are informative and stress an additional layer of novelty even just from a purely "geographical" standpoint.

Our findings relating to the mixed and context-specific effectiveness of policy changes aimed at curbing early marriage align with claims made by Arthur et al. (2018) and Collin and Talbot (2018) that, despite the increasing prevalence of legal provisions aimed at increasing the legal age at marriage, the level of enforcement varies widely, and legal exceptions based on parental consent and customary or religious laws remain in place – alongside high rates of illegal or informal marriages (Bellés-Obrero & Lombardi, 2020; Collin & Talbot, 2018) – thus preventing the full effectiveness of the legal provisions. Unfortunately, we did not have data on exact implementation procedures, monitoring, or enforcement, which could also cast light on why reductions in child marriage in Nepal and Tajikistan were more pronounced than in other countries. Nonetheless, several sources suggest that these are serious issues that underlie the very limited effectiveness of changes in legal provisions in countries covered by our analyses overall. In Bhutan, although child marriage is punishable by fine, enforcement is considered to be weak and is exacerbated by the lack of a reliable marriage registration system (ICRW, 2012). In Nepal, child marriage can carry a prison sentence and/or a fine, but punishment is weakened by the wide discretionary sentencing powers given to courts (CRR, 2016). Overall, poor implementation and limited awareness of child marriage legislation are considered to be among the reasons for the high prevalence of child marriage in South Asia, including Bhutan and Nepal (Khanna et al., 2013). In Tajikistan, child marriages carry a prison sentence of up to six months but, in practice, most cases are only punished by a fine. Moreover, religious ceremonies are sometimes performed without registration of marriages with civil authorities, thus bypassing legislation (UNFPA, 2014).
Similar issues around the implementation of laws as in Tajikistan have been identified in Kazakhstan (UNFPA, 2012). We found little information about the working of legislation in Sub-Saharan African countries covered by our study, most notably Benin. Nonetheless, available sources suggest that, in Mauritania, the lack of clarity of the Personal Status Code that sets the minimum age at marriage renders it ineffective in having any meaningful impact on marriage practices.3 Albeit scarce, the available evidence consistently points to weak enforcement of the new legislation across countries included in our study.

Our results combined suggest that there is a long way to go before child marriage is eradicated, and changes in legal provisions are playing only a minimal role, if any. This pushes scholars and policymakers to think about alternative policies that might be more effective in curbing early marriage or delaying age at first union. For instance, Kalamar et al. (2016) found that providing incentives for girls' education was one of the few interventions that have been shown to effectively prevent child marriage. Many of the studies included in that review were conducted in Sub-Saharan Africa. Randomized evaluations in Kenya, Malawi, and Zimbabwe also found that reducing the cost of schooling by providing school fees, uniforms, or cash transfers conditional on attendance reduced the incidence of child marriage (Baird et al., 2010, 2012; Hallfors et al., 2015). A pilot program in Ethiopia that provided girls with mentorship and economic incentives to remain in school – and facilitated community discussion about the harms associated with child marriage – reduced the proportion of girls between ages 10 and 14 who were married by approximately 8 percentage points after a two-year follow-up period, although the program had no measurable effect on the marriage of girls between ages 15 and 19 (Erulkar & Muthengi, 2009).

Preventing early, coerced, and forced marriage has been on the global agenda for several decades, first in 2000 with the Millennium Development Goals (MDGs) highlighting the reduction of child marriage as a global priority, and then in 2015 as part of the global agenda with the establishment of the Sustainable Development Goals (SDGs). We have here argued that SDG Goal 5 – focusing on gender equality to empower all women and girls – is linked with progress on the elimination of early marriage, yet it is also inextricably linked with SDG Goal 4, related to better access and more gender-equal participation in education, Goal 3 (good health and wellbeing), Goal 8 (decent work and economic growth) and, ultimately, Goal 1 (no poverty). Significant progress remains a long way off, yet a clear implication ensuing from this study is that better enforcement and monitoring of legal provisions concerning the minimum age at marriage – if effective – would have the potential to raise women's status by simultaneously enabling the achievement of multiple goals. We thus posit two clear implications of this research for policy and speculate on a third point. First, the laudable goal of legislation curbing or banning early marriage must be accompanied by capacity-building and resourcing for stronger legal enforcement. Second, monitoring the efficacy of deterrence, including by exploiting cheap and plentiful micro-level data as we do here, is essential to test and improve the link from laws to ages at marriage, the outcome targeted by policy and the one that matters most for women's and girls' later-life outcomes. Third, we speculate on the possibility that national marriage policies might have a more meaningful impact if part of a comprehensive, multi-pronged, and context-sensitive approach targeting poverty and entrenched social norms in all their forms – such as some of the complementary interventions (including educational interventions) mentioned above.

What Drives the Dehumanization of Consensual Non-Monogamous Partners? Being perceived as less moral and less committed to their relationship

What Drives the Dehumanization of Consensual Non-Monogamous Partners? David L. Rodrigues, Diniz Lopes & Aleksandra Huic. Archives of Sexual Behavior, May 4 2021. DOI: 10.1007/s10508-020-01895-5

Abstract: We built upon a recent study by Rodrigues, Fasoli, Huic, and Lopes (2018) by investigating potential mechanisms driving the dehumanization of consensual non-monogamous (CNM) partners. Using a between-subjects experimental design, we asked 202 Portuguese individuals (158 women; Mage = 29.17, SD = 9.97) to read the description of two partners in a monogamous, open, or polyamorous relationship, and to make a series of judgments about both partners. Results showed the expected dehumanization effect, such that both groups of CNM partners (open and polyamorous) were attributed more primary (vs. secondary) emotions, whereas the reverse was true for monogamous partners. Moreover, results showed that the dehumanization effect was driven by the perception of CNM partners as less moral and less committed to their relationship. However, these findings were observed only for individuals with unfavorable (vs. favorable) attitudes toward CNM relationships. Overall, this study replicated the original findings and extended our understanding of why people in CNM relationships are stigmatized.


Atheists were perceived as more desirable in short-term mating than long-term mating (a preference that did not translate to being preferred in that context over theists); this effect is specific to physically attractive targets

Brown, Mitch. 2021. “Preliminary Evidence for Aversion to Atheists in Long-term Mating Domains.” PsyArXiv. May 2. doi:10.31234/osf.io/gu7xy

Abstract: The centrality of religiosity in selecting long-term mates suggests the espousal of atheism could be undesirable for that context. Given recent findings suggesting the presence of several positive stereotypes about atheists, a largely distrusted group, it could be possible individuals prefer atheists in mating domains not emphasizing long-term commitment (i.e., short-term mating). I conducted two studies tasking participants with evaluating long-term and short-term mating desirability of theists and atheists while assessing perceptions of their personalities. Study 1 indicated atheists were perceived as more desirable in short-term mating than long-term mating, though this preference did not translate to being preferred in that context over theists. Study 2 demonstrated this effect is specific to physically attractive targets. I further found atheists were perceived as more prone to infidelity, especially if they were attractive. I frame results from an evolutionary perspective while discussing the pervasiveness of anti-atheist prejudice.



Saturday, May 8, 2021

Work from Home, IT Professionals: Total hours worked increased by roughly 30%, including a rise of 18% in working after normal business hours; average output did not significantly change, so productivity fell by about 20%

Work from Home & Productivity: Evidence from Personnel & Analytics Data on IT Professionals. Michael Gibbs, Friederike Mengel, Christoph Siemroth. Becker Friedman Institute, Working Paper 2021-56, May 6 2021. https://bfi.uchicago.edu/working-paper/2021-56

Using personnel and analytics data from over 10,000 skilled professionals at a large Asian IT services company, we compare productivity before and during the work from home [WFH] period of the Covid-19 pandemic. Total hours worked increased by roughly 30%, including a rise of 18% in working after normal business hours. Average output did not significantly change. Therefore, productivity fell by about 20%. Time spent on coordination activities and meetings increased, but uninterrupted work hours shrank considerably. Employees also spent less time networking, and received less coaching and 1:1 meetings with supervisors. These findings suggest that communication and coordination costs increased substantially during WFH, and constituted an important source of the decline in productivity. Employees with children living at home increased hours worked more than those without children at home, and suffered a bigger decline in productivity than those without children.
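The headline productivity figure follows directly from the other two numbers: productivity is output per hour, so if hours rise by roughly 30% while output stays flat, output per hour falls to 1/1.3 of its prior level. A one-line check of that arithmetic:

```python
# Back-of-the-envelope check of the paper's headline numbers (illustrative only):
# productivity = output / hours; hours up ~30%, output unchanged.
hours_increase = 0.30
productivity_change = 1 / (1 + hours_increase) - 1
print(f"{productivity_change:.0%}")  # prints -23%, i.e. "fell by about 20%"
```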


What is your earliest memory? It depends.

What is your earliest memory? It depends. Carole Peterson. Memory, May 6 2021. https://doi.org/10.1080/09658211.2021.1918174

Abstract: This article is a selective review of the literature on childhood amnesia, followed by new analyses of both published and unpublished data that have been collected in my laboratory over two decades. Analyses point to the fluidity of people's earliest memories; furthermore, methodological variation leads to individuals recalling memories from substantially earlier in their lives. How early one's "earliest" memory is depends on whether you have multiple interviews, how many early memories were requested within an interview, the type of interview, participation in prior tasks, etc. As well, people often provide chronologically earlier memories within the same interview in which they later identify a chronologically older memory as their "earliest". There may also be systematic mis-dating to older ages of very early memories. Overall, people may have a lot more memories from their preschool years than is widely believed, and be able to recall events from earlier in their lives than has been historically documented.

KEYWORDS: Childhood amnesia; infantile amnesia; early memories; first memories; autobiographical memory

Discussion

It is clear that very young children indeed form memories, and many of these can be verbally described (see Bauer et al., 2019, for an overview of types of relevant evidence). The question of “when do personal memories start” has been an often-asked question in the childhood amnesia literature, and answers to this question have influenced theory construction about early memory. However, recent research has shown that access to early memories is often shaped by a range of both cognitive and social factors that interact (see Wang & Gülgöz, 2019, for a number of articles that address this in a special issue on childhood memory, as well as the edited volume by Gülgöz & Sahin-Acar, 2020, on autobiographical memory development).

Theoretical implications

In the current article, I have reviewed relevant literature on childhood amnesia and then re-examined data collected from a range of research studies conducted in my laboratory over a number of years, as well as new data that have not been previously published. These analyses have several theoretical implications.

First, an answer to the question of when one’s earliest memory occurs is a moving target rather than being a single static memory. Thus, what many people provide when asked for their earliest memory is not a boundary or watershed beginning, before which there are no memories. Rather, there seems to be a pool of potential memories from which both adults and children sample. Table 1 demonstrates considerable movement in the identification of their earliest memory, even though the memory they had described in an earlier interview was not forgotten. Moreover, almost half the time they retrieved a new and yet-earlier “first” memory when interviewed 2 years later. Some prior reports have emphasised the important role of forgetting (Cleveland & Reese, 2008; Van Abbema & Bauer, 2005), but Table 1 suggests that although forgetting is occurring and cannot be theoretically “forgotten”, as Bauer (2015) reminds us, it is but a partial explanation for changes in what is identified as the “earliest memory”.

Secondly, what is provided as a so-called “earliest memory” is highly malleable. Prior research has shown that it can be experimentally manipulated (Kingo et al., 2013b; Peterson et al., 2009b; Wessel et al., 2019). However, as Table 2 shows, one does not need external prompts; simply recalling one memory seems to internally cue others from that early period of life, and many of these later-mentioned memories are chronologically much earlier, on average a full year and a half earlier in our data. This self-cueing is also demonstrated in Table 3 when one compares the date of individuals’ identified earliest memory and their chronologically earliest memory (i.e., comparing the top panel to the bottom panel). Thus, providing an early memory often results in self-cueing to additional and yet-earlier memories. This mechanism of self-cueing is likely also responsible for participants who had a prior Memory Fluency Task subsequently providing earlier memories in the Earliest Memory Task (compare the left and right panels in Table 3).

Thirdly, when recalling multiple memories from the same life period, people do not seem to situate them on a continuous timeline as the memories are recalled. Prior research has suggested that the memories themselves and dating of those memories are independent; Table 4 suggests that memory dates are also independent of each other. How else can one explain the phenomenon of people providing memories from specific dates and a few minutes later identifying different and later-dated memories as their very first one? A mental timeline of memories does not seem to be constructed during recollection.

Limitations

In all of the analyses presented above, participants were providing their own dating of their very early memories. Yet people are notoriously poor at memory dating, as a host of other research studies have shown. The telescoping errors described above are only one example of dating error, and few other studies focus on the accuracy of the dating for people’s very early memories. What is needed in childhood amnesia research are independently confirmed or documented external dates against which personally derived dates can be compared. These are not found in the research cited above on telescoping errors since parental dating was used there for comparison with child dates, and parents too are likely to make dating errors. Such research using verified dating is currently ongoing, both in my laboratory and elsewhere.

Secondly, there are statistical limitations to the analyses presented above. Most of the analyses are post-hoc rather than pre-planned, and as such, are tentative. They can be seen as patterns that require further targeted research, and suggest avenues for additional exploration.