Friday, June 25, 2021

Fictions with imaginary worlds should be more appealing to individuals higher in Openness to experience and to younger people, & should be more successful in more economically developed societies

Why imaginary worlds? Exploratory preferences explain the cultural success of fictions with imaginary worlds in modern societies. Edgar Dubourg. Human Behavior & Evolution Society HBES 2021, Jun-Jul 2021. Poster:

Abstract: Imaginary worlds are one of the hallmarks of modern culture. They are present in many of the most successful fictions, be it in novels (e.g., Harry Potter), films (e.g., Star Wars), video games (e.g., The Legend of Zelda), graphic novels (e.g., One Piece) and TV series (e.g., Game of Thrones). This phenomenon is global (e.g., the emergence of the xuanhuan and xianxia genres in China), and massive (e.g., the worldwide success of The Lord of the Rings). Why so much attention devoted to nonexistent worlds? We propose that imaginary worlds in fictions co-opt exploratory preferences. Imaginary worlds are fictional superstimuli that tap into humans’ evolved interest in unfamiliar and potentially rewarding environments. This hypothesis can explain the cultural success of specific artefacts, such as maps in fictions, and the cultural distribution of such fictions across time, space, and individuals. Notably, this hypothesis makes predictions that rely on previous research in psychological and behavioral sciences: 1) fictions with imaginary worlds should be more appealing for individuals higher in Openness to experience (because this Big Five personality trait is associated with exploratory preferences), 2) such fictions should be more attractive for younger people (because young people reap more reward from exploratory behaviors, thanks to parental investments, and are thus adaptively more motivated to explore) and 3) such fictions should be more successful in more economically developed societies (because affluent and safe ecologies lower the costs of exploration, and phenotypic plasticity thus promotes exploratory preferences). We successively tested these predictions with two large open-collaborative datasets, namely IMDb (N=85,855 films) and Wikidata (N=96,711 literary works), and with the Movie Personality Dataset, which aggregates averaged personality traits and demographic data from the Facebook myPersonality Database (N=3.5 million).
We provide evidence that the appeal for imaginary worlds relies on our exploratory psychology.

Modern racial categorization may be a byproduct of a system designed for ancestral alliance detection; phenotype-based classifications are suppressed when valid cues of allegiance are made

A Sufficiency Test of the Alliance Hypothesis of Race. Daniel Conroy-Beam. Human Behavior & Evolution Society HBES 2021, Jun-Jul 2021.

Abstract: Racial categorization is a widespread phenomenon at the root of many of the most pressing problems in modern human life. These facts are peculiar from an evolutionary perspective given that racial categories as we understand them today are not biologically real and are evolutionarily novel inventions. The alliance hypothesis of race attempts to reconcile these facts by proposing that modern racial categorization is a byproduct of a system designed for ancestral alliance detection. Support for this hypothesis comes from studies demonstrating that redirecting coalitional psychology can suppress racial categorization. However, the capacity of coalitional psychology to generate racial categories from scratch is less clear. Here we use a series of agent-based models to provide a sufficiency test of the alliance hypothesis. We generate populations of agents that vary on arbitrary phenotypic dimensions and engage in cooperative interactions with one another. We show that the introduction of a coalitional psychology that attempts to detect patterns of allegiance based on available cues can hallucinate and then reify correlations between phenotype and allegiance, leading to the emergence of social groups that vary systematically by phenotype. This occurs even when phenotype is in reality distributed continuously and has no true connection to behavior. Furthermore, consistent with psychological evidence, such phenotypic classification is suppressed when valid cues of allegiance are made available. These models provide evidence that a coalitional psychology alone can be sufficient to create beliefs in phenotype-based social categories even when no such categories truly exist.
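The core dynamic the abstract describes can be caricatured in a few lines of code: agents carry a continuous phenotype with no real link to behavior, yet a learning rule that hunts for phenotype–allegiance patterns first "hallucinates" a correlation from noisy evidence and then, by acting on it, makes the correlation real. The sketch below is a hypothetical toy illustration, not the authors' model; the belief-update rule, parameters, and pairing scheme are all assumptions made for clarity.

```python
import random

random.seed(42)
N, ROUNDS, LR = 60, 300, 0.1

# Continuous phenotype, uniform in [0, 1]: no true groups, no link to behavior.
phenotype = [random.random() for _ in range(N)]
# Each agent's belief that phenotypic similarity signals allegiance (starts at 0).
belief = [0.0] * N

def similarity(i, j):
    return 1.0 - abs(phenotype[i] - phenotype[j])

def cooperates(actor, sim):
    # Baseline cooperation is a coin flip; belief biases it toward similar partners.
    p = 0.5 + belief[actor] * (sim - 0.5)
    return random.random() < p

stats = {"similar": [0, 0], "dissimilar": [0, 0]}  # [cooperations, interactions]
for r in range(ROUNDS):
    agents = list(range(N))
    random.shuffle(agents)
    for i, j in zip(agents[::2], agents[1::2]):
        sim = similarity(i, j)
        ci, cj = cooperates(i, sim), cooperates(j, sim)
        # Each agent asks: did the outcome fit "similar phenotype = ally"?
        # Treating 50/50 evidence as partial support lets belief drift up from
        # noise; once anyone acts on it, the evidence genuinely confirms it.
        for me, other_coop in ((i, cj), (j, ci)):
            if other_coop == (sim > 0.5):
                belief[me] += LR * (1 - belief[me])
            else:
                belief[me] -= LR * belief[me]
        if r >= ROUNDS - 100:  # record late-stage behavior only
            key = "similar" if sim > 0.5 else "dissimilar"
            stats[key][0] += ci + cj
            stats[key][1] += 2

mean_belief = sum(belief) / N
rate = {k: c / n for k, (c, n) in stats.items()}
```

Under these assumptions, `mean_belief` climbs well above zero and late-stage cooperation is markedly higher among phenotypically similar pairs, even though phenotype never influenced behavior directly: the belief is self-fulfilling, which is the reification the abstract describes.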

It is more permissible to harm a few animals to save a greater number of animals than to harm a few humans to save a greater number of humans, even when animals are described as having greater suffering capacity than some humans

Caviola, L., Kahane, G., Everett, J. A. C., Teperman, E., Savulescu, J., & Faber, N. S. (2021). Utilitarianism for animals, Kantianism for people? Harming animals and humans for the greater good. Journal of Experimental Psychology: General, 150(5), 1008–1039. Jun 2021.

Abstract: Most people hold that it is wrong to sacrifice some humans to save a greater number of humans. Do people also think that it is wrong to sacrifice some animals to save a greater number of animals, or do they answer such questions about harm to animals by engaging in a utilitarian cost-benefit calculation? Across 10 studies (N = 4,662), using hypothetical and real-life sacrificial moral dilemmas, we found that participants considered it more permissible to harm a few animals to save a greater number of animals than to harm a few humans to save a greater number of humans. This was explained by a reduced general aversion to harm animals compared with humans, which was partly driven by participants perceiving animals to suffer less and to have lower cognitive capacity than humans. However, the effect persisted even in cases where animals were described as having greater suffering capacity and greater cognitive capacity than some humans, and even when participants felt more socially connected to animals than to humans. The reduced aversion to harming animals was thus also partly due to speciesism—the tendency to ascribe lower moral value to animals due to their species-membership alone. In sum, our studies show that deontological constraints against instrumental harm are not absolute but get weaker the less people morally value the respective entity. These constraints are strongest for humans, followed by dogs, chimpanzees, pigs, and finally inanimate objects.

The caution observed in medical decisions does not replicate in financial decisions with large amounts; in fact, risk-taking was accentuated for large amounts in the gain domain

Batteux, E., Ferguson, E., & Tunney, R. J. (2021). Do we become more cautious for others when large amounts of money are at stake? Experimental Psychology, 68(1), 32–40. Jun 2021.

Abstract: A considerable proportion of financial decisions are made by agents acting on behalf of other people. Although people are more cautious for others when making medical decisions, this does not seem to be the case for economic decisions. However, studies with large amounts of money are particularly absent from the literature, which precludes a clear comparison to studies in the medical domain. To address this gap, we investigated the effect of outcome magnitude in two experiments where participants made choices between safe and risky options. Decision-makers were not more cautious for others over large amounts. In fact, risk-taking was accentuated for large amounts in the gain domain. We did not find self-other differences in the loss domain for either outcome magnitude. This suggests that the caution observed in medical decisions does not replicate in financial decisions with large amounts, or at least not in the same way. These results echo the concerns that have been raised about excessive risk-taking by financial agents.

Toddlers from an indigenous people were considerably less likely to recognize themselves in the mirror, possibly due to lack of being imitated by their mothers

Cebio─člu, S., & Broesch, T. (2021). Explaining cross-cultural variation in mirror self-recognition: New insights into the ontogeny of objective self-awareness. Developmental Psychology, 57(5), 625–638. May 2021.

Abstract: Mirror self-recognition (MSR) is considered to be the benchmark of objective self-awareness—the ability to think about oneself. Cross-cultural research showed that there are systematic differences in toddlers’ MSR abilities between 18 and 24 months. Understanding whether these differences result from systematic variation in early social experiences will help us understand the processes through which objective self-awareness develops. In this study, we examined 57 18- to 22-month-old toddlers (31 girls) and their mothers from two distinct sociocultural contexts: urban Canada (58% of the subsample were Canadian-born native English-speakers) and rural Vanuatu, a small-scale island society located in the South Pacific. We had two main goals: (a) to identify the social-interactional correlates of MSR ability in this cross-cultural sample, and (b) to examine whether differences in passing rates could be attributed to confounding factors. Consistent with previous cross-cultural research, ni-Vanuatu toddlers passed the MSR test at significantly lower rates (7%) compared to their Canadian counterparts (68%). Among a suite of social interactive variables, only mothers’ imitation of their toddlers’ behavior during a free play session predicted MSR in the entire sample and maternal imitation partially mediated the effects of culture on MSR. In addition, low passing rates among ni-Vanuatu toddlers could not be attributed to reasons unrelated to self-development (i.e., motivation to show mark-directed behavior, understanding mirror-correspondence, representational thinking). This suggests that differences in MSR passing rates reflect true differences in self-recognition, and that parental imitation may have an important role in shaping the construction of visual self-knowledge in toddlers.

People With Larger Social Networks Show Poorer Voice Recognition

People With Larger Social Networks Show Poorer Voice Recognition. Shiri Lev-Ari. Quarterly Journal of Experimental Psychology, June 24, 2021.

Abstract: The way we process language is influenced by our experience. We are more likely to attend to features that proved to be useful in the past. Importantly, the size of individuals’ social network can influence their experience, and consequently, how they process language. In the case of voice recognition, having a larger social network might provide more variable input and thus enhance the ability to recognize new voices. On the other hand, learning to recognize voices is more demanding and less beneficial for people with a larger social network, as they have more speakers to learn yet spend less time with each. This paper tests whether social network size influences voice recognition, and if so, in which direction. Native Dutch speakers listed their social network and performed a voice recognition task. Results showed that people with larger social networks were poorer at learning to recognize voices. Experiment 2 replicated the results with a British sample and English stimuli. Experiment 3 showed that the effect does not generalize to voice recognition in an unfamiliar language, suggesting that social network size influences attention to the linguistic rather than non-linguistic markers that differentiate speakers. The studies thus show that our social network size influences our inclination to learn speaker-specific patterns in our environment, and consequently the development of skills that rely on such learned patterns, such as voice recognition.

Keywords: voice recognition, talker identification, social network size, social networks

Beyond Aesthetic Judgment: Beauty Increases Moral Standing Through Perceptions of Purity

Beyond Aesthetic Judgment: Beauty Increases Moral Standing Through Perceptions of Purity. Christoph Klebl, Yin Luo, Brock Bastian. Personality and Social Psychology Bulletin, June 24, 2021.

Abstract: Researchers have tended to focus on mind perception as integral to judgments of moral standing, yet a smaller body of evidence suggests that beauty may also be an important factor (for some people and animals). Across six studies (N = 1,662), we investigated whether beauty increases moral standing attributions to a wide range of targets, including non-sentient entities, and explored the psychological mechanism through which beauty assigns moral standing to targets. We found that people attribute greater moral standing to beautiful (vs. ugly) animals (Study 1 and Study 5a, preregistered) and humans (Study 2). This effect also extended to non-sentient targets, that is, people perceive beautiful (vs. ugly) landscapes (Study 3) and buildings (Study 4 and Study 5b, preregistered) as possessing greater moral standing. Across all studies, perceptions of purity mediated the effect of beauty on moral standing, suggesting that beauty increases the moral standing individuals place on targets through evoking moral intuitions of purity.

Keywords: beauty, aesthetic judgment, moral standing, purity

Check also Beauty of the Beast: Beauty as an important dimension in the moral standing of animals. Christoph Klebl et al. Journal of Environmental Psychology, May 7 2021, 101624.

Work with infants suggests that the adaptive problems humans faced with respect to plants have left their mark on the human mind

From 2019... How Plants Shape the Mind. Annie E.Wertz. Trends in Cognitive Sciences, Volume 23, Issue 7, July 2019, Pages 528-531.

Full PDF: How Plants Shape the Mind

Abstract: Plants are easy to overlook in modern environments, but were a fundamental part of human life over evolutionary time. Recent work with infants suggests that the adaptive problems humans faced with respect to plants have left their mark on the human mind.

Keywords: cognitive evolution, plants, social learning, avoidance, infancy

Full text, references, photos, charts, in the PDF above

Plants Are A(n Adaptive) Problem

In many societies, plants are no longer a conspicuous part of human life. Plants are a part of the scenery outside and available for purchase, already packaged and processed, in grocery stores and garden centers. This limited contact with plants may seem perfectly normal, but across the entirety of human history it is quite unusual. Taking as a starting point the emergence of the genus Homo, humans spent 99% of our evolutionary history as hunter–gatherers. In a hunter–gatherer world, there were no such shops where the necessities of life could be easily acquired. Instead, our ancestors had to make a living by effectively utilizing the natural environment. Plants were an essential part of this process.

The archeological record and studies of modern hunter–gatherer and hunter–horticulturalist populations show that humans relied on plants in a variety of ways [1].

Plants are an important component of human diets, particularly the roots, fruits, and nuts of plants. Plant materials are used to construct a diverse array of artifacts and shelters. Plant chemicals are used to facilitate hunting and fishing, as well as in rituals and medicines. However, despite all of these benefits, plants can inflict serious costs. Plants have evolved an impressive set of defenses to protect against damage from herbivores [2]. All plants produce toxic chemical defenses, some of which can be harmful or even fatal to humans when ingested. Some plants also have mechanical defenses, such as thorns or stinging hairs, that can cause serious skin injury and in some cases systemic effects.

The problem is: how do humans figure out which plants are food (or otherwise useful) and which ones are fatal? This turns out to be a very difficult task. There are myriad plant species and herbivores that feed on them. The result of these complex coevolutionary relationships is that, from a human perspective, there are no morphological features common to all edible or toxic plants, even at the scale of the environments humans typically encounter without modern means of travel. Therefore, using general rules such as ‘Avoid plants with white flowers’ or ‘Purple fruits are edible’ simply would not work. In the former case, one would miss out on pears, and in the latter one would end up eating deadly nightshade. Importantly, the presence of difficult-to-detect and potentially fatal toxins makes learning about plants through trial-and-error sampling very costly. The best outcome for this process involves large amounts of wasted time and repeated exposure to noxious plant defenses. The worst-case scenario is death. These kinds of circumstances select for the evolution of social learning mechanisms [3].

The recurrent adaptive problems our species encountered over evolutionary time have shaped the human mind [4] and it is well known that plant defenses have structured the physiology and behavior of many animal species [2], including humans [5].  Accordingly, along with my colleagues, I have recently proposed a solution for the learning problems plants pose. We argue that the human mind contains a collection of behavioral avoidance strategies and social learning rules geared toward safely acquiring information about plants [6,7]. I will refer to this collection of cognitive systems as Plant Learning and Avoidance of Natural Toxins, or PLANT.

Evidence for PLANT

We have begun testing PLANT with studies of human infants. One line of work examines whether infants possess behavioral strategies that would mitigate plant dangers, similar to plant food rejections in older children [8]. Unlike the animate dangers that infants readily attend to (e.g., snakes, spiders [9]), dangerous plant toxins are difficult to detect but relatively easy to avoid. Plants are quite literally rooted to the spot and consequently can inflict harm only on individuals that come into contact with them. Therefore, we propose that PLANT includes behavioral avoidance strategies that protect infants by minimizing their physical contact with plants. To test this proposal, we present infants with plants and different kinds of control objects and measure their reaching behavior (Figure 1). Our results show that, as predicted, infants are reluctant to touch plants compared with other object types [7,10,11] and touch plants less frequently after making contact with them [10]. Infants are similarly avoidant of benign-looking plants and plants covered in sharp-looking thorns [10], suggesting that they initially treat all plants as potentially dangerous – a sensible strategy given that delicate-looking plants can be deadly poisonous.

Of course, not all plants can be avoided.  Plant foods and materials must be foraged, which necessarily means coming into contact with plants. In some modern hunter–gatherer societies, plant foraging can begin as early as 2–3 years of age [1]. Therefore, in a second line of work, we are investigating whether infants are vigilant for social information about plants and use it to guide their behavior. These studies allow us to test the proposal that PLANT includes social learning rules.

Thus far, we have found that infants look more often to adults when they first encounter plants, in the time before touching them [11], suggesting that behavioral avoidance strategies operate in concert with social learning processes. This structure would enable infants to observe signals from adults before making contact with potentially dangerous plants. Infants appear to be particularly attuned to social signals that allow them to learn which plants are edible [6,12]. In our studies, we show 6- and 18-month-olds an adult eating pieces of fruit from a plant and a manmade object (Figure 2A).

Despite seeing the same social information demonstrated with both object types, infants identify the plant, over the artifact, as a food source [6]. Once infants have learned that fruits from a particular plant are edible, 18-month-olds generalize this information only to other plants that share the same leaf shape and fruit color [12]. This combination of social learning and restrictive generalization rules would prevent infants from inadvertently ingesting toxic plants.

Seeing the Forest for the Trees

Our empirical findings to date are consistent with the proposed PLANT systems.  Infants appear to deploy a collection of behavioral avoidance strategies and social learning rules for plants. Consequently, PLANT minimizes infants’ exposure to harmful plant defenses and allows them to safely acquire information about the specific plants they encounter from more knowledgeable individuals. In short, this work supports the claim that plants have shaped the human mind.

PLANT is a novel research area that provides fertile ground for future exploration (Figure 2). Of particular interest are cross-cultural studies that can shed light on the development of PLANT in different environments and comparative studies that can clarify the evolution of PLANT. Further, the integral role that plants played in human life and human evolution means that PLANT is likely to be enmeshed in a web of cognitive systems that support broader capacities like food learning, threat mitigation, categorization, and cultural transmission, among others. This interconnectedness makes PLANT an excellent starting point for future inquiry in these areas. At the same time, it is highly unlikely that infants, or adults for that matter, will treat plants as a special category in all circumstances. Research on cognitive systems like PLANT can provide new ways of exploring fundamental aspects of human cognition and understanding the evolution of learning.

Results robustly show that humans across the Globe (25 countries, 6 continents) respond with stronger preferences for dominant leaders when they find themselves in contexts of intergroup conflict

Intergroup Conflict and Preferences for Dominant Leaders: Testing Universal Features of Human Followership Psychology across 25 Countries. Lasse Laustsen, Xiaotian Sheng, Mark van Vugt. Human Behavior & Evolution Society HBES 2021, Jun-Jul 2021.

Abstract: Research shows that followers exhibit heightened preferences for dominant leaders in situations of intergroup conflict and coalitional competition (e.g. Little et al., 2007; Spisak et al., 2012; Laustsen & Petersen, 2017). Accordingly, humans are theorized to possess an evolved psychology of adaptive followership that flexibly regulates preferences for leader dominance in accordance with levels of intergroup conflict (Laustsen & Petersen, 2015). However, existing research is based exclusively on studies conducted in the US or Western Europe. Consequently, the central claim that the adaptive followership psychology constitutes a human universal remains untested. This project tests whether followers across the globe, spanning 25 countries on six continents and including Colombia, Kenya, Pakistan, Hungary and China, hold stronger preferences for dominant leaders during intergroup conflict. Building on existing experimental protocols, we assigned subjects randomly to either an intergroup conflict condition or a no-conflict condition and asked them to choose their favored leader from dominant- and non-dominant-looking alternatives. Results robustly show that humans across the globe respond with stronger preferences for dominant leaders when they find themselves in contexts of intergroup conflict. Hence, the project provides unique and unprecedented support for the notion of a universal and context-sensitive human followership psychology.