Saturday, February 1, 2020

Policy interventions aimed at correcting self-servingly biased misperceptions are unlikely to be effective in the long run due to people’s ability to forget or suppress information that threatens their desired views

The Dynamics of Motivated Beliefs. Florian Zimmermann. American Economic Review. Feb 2020, Vol. 110, No. 2: Pages 337-363. https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.20180728

Abstract: A key question in the literature on motivated reasoning and self-deception is how motivated beliefs are sustained in the presence of feedback. In this paper, we explore dynamic motivated belief patterns after feedback. We establish that positive feedback has a persistent effect on beliefs. Negative feedback, instead, influences beliefs in the short run, but this effect fades over time. We investigate the mechanisms of this dynamic pattern, and provide evidence for an asymmetry in the recall of feedback. Finally, we establish that, in line with theoretical accounts, incentives for belief accuracy mitigate the role of motivated reasoning. (JEL C91, D83, D91)

---
We find that negative feedback is indeed recalled with significantly lower accuracy, compared to positive feedback, which suggests that the dynamic belief pattern we have identified is indeed driven by the selective recall of information. Next, we make use of additional outcome variables and a placebo condition to delve into how selective recall operates. In a nutshell, the following patterns emerge. Our results suggest that participants are able to suppress the recall of unwanted memories. Furthermore, participants appear to suppress the recall of not only negative feedback but also the IQ test more broadly. Our results lend direct support to key modeling assumptions in Bénabou and Tirole (2002, 2004). From a policy perspective, our findings suggest that policy interventions aimed at correcting self-servingly biased misperceptions via information or feedback are unlikely to be effective in the long run due to people’s ability to forget or suppress information that threatens their desired views.




Kissing frequency was a strong indicator of both specific sexual quality & global relationship connectivity, and a barometer of both the more immediate quality of sexual relationships & the overall relationship quality

A kiss is not just a kiss: kissing frequency, sexual quality, attachment, and sexual and relationship satisfaction. Dean M. Busby, Veronica Hanna-Walker & Chelom E. Leavitt. Sexual and Relationship Therapy, Jan 31 2020. https://doi.org/10.1080/14681994.2020.1717460

Abstract: Kissing can be thought of as a relationship maintenance behavior and/or as part of the sexual repertoire. Using data from 1,605 participants in committed relationships for at least two years, we analyzed how kissing frequency was associated with specific aspects of the two most recent sexual experiences, attachment, and global sexual and relationship satisfaction using a structural equation model. Kissing frequency had a significant, positive association with all specific aspects of recent sexual experiences (levels of arousal, experience of orgasm, and event-specific sexual satisfaction) and global sexual and relationship satisfaction. Kissing frequency had a significant, negative association with anxious and avoidant attachment. The associations with attachment and specific aspects of recent sexual experiences only partially mediated the relationship between kissing frequency and global relationship and sexual satisfaction. The results indicated that kissing frequency was a strong indicator of both specific sexual quality and global relationship connectivity and may be a promising variable to utilize as a barometer of both the more immediate quality of sexual relationships and the overall relationship quality.

Keywords: Kissing, attachment, sexual satisfaction, relationship satisfaction
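The abstract's claim that attachment and event-level sexual quality "only partially mediated" the kissing-satisfaction link can be illustrated with a bare-bones, regression-based mediation check. This is a hedged sketch on simulated data, not the authors' structural equation model; the variable names, reuse of the reported sample size, and effect sizes are illustrative assumptions.

```python
# Minimal mediation sketch (simulated data), loosely mirroring the partial-
# mediation pattern described in the abstract. Not the authors' SEM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1605  # sample size reported in the abstract
kissing = rng.normal(size=n)
sexual_quality = 0.4 * kissing + rng.normal(size=n)            # hypothetical mediator path
satisfaction = 0.3 * kissing + 0.5 * sexual_quality + rng.normal(size=n)

# Total effect of kissing frequency on satisfaction
total = sm.OLS(satisfaction, sm.add_constant(kissing)).fit()
# Direct effect, controlling for the hypothetical mediator
X = sm.add_constant(np.column_stack([kissing, sexual_quality]))
direct = sm.OLS(satisfaction, X).fit()

print("total effect: ", round(total.params[1], 3))
print("direct effect:", round(direct.params[1], 3))
# Partial mediation: the direct effect shrinks but stays clearly nonzero once
# the mediator is included, mirroring the pattern described in the abstract.
```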



Small-scale societies: Divinatory statements provide a version of the situation that most participants are motivated to agree with, providing efficient coordination at a minimal cost for almost all participants

Why Divination? Evolved Psychology and Strategic Interaction in the Production of Truth. Pascal Boyer. Current Anthropology, Jan 14, 2020. https://www.journals.uchicago.edu/doi/abs/10.1086/706879

Abstract: Divination is found in most human societies, but there is little systematic research to explain (1) why it is persuasive or (2) why divination is required for important collective decisions in many small-scale societies. Common features of human communication and cooperation may help address both questions. A highly recurrent feature of divination is “ostensive detachment,” a demonstration that the diviners are not the authors of the statements they utter. As a consequence, people spontaneously interpret divination as less likely than other statements to be influenced by anyone’s intentions or interests. This is enough to give divination an epistemic advantage compared with other sources of information, answering question 1. This advantage is all the more important in situations where a diagnosis will create differential costs and benefits, for example, determining who is responsible for someone’s misfortune in a small-scale community. Divinatory statements provide a version of the situation that most participants are motivated to agree with, as it provides a focal point for efficient coordination at a minimal cost for almost all participants, which would answer question 2.



Despite motivation to alleviate discomfort and to maintain a morally good self-concept, more severe moral transgressions are actually remembered more frequently, more vividly, and with more detail

The phenomenology of remembering our moral transgressions. Shenyang Huang, Matthew L. Stanley, Felipe De Brigard. Memory & Cognition, January 27 2020. https://link.springer.com/article/10.3758/s13421-019-01009-0

Abstract: People tend to believe that they truly are morally good, and yet they commit moral transgressions with surprising frequency in their everyday lives. To explain this phenomenon, some theorists have suggested that people remember their moral transgressions with fewer details, lower vivacity, and less clarity, relative to their morally good deeds and other kinds of past events. These phenomenological differences are thought to help alleviate psychological discomfort and to help people maintain a morally good self-concept. Given these motivations to alleviate discomfort and to maintain a morally good self-concept, we might expect our more egregious moral transgressions, relative to our more minor transgressions, to be remembered less frequently, with fewer details, with lower vivacity, and with a reduced sense of reliving. More severe moral transgressions might also be less central to constructions of personal identity. In contrast to these expectations, our results suggest that participants’ more severe moral transgressions are actually remembered more frequently, more vividly, and with more detail. More severe moral transgressions also tend to be more central to personal identity. We discuss the implications of these results for the motivation to maintain a morally good self-concept and for the functions of autobiographical memory.

Keywords: Autobiographical memory; Moral psychology; Identity; Self; Phenomenology


British football leagues: Male height is positively associated with social dominance; but the ‘Napoleon complex’/‘small man syndrome’ suggests that smaller males are more assertive & punitive to compensate

Referee height influences decision making in British football leagues. Dane McCarrick, Gayle Brewer, Minna Lyons, Thomas V. Pollet & Nick Neave. BMC Psychology volume 8, Article number: 4 (2020). January 17. https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-020-0370-4

Abstract
Background Male height is positively associated with social dominance, and more agonistic/competitive behaviours. However, the ‘Napoleon complex’ or ‘small man syndrome’ suggests that smaller males are more assertive and punitive to compensate for lack of height and social dominance. Here, we assess possible relationships between height and punitive behaviours in a real-world setting.

Methods Using a non-experimental correlational design, we analysed data on 61 male association football referees from four professional leagues in England, and explored relationships between their height and punitive behaviours in the form of yellow cards, red cards and penalties given during an entire season.

Results Overall there was no effect of referee height on fouls awarded. However, there was a main effect of height on yellow cards awarded, with shorter referees issuing more yellow cards. The same effect was found for red cards and penalties, though this was moderated by league. In the lower leagues, more red cards and penalties were awarded by relatively shorter referees, but in the higher leagues more red cards and penalties were awarded by relatively taller referees.

Conclusions These findings from real-life public dominance encounters show that height is associated with punitive behaviours, but is sensitive to context.

Rolf Degen summarizing... People are reluctant to adopt low-carbon behaviors as long as "the rich" and the “egoistic people” are not doing their part

Revisiting the Psychology of Denial Concerning Low-Carbon Behaviors: From Moral Disengagement to Generating Social Change. Susanne Stoll-Kleemann, Tim O’Riordan. Sustainability 2020, 12(3), 935, January 27 2020. https://doi.org/10.3390/su12030935

Abstract: This paper reassesses the scope for shifting high-carbon personal behaviors in the light of prevailing insufficient political and regulatory action. Our previous research has shown that citizens regard such behavioral shifts as extremely daunting and create a number of psychological denial mechanisms that draw attention to the inaction of others, including governments. Further theoretical insights and relevant new findings have been attained from a more recent survey of more than 1000 German residents. This reveals that direct denial of anthropogenic climate change is replaced by a denial of responsibility for individual climate action. Ways of moral disengagement play a more dominant role, such as the diffusion and displacement of responsibility, although a majority is aware of—and very much concerned about—the climate crisis. More attention needs to be given to the further reinterpretation of the role of moral disengagement in order to single out adequate strategies for different individuals and groups of people, such as making role models more visible to encourage social learning that could accelerate further necessary moral and behavioral transformations.

Keywords: climate change; behavior change; denial; emotions; low-carbon behavior; moral disengagement; collective action; responsibility; self-efficacy

All religious, ethnic, and racial groups appear to make use of pornography; its use is associated with greater health, knowledge, and standard of living, and lower homophobia

Pornography Use: What Do Cross-Cultural Patterns Tell Us? David L. Rowland, Dudbeth Uribe. In: Cultural Differences and the Practice of Sexual Medicine pp 317-334. January 28 2020. https://link.springer.com/chapter/10.1007/978-3-030-36222-5_18

Abstract: Access to online pornography has increased greatly over the past decade. In this chapter we first review purported effects of pornography use. We then present data compiled from one source of internet pornography use, namely Pornhub, and review findings from a cross-cultural perspective. Specifically, we investigate age and gender patterns across various regions of the world and relate pornography use to a number of sociocultural indices. Results indicate changing age and gender patterns with respect to pornography use, as well as relationships with indices of human development, gender inequality, trans/homophobia, and internet access. Given that internet pornography may increasingly serve as a means of sex education in many cultures, implementing meaningful and balanced sex education that promotes healthy sexual relationships is critically important.

Keywords: Pornography; Cross-cultural; Online; Gender; Age; Worldwide trends; Effects

Our expressed virtue judgments of specific traits may function, in part, as self-interested propaganda, by influencing the social value assigned by local others to the traits we happen to possess

Morality and the Modular Mind: Propagandistic Self-Interest and Perceptions of Virtue. Schwab, Leon T. California State University, Fullerton, ProQuest Dissertations Publishing, 2019. 27547780. https://search.proquest.com/openview/d3e07a339e0df51cfc8a674b86ab3f5c/1?pq-origsite=gscholar&cbl=2026366&diss=y

ABSTRACT: The underlying cognitive mechanisms that regulate how people formulate moralistic judgements of others’ behaviors and traits are poorly understood. This study tests the novel prediction, based on a hypothesized function of self-interested propaganda, that the extent to which a given trait is perceived as virtuous is positively influenced by one’s own standing on that trait. In a first study, data were gathered from 237 participants who completed an online survey. Participants rated the same list of traits for (i) their self-perceived standing on those traits, and (ii) their judgment of those traits as being virtuous. Some notable correlations between self-perceived trait possession and virtue ratings of those traits are Patriotism (r = 0.56, p < .01), Religiosity (r = 0.56, p < .01), Attractiveness (r = 0.5, p < .01), and Strength (r = 0.49, p < .01). A second study replicated these findings when controlling for participants’ estimates of (i) what others in their immediate social world would perceive as virtues, and (ii) what others in their distant social world would perceive as virtues. Although preliminary, these initial findings suggest that our expressed virtue judgments of specific traits may function, in part, as self-interested propaganda, by influencing the social value assigned by local others to the traits we happen to possess.
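For readers unfamiliar with how the reported statistics were obtained, the core analysis is a simple Pearson correlation between self-rated standing on a trait and the rated virtuousness of the same trait. The sketch below uses simulated data tuned to land near the reported r of about 0.56 for illustration only; it is not the study's dataset.

```python
# Illustrative correlation sketch with simulated ratings; only the analysis
# mirrors the study's design, the numbers are invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 237  # Study 1 sample size reported in the abstract
self_standing = rng.normal(size=n)                  # e.g., self-rated patriotism
virtue_rating = 0.56 * self_standing + rng.normal(scale=0.85, size=n)

r, p = pearsonr(self_standing, virtue_rating)
print(f"r = {r:.2f}, p = {p:.3g}")  # should land near the reported r ~ 0.56
```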



1985-2017: Media representations of climate change have become increasingly politicized, whereby political actors are increasingly featured and scientific actors less so

Politicization and Polarization in Climate Change News Content, 1985-2017. Sedona Chinn, P. Sol Hart, Stuart Soroka. Science Communication, January 29, 2020. https://doi.org/10.1177/1075547019900290

Abstract: Despite concerns about politicization and polarization in climate change news, previous work has not been able to offer evidence concerning long-term trends. Using computer-assisted content analyses of all climate change articles from major newspapers in the United States between 1985 and 2017, we find that media representations of climate change have become (a) increasingly politicized, whereby political actors are increasingly featured and scientific actors less so and (b) increasingly polarized, in that Democratic and Republican discourses are markedly different. These findings parallel trends in U.S. public opinion, pointing to these features of news coverage as polarizing influences on climate attitudes.

Keywords: climate change, computerized content analysis, news, politicization
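As a rough illustration of what a dictionary-based, computer-assisted content analysis of actor mentions might look like, here is a toy sketch; the article snippets and actor lexicons are invented, and the authors' actual coding scheme is certainly more elaborate.

```python
# Toy sketch: count mentions of "political" vs. "scientific" actors per year.
from collections import defaultdict

# Invented lexicons of political and scientific actor words
POLITICAL = {"senator", "congress", "president", "democrat", "republican"}
SCIENTIFIC = {"scientist", "researcher", "professor", "study", "ipcc"}

# Invented article snippets: (year, text)
articles = [
    (1990, "A researcher warns in a new study that warming is accelerating."),
    (2016, "The senator and the president clashed over the climate bill."),
]

counts = defaultdict(lambda: {"political": 0, "scientific": 0})
for year, text in articles:
    tokens = {word.strip(".,").lower() for word in text.split()}
    counts[year]["political"] += len(tokens & POLITICAL)
    counts[year]["scientific"] += len(tokens & SCIENTIFIC)

for year in sorted(counts):
    print(year, counts[year])  # per-year actor counts, the raw material for trends
```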


Check also Merkley, Eric, and Dominik Stecula. 2020. “Party Cues in the News: Democratic Elites, Republican Backlash and the Dynamics of Climate Skepticism.”  British Journal of Political Science. Preprint January 25. https://www.bipartisanalliance.com/2020/01/supporters-of-republican-party-have.html

And Political leanings are the strongest predictors of climate change beliefs, particularly among the more knowledgeable:
Climate Change: A Partisan and Polarized Issue in the United States. Risa Palm, Toby Bolsen. Climate Change and Sea Level Rise in South Florida pp 15-40, January 2 2020. https://www.bipartisanalliance.com/2020/01/political-leanings-are-strongest.html

The Smartphone as a Pacifying Technology: In moments of stress, engaging with one’s smartphone provides greater stress relief than one’s laptop or a similar smartphone belonging to someone else

The Smartphone as a Pacifying Technology. Shiri Melumad, Michel Tuan Pham. Journal of Consumer Research, ucaa005, January 27 2020. https://doi.org/10.1093/jcr/ucaa005

Abstract: In light of consumers’ growing dependence on their smartphones, this article investigates the nature of the relationship that consumers form with their smartphone and its underlying mechanisms. We propose that in addition to obvious functional benefits, consumers in fact derive emotional benefits from their smartphone, in particular, feelings of psychological comfort and, if needed, actual stress relief. In other words, in a sense, smartphones are not unlike “adult pacifiers.” This psychological comfort arises from a unique combination of properties that turn smartphones into a reassuring presence for their owners: the portability of the device, its personal nature, the subjective sense of privacy experienced while on the device, and the haptic gratification it affords. Results from one large-scale field study and three laboratory experiments support the proposed underlying mechanisms and document downstream consequences of the psychological comfort that smartphones provide. The findings show, for example, that (a) in moments of stress, consumers exhibit a greater tendency to seek out their smartphone (study 2); and (b) engaging with one’s smartphone provides greater stress relief than engaging in the same activity with a comparable device such as one’s laptop (study 3) or a similar smartphone belonging to someone else (study 4).

Keywords: product attachment, psychology of technology, mobile marketing, digital marketing




Exaggerated psychological stress reactivity linked to increase in risk factors for cardiovascular disease; blunted reactivity predicted future increased obesity, greater illness frequency, poorer cognitive ability

Psychological stress reactivity and future health and disease outcomes: A systematic review of prospective evidence. Anne I. Turner et al. Psychoneuroendocrinology, February 1 2020, 104599. https://doi.org/10.1016/j.psyneuen.2020.104599

Highlights
• SAM system and HPA axis reactivity predict future health and disease outcomes
• Exaggerated and blunted responses predict different health and disease outcomes
• Reactivity-related health and disease outcomes are both physical and mental
• Intermediate stress responses (“Goldilocks” responses) may be the most adaptive
• A “bidirectional multi-system reactivity hypothesis” is proposed

Abstract
Background Acute psychological stress activates the sympatho-adrenal medullary (SAM) system and hypothalamo-pituitary adrenal (HPA) axis. The relevance of this stress reactivity to long-term health and disease outcomes is of great importance. We examined prospective studies in apparently healthy adults to test the hypothesis that the magnitude of the response to acute psychological stress in healthy adults is related to future health and disease outcomes.

Methods We searched Medline Complete, PsycINFO, CINAHL Complete and Embase up to 15 Aug 2019. Included studies were peer-reviewed, English-language, prospective studies in apparently healthy adults. The exposure was acute psychological stress reactivity (SAM system or HPA axis) at baseline. The outcome was any health or disease outcome at follow-up after ≥ 1 year.

Results We identified 1,719 papers through database searching and 1 additional paper through other sources. Forty-seven papers met our criteria including 32,866 participants (range 30 – 4100) with 1-23 years of follow-up. Overall, one third (32%; 83/263) of all reported findings were significant and two thirds (68%; 180/263) were null. With regard to the significant findings, both exaggerated (i.e. high) and blunted (i.e. low) stress reactivity of both the SAM system and the HPA axis at baseline were related to health and disease outcomes at follow-up. Exaggerated stress reactivity at baseline predicted an increase in risk factors for cardiovascular disease and decreased telomere length at follow-up. In contrast, blunted stress reactivity predicted future increased adiposity and obesity, more depression, anxiety and PTSD symptoms, greater illness frequency, musculoskeletal pain and regulatory T-Cell percentage, poorer cognitive ability, poorer self-reported health and physical disability and lower bone mass.

Conclusion Exaggerated and blunted SAM system and HPA axis stress reactivity predicted distinct physical and mental health and disease outcomes over time. Results from prospective studies consistently indicate stress reactivity as a predictor for future health and disease outcomes. Dysregulation of stress reactivity may represent a mechanism by which psychological stress contributes to the development of future health and disease outcomes.

Keywords: acute stress; blood pressure; heart rate; epinephrine; norepinephrine; cortisol; sympatho-adrenal medullary system; hypothalamo-pituitary adrenal axis; health outcomes; disease outcomes

When honest people cheat, and cheaters are honest: Cognitive control processes override our moral default

When honest people cheat, and cheaters are honest: Cognitive control processes override our moral default. Sebastian P.H. Speer, Ale Smidts, Maarten A.S. Boksem. bioRxiv, Jan 24 2020. https://doi.org/10.1101/2020.01.23.907634

Abstract: Every day, we are faced with the conflict between the temptation to cheat for financial gains and maintaining a positive image of ourselves as being a ‘good person’. While it has been proposed that cognitive control is needed to mediate this conflict between reward and our moral self-image, the exact role of cognitive control in (dis)honesty remains elusive. Here, we identify this role, by investigating the neural mechanism underlying cheating. We developed a novel task which allows for inconspicuously measuring spontaneous cheating on a trial-by-trial basis in the MRI scanner. We found that activity in the Nucleus Accumbens promotes cheating, particularly for individuals who cheat a lot, while a network consisting of Posterior Cingulate Cortex, Temporoparietal Junction and Medial Prefrontal Cortex promotes honesty, particularly in individuals who are generally honest. Finally, activity in areas associated with Cognitive Control (Anterior Cingulate Cortex and Inferior Frontal Gyrus) helped dishonest participants to be honest, whereas it promoted cheating for honest participants. Thus, our results suggest that cognitive control is not needed to be honest or dishonest per se, but that it depends on an individual’s moral default.

Highly educated parents are more able to preserve their family's elite status in the next generation

The social and genetic inheritance of educational attainment: Genes, parental education, and educational expansion. Meng-Jung Lin. Social Science Research, Volume 86, February 2020, 102387. https://doi.org/10.1016/j.ssresearch.2019.102387

Abstract: Recently, several genome-wide association studies of educational attainment have found education-related genetic variants and enabled the integration of human inheritance into social research. This study incorporates the newest education polygenic score (Lee et al., 2018) into sociological research, and tests three gene-environment interaction hypotheses on status attainment. Using the Health and Retirement Study (N = 7599), I report three findings. First, a standard deviation increase in the education polygenic score is associated with a 58% increase in the likelihood of advancing to the next level of education, while a standard deviation increase in parental education results in a 53% increase. Second, supporting the Saunders hypothesis, the genetic effect becomes 11% smaller when parental education is one standard deviation higher, indicating that highly educated parents are more able to preserve their family's elite status in the next generation. Finally, the genetic effect is slightly greater for the younger cohort (1942–59) than the older cohort (1920–41). The findings strengthen the existing literature on the social influences in helping children achieve their innate talents.

Keywords: Educational attainment; Parental education; Cohort; Gene-environment interaction; Educational expansion in higher education
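The reported pattern (a positive polygenic-score effect on advancing to the next level of education, attenuated when parental education is higher) corresponds to a gene-environment interaction term in a regression model of educational transitions. Below is a hedged sketch using a plain logistic regression on simulated data; the coefficients are chosen only to echo the reported direction of effects, and the authors' actual model specification may differ.

```python
# Gene-environment interaction sketch on simulated data (not the HRS data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 7599  # HRS analytic sample size from the abstract
pgs = rng.normal(size=n)       # education polygenic score, z-scored
par_edu = rng.normal(size=n)   # parental education, z-scored
# Illustrative coefficients: positive main effects, negative interaction
logit = 0.46 * pgs + 0.43 * par_edu - 0.12 * pgs * par_edu
advance = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"advance": advance, "pgs": pgs, "par_edu": par_edu})
model = smf.logit("advance ~ pgs * par_edu", data=df).fit(disp=0)
print(np.exp(model.params))  # odds ratios; pgs near 1.58 ~ a 58% higher odds
```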



People do not realize that failures contain useful information; therefore, people undershare failures in and beyond organizational settings

Hidden failures. Lauren Eskreis-Winkler, Ayelet Fishbach. Organizational Behavior and Human Decision Processes, Volume 157, March 2020, Pages 57-67. https://doi.org/10.1016/j.obhdp.2019.11.007

Highlights
• People do not realize that failures contain useful information.
• Therefore, people undershare failures in and beyond organizational settings.
• Highlighting the information in failure makes people more likely to share it.

Abstract: Failure often contains useful information, yet across five studies involving 11 separate samples (N = 1238), people were reluctant to share this information with others. First, using a novel experimental paradigm, we found that participants consistently undershared failure—relative to success and a no-feedback experience—even though failure contained objectively more information than these comparison experiences. Second, this reluctance to share failure generalized to professional experiences. Teachers in the field were less likely to share information gleaned from failure than information gleaned from success, and employees were less likely to share lessons gleaned from failed versus successful attempts to concentrate at work. Why are people reluctant to share failure? Across experimental and professional failures, people did not realize that failure contained useful information. The current investigation illuminates an erroneous belief and the asymmetrical world of information it produces: one where failures are common in private, but hidden in public.

Keywords: Sharing; Failure; Information; Success; Knowledge transfer


Pseudo-profound statement attributed to the Dalai Lama seems even more profound: "We are non‐local beings that localize as a dot then inflate to become non‐local again. The universe is mirrored in us."

“Who said it?” How contextual information influences perceived profundity of meaningful quotes and pseudo‐profound bullshit. Vukašin Gligorić, Ana Vilotijević. Applied Cognitive Psychology, December 20 2019. https://doi.org/10.1002/acp.3626

Summary: Psychological research on pseudo‐profound bullshit—randomly assembled buzz words plugged into a syntactic structure—has only recently begun. Most such research has focused on dispositional traits, such as thinking styles or political orientation. However, none has investigated contextual factors. In two studies, we introduce a new paradigm by investigating the contextual effect on pseudo‐profound bullshit and meaningful quotes. In Study 1, all participants rated the profundity of statements in three contexts: (a) isolated, (b) as allegedly said by a famous author, or (c) within a vignette (short story). Study 2 serves as a conceptual replication in which participants rated statements in only one of three contexts. Overall, our results demonstrate that although contextual information such as author's name increases the perceived profundity of bullshit, it has an inconsistent effect on meaningful quotes. The present study helps to better understand the bullshit receptivity while offering a new line of research.

4 GENERAL DISCUSSION

In the present research, we tried to add to the small amount of literature on pseudo‐profound bullshit, while offering a new paradigm. Across two studies, we demonstrated that pseudo‐profound bullshit is susceptible to the labeling effect—bullshit being rated as more profound when presented as being uttered by a famous author. On the contrary, this contextual effect for meaningful quotes was inconsistent, as profundity ratings were increased only in the second study.
The labeling effect for pseudo‐profound bullshit is similar to the ratings of poems attributed to famous or bogus poets (Bar‐Hillel et al., 2012). Although we did not investigate any underlying mechanisms of the effect, it is plausible to assume a similar process to those where expectation led to genuinely different feelings (e.g. Bar‐Hillel et al., 2012; Lee et al., 2006). That is, after seeing a famous author's name next to the statement, participants might have been primed by the author's name and construed the meaning in the statement. However, the question of the power of different authorities remains; it may happen that one is seduced by an authority from an unfamiliar field (e.g., art/Dali), whereas this could not be the case for the familiar field (e.g., physics/Planck). Specifically, one of the directions for future research could be to examine whether certain authorities (i.e., based on occupation) have a larger or smaller impact on bullshit receptivity. Taken together, it would be beneficial to test whether this tendency is irrational or not (as in heuristics, for example).
Interestingly, increase in profundity was inconsistent for meaningful quotes as it emerged only in the second study. All meaningful quotes from Study 2 were taken as excerpts from particular authors' work, which makes them decontextualized. This might be the reason why there was a contextual effect on these quotes. As quotes usually depict the author's views represented by their own words on a certain topic (Conrad, 1999), this way of recruiting can constrain their application. Alternatively, short and widely applicable sayings (such as Latin phrases, e.g., “He conquers who conquers himself”) might be immune to the contextual effect due to their life‐oriented message and widespread use. This might be one of the avenues for future research.
Another possible path of label influence is through the contextualization of the statement. For example, when one reads a short story (or book excerpt), she might relate the bullshit to that story so that “non‐local beings that localize as a dot” actually relate to the protagonists of the story (e.g., signifying the old man's unimportance in the world). Even though our data do not support these conclusions, the vignette condition had higher absolute ratings than the isolated condition. It might be the case that our short stories in Study 1 did not have enough literary value to increase the profundity. Although the vignette condition (book excerpt) improved ratings in Study 2, it also contained the author's name, making it impossible to distinguish whether the effect occurred due to the author or the excerpt. However, this condition had higher absolute values than the author‐only condition, supporting our notions. These questions remain open for other researchers to answer.
Surprisingly, in Study 2, meaningful quotes and pseudo‐profound bullshit were rated as equally deep, which is in contrast to results from Study 1 and findings from bullshit research (e.g., Čavojova et al., 2018; Pennycook et al., 2015). One plausible reason is the selection of the deepest pseudo‐profound items from the original 30‐item scale. Higher mean profundity ratings for bullshit items in Study 2 (M = 3.2 compared with M = 2.9 in Study 1) support this notion. Therefore, although the same five pseudo‐profound items had similar ratings in the two studies (M = 3.17 and M = 3.20), the mean bullshit score was lower in the first study as it contained other bullshit items that had lower ratings. Second, our selection of meaningful quotes does not necessarily guarantee their profundity—mean profundity ratings for meaningful quotes were lower in Study 2 (M = 3.3 compared with M = 3.7 in Study 1). That is, some of the meaningful quotes in Study 2 might seem like contemporary motivational quotes (e.g., “They always say that time changes things, but you actually have to change them yourself.”) and therefore have lower ratings. Indeed, this quote had the lowest ratings along with Dostoevsky's quote (“To go wrong in one's own way is better than to go right in someone else's.”).
In conclusion, our results suggest that pseudo‐profound bullshit is susceptible to contextual effects—attributing a statement to a famous person alters its perception. Although this effect might be exploited only economically (as in the case of leading New Age figures), other kinds of bullshit (for example, political) might be more dangerous. Having demonstrated how easily people may evaluate pseudo‐profound statements as more profound just because they are presented with an author's name, we should be aware of the potential abuse of this type of effect.

Friday, January 31, 2020

Lying to appear honest: People may lie to appear honest in cases where the truth is highly favorable to them, such that telling the truth might make them appear dishonest to others

Choshen-Hillel, S., Shaw, A., & Caruso, E. M. (2020). Lying to appear honest. Journal of Experimental Psychology: General. Jan 2020. https://doi.org/10.1037/xge0000737

Abstract: People try to avoid appearing dishonest. Although efforts to avoid appearing dishonest can often reduce lying, we argue that, at times, the desire to appear honest can actually lead people to lie. We hypothesize that people may lie to appear honest in cases where the truth is highly favorable to them, such that telling the truth might make them appear dishonest to others. A series of studies provided robust evidence for our hypothesis. Lawyers, university students, and MTurk and Prolific participants said that they would have underreported extremely favorable outcomes in real-world scenarios (Studies 1a–1d). They did so to avoid appearing dishonest. Furthermore, in a novel behavioral paradigm involving a chance game with monetary prizes, participants who received in private a very large number of wins reported fewer wins than they received; they lied and incurred a monetary cost to avoid looking like liars (Studies 2a–2c). Finally, we show that people’s concern that others would think that they have overreported is valid (Studies 3a–3b). We discuss our findings in relation to the literatures on dishonesty and on reputation.


Sexual orientation explains < 1% of the variation in consumption-favoring behaviors; the common belief of a stylish & extremely wealthy gay consumer must be questioned; differences decrease with age

Sexual orientation and consumption: Why and when do homosexuals and heterosexuals consume differently? Martin Eisend, Erik Hermann. International Journal of Research in Marketing, Jan 31 2020. https://doi.org/10.1016/j.ijresmar.2020.01.005

Highlights
• Sexual orientation explains < 1% of the variation in consumption-favoring behaviors.
• The common belief of a stylish and extremely wealthy gay consumer must be questioned.
• Consumption differences between homosexuals and heterosexuals decrease with age.
• Consumption differences increase when comparing homosexuals and heterosexuals of the same gender.

Abstract: The increasing visibility of homosexuality in society, combined with the lesbian and gay community's considerable buying power, has triggered marketers and researchers' interest in understanding homosexual consumers' consumption patterns. Prior research on whether homosexual consumers behave differently from heterosexual consumers has yielded mixed results, and researchers and practitioners still do not know whether any substantial differences exist, what these differences look like, and how they can be explained. The findings from a meta-analysis reveal that sexual orientation explains on average < 1% of the variation in consumption behavior across 45 papers, indicating only slightly different consumption behaviors. Findings from a moderator analysis contradict conventional wisdom and lay theories, while partly supporting assumptions that are rooted in evolutionary and biological theories that show consumption differences decrease with age; they increase when comparing homosexuals and heterosexuals of the same gender. These findings, which question long-held beliefs about homosexual consumers, help marketers to successfully adjust their strategies.

Keywords: Sexual orientation; Consumption; Meta-analysis

The art of flirting: What are the traits that make it effective?

The art of flirting: What are the traits that make it effective? Menelaos Apostolou, Christoforos Christoforou. Personality and Individual Differences, Volume 158, 1 May 2020, 109866. https://doi.org/10.1016/j.paid.2020.109866

Highlights
•    Identified 47 traits that make flirting effective.
•    Classified the 47 traits into nine factors that make flirting effective.
•    Found that women rated a gentle approach as more effective, while men rated good looks as more effective.
•    Found that older participants rated factors such as a gentle approach as more effective on them.

Abstract: Flirting is an essential aspect of human interaction and key for the formation of intimate relationships. In the current research, we aimed to identify the traits that make it more effective. In particular, in Study 1 we used open-ended questionnaires in a sample of 487 Greek-speaking participants, and identified 47 traits that make flirting effective. In Study 2, we asked 808 Greek-speaking participants to rate how effective each trait would be on them. Using principal components analysis, we classified these traits into nine broader factors. Having a good non-verbal behavior, being intelligent and having a gentle approach were rated as the most important factors. Sex differences were found for most of the factors. For example, women rated a gentle approach as more effective on them, while men rated good looks as more effective. Last but not least, older participants rated factors, such as the “Gentle approach,” to be more effective on them.
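The dimension-reduction step described in the abstract (collapsing 47 rated traits into nine broader factors via principal components analysis) can be sketched as follows. The ratings are simulated, and scikit-learn's PCA stands in for whatever extraction and rotation choices the authors actually made.

```python
# PCA sketch on simulated trait ratings; structure and numbers are invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_participants, n_traits = 808, 47                 # figures from the abstract
latent = rng.normal(size=(n_participants, 9))      # nine underlying factors
loadings = rng.normal(scale=0.5, size=(9, n_traits))
ratings = latent @ loadings + rng.normal(scale=0.3, size=(n_participants, n_traits))

pca = PCA(n_components=9)
scores = pca.fit_transform(ratings)                # per-participant factor scores
print("variance explained by nine components:",
      round(float(pca.explained_variance_ratio_.sum()), 2))
```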

Check also A considerable proportion of people in postindustrial societies experience difficulties in intimate relationships and spend considerable time being single:
The Association Between Mating Performance, Marital Status, and the Length of Singlehood: Evidence From Greece and China. Menelaos Apostolou, Yan Wang. Evolutionary Psychology, November 13, 2019. https://www.bipartisanalliance.com/2019/11/a-considerable-proportion-of-people-in.html

Search for meaning is positively associated with presence of meaning only for those with greater maladaptive traits; & the search for meaning in adverse circumstances appears to be more effective than in benign conditions

Is the Search for Meaning Related to the Presence of Meaning? Moderators of the Longitudinal Relationship. Steven Tsun-Wai Chu & Helene Hoi-Lam Fung. Journal of Happiness Studies, January 30 2020. https://link.springer.com/article/10.1007/s10902-020-00222-y

Abstract: Meaning in life is an important element of psychological well-being. Intuitively, the search for meaning is associated with greater presence of meaning, but whether the relationship exists is met with mixed findings in the literature. The present studies aim to investigate the moderators of this relationship. Two studies, a one-month longitudinal study (N = 166, retention rate = 100%) and a six-month longitudinal study (N = 181, retention rate = 83%), were carried out. Participants completed measures on meaning in life, personality variables, and psychological needs in the baseline survey, and meaning in life in the follow-up survey. Multiple regression analysis showed that optimism, BIS, and psychological needs emerged as significant moderators of the longitudinal relationship. Search for meaning at baseline was positively associated with presence of meaning at follow-up only for those with greater maladaptive traits. The search for meaning in adverse circumstances appears to be more effective than in benign conditions. Deficiency search is functional.
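The moderation finding (search for meaning predicting later presence of meaning only at higher levels of maladaptive traits) is the kind of result usually probed with an interaction term and simple slopes. A rough sketch on simulated data, with invented variable names, is given below; it is not the authors' analysis.

```python
# Moderated regression with simple slopes, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 181  # Study 2 sample size from the abstract
search = rng.normal(size=n)        # baseline search for meaning
maladaptive = rng.normal(size=n)   # hypothetical moderator score
presence = (0.05 * search + 0.10 * maladaptive
            + 0.25 * search * maladaptive + rng.normal(size=n))

df = pd.DataFrame({"presence": presence, "search": search, "mal": maladaptive})
fit = smf.ols("presence ~ search * mal", data=df).fit()

b_search, b_int = fit.params["search"], fit.params["search:mal"]
for level in (-1, 1):  # simple slope of search at -1 SD and +1 SD of the moderator
    print(f"slope at moderator = {level:+d} SD:", round(b_search + b_int * level, 2))
```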



Drug makers feel burned: By the time the vaccine was ready—after the peak of the outbreak—public fear of the new flu had subsided; many people didn’t want the vaccine, and some countries refused to take their full orders

From 2018... Who will answer the call in the next outbreak? Drug makers feel burned by string of vaccine pleas. Helen Branswell. Stat News, January 11, 2018. https://www.statnews.com/2018/01/11/vaccines-drug-makers/

Excerpts:

Every few years an alarming disease launches a furious, out-of-the-blue attack on people, triggering a high-level emergency response. SARS. The H1N1 flu pandemic. West Nile and Zika. The nightmarish West African Ebola epidemic.

In nearly each case, major vaccine producers have risen to the challenge, setting aside their day-to-day profit-making activities to try to meet a pressing societal need. With each successive crisis, they have done so despite mounting concerns that the threat will dissipate and with it the demand for the vaccine they are racing to develop.

Now, manufacturers are expressing concern about their ability to afford these costly disruptions to their profit-seeking operations. As a result, when the bat-signal next flares against the night sky, there may not be anyone to respond.

...

Drug makers “have very clearly articulated that … the current way of approaching this — to call them during an emergency and demand that they do this and that they reallocate resources, disrupt their daily operations in order to respond to these events — is completely unsustainable,” said Richard Hatchett, CEO of CEPI, an organization set up after the Ebola crisis to fund early-stage development of vaccines to protect against emerging disease threats.

...

Nearly all the major pharmaceutical companies that work on these vaccines have found themselves holding the bag after at least one of these outbreaks.

GSK stepped up during the Ebola crisis, but has since essentially shelved the experimental vaccine it once raced to try to test and license. Two other vaccines — Merck’s and one being developed by Janssen, the vaccines division of Johnson & Johnson — are still slowly wending their ways through difficult and costly development processes. Neither company harbors any hope of earning back in sales the money it spent on development.

A number of flu vaccine manufacturers were left on the hook with ordered but unpaid for vaccine during the mild 2009 H1N1 flu pandemic. By the time the vaccine was ready — after the peak of the outbreak — public fear of the new flu had subsided. Many people didn’t want the vaccine, and some countries refused to take their full orders. GSK, Sanofi Pasteur, and Novartis — which has since shed its vaccines operation — produced flu vaccine in that pandemic.

Dr. Rip Ballou, who heads the U.S. research and development center for GSK Global Vaccines, told STAT it’s not in the “company’s DNA” to say “no” to pleas to respond to appeals in an emergency. But the way it has responded in the past is no longer tenable.

“We do not want to have these activities compete with in-house programs,” said Ballou. “And our learnings from Ebola, from pandemic flu, from SARS previously, is that it’s very disruptive and that’s not the way that we want to do business going forward.”

GSK has proposed using a facility it has in Rockville, Md., as a production plant for vaccines needed in emergencies, but the funding commitments that would be needed to turn that idea into reality haven’t materialized.

...

Sanofi Pasteur has also taken several enormous hits in the successive rounds of disease emergency responses. In the early 2000s, the company worked on a West Nile virus vaccine. Though the disease still causes hundreds of cases of severe illness in the U.S. every year and is estimated to have been responsible for over 2,000 deaths from 1999 to 2016, public fear abated, taking with it the prospects for sales of a vaccine. Sanofi eventually pulled the plug.

...

At the same time, the company bore the brunt of a barrage of criticism for not publicly committing to a low-price guarantee for developing countries. Facing horrible PR and no sales prospects, Sanofi announced late last summer that it was out.

...

In an emergency, regulatory agencies may be willing to bend some rules. But once the crisis subsides, they revert to normal operating procedures — as Merck has found out as it tries to persuade regulators to accept data from an innovative ring-vaccination trial conducted on its Ebola vaccine.

“This is sort of a human nature problem. People pay attention to the burning house, and maybe not the one that’s got bad wiring, right, that’s down the street,” Shiver said.

Predictive Pattern Classification Can Distinguish Gender Identity Subtypes (the subjective perception of oneself belonging to a certain gender) from Behavior and Brain Imaging

Predictive Pattern Classification Can Distinguish Gender Identity Subtypes from Behavior and Brain Imaging. Benjamin Clemens, Birgit Derntl, Elke Smith, Jessica Junger, Josef Neulen, Gianluca Mingoia, Frank Schneider, Ted Abel, Danilo Bzdok, Ute Habel. Cerebral Cortex, bhz272, January 29 2020, https://doi.org/10.1093/cercor/bhz272

Abstract: The exact neurobiological underpinnings of gender identity (i.e., the subjective perception of oneself belonging to a certain gender) still remain unknown. Combining both resting-state functional connectivity and behavioral data, we examined gender identity in cisgender and transgender persons using a data-driven machine learning strategy. Intrinsic functional connectivity and questionnaire data were obtained from cisgender (men/women) and transgender (trans men/trans women) individuals. Machine learning algorithms reliably detected gender identity with high prediction accuracy in each of the four groups based on connectivity signatures alone. The four normative gender groups were classified with accuracies ranging from 48% to 62% (exceeding chance level at 25%). These connectivity-based classification accuracies exceeded those obtained from a widely established behavioral instrument for gender identity. Using canonical correlation analyses, functional brain measurements and questionnaire data were then integrated to delineate nine canonical vectors (i.e., brain-gender axes), providing a multilevel window into the conventional sex dichotomy. Our dimensional gender perspective captures four distinguishable brain phenotypes for gender identity, advocating a biologically grounded reconceptualization of gender dimorphism. We hope to pave the way towards objective, data-driven diagnostic markers for gender identity and transgender, taking into account neurobiological and behavioral differences in an integrative modeling approach.

Keywords: fMRI, gender identity, machine learning, resting-state functional connectivity, transgender
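Schematically, the classification analysis amounts to training a multi-class classifier on connectivity features and comparing cross-validated accuracy against the 25% chance level for four groups. The sketch below uses random data with a weak injected group signal and a logistic-regression classifier as a stand-in; the authors' actual algorithm, feature set, and validation scheme may differ.

```python
# Four-group classification sketch on synthetic "connectivity" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_per_group, n_features = 30, 200
X, y = [], []
for group in range(4):  # men, women, trans men, trans women (order arbitrary here)
    pattern = rng.normal(scale=0.4, size=n_features)   # group-specific signal
    X.append(pattern + rng.normal(size=(n_per_group, n_features)))
    y += [group] * n_per_group
X, y = np.vstack(X), np.array(y)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f} (chance level = 0.25)")
```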

Thursday, January 30, 2020

The youngest students in a class are less satisfied with their life, have worse general health, more frequent psychosomatic complaints and are more likely overweight

Younger, Dissatisfied, and Unhealthy - Relative Age in Adolescence. L. Fumarco, S. Baert, F. Sarracino. Economics & Human Biology, January 30 2020, 100858. https://doi.org/10.1016/j.ehb.2020.100858

Highlights
•    The youngest students in a class are less satisfied with their life.
•    They have worse general health.
•    They have more frequent psychosomatic complaints and are more likely overweight.

Abstract: We investigate whether relative age (i.e. the age gap between classmates) affects life satisfaction and health in adolescence. We analyse data on students between 10 and 17 years of age from the international survey ‘Health Behaviour in School-Aged Children’ and find robust evidence that a twelve-month increase in relative age (i.e. the hypothetical maximum age gap between classmates) i) increases life satisfaction by 0.168 standard deviations, ii) increases self-rated general health by 0.108 standard deviations, iii) decreases psychosomatic complaints by 0.072 standard deviations, and iv) decreases chances of being overweight by 2.4%. These effects are comparable in size to the effects of students’ household socio-economic status. Finally, gaps in life satisfaction are the only ones to reduce with the increase in absolute age, but only in countries where the first tracking of students occurs at 14 years of age or later.

Neanderthal genes might have helped Homo sapiens adjust to life beyond Africa, influencing skin pigmentation towards fairer skin & then increased life expectancy (at a cost of more skin cancer)

Women with fair phenotypes seem to confer a survival advantage in a low UV milieu. A nested matched case control study. Pelle G. Lindqvist et al. PLOS ONE, January 30, 2020. https://doi.org/10.1371/journal.pone.0228582

Abstract
Background Sun exposure in combination with skin pigmentation is the main determinant for vitamin D status. Human skin color seems to be adapted and optimized for regional sun ultraviolet (UV) intensity. However, we do not know if fair, UV-sensitive skin is a survival advantage in regions with low UV radiation.

Methods A population-based nested case–control study of 29,518 Caucasian women, ages 25 to 64 years from Southern Sweden who responded to a questionnaire regarding risk-factors for malignant melanoma in 1990 and followed for 25 years. For each fair woman, defined as having red hair or freckles (n = 11,993), a control was randomly selected from all non-fair women from within the cohort of similar age, smoking habits, education, marital status, income, and comorbidity, i.e., 11,993 pairs. The main outcome was the difference in all-cause mortality between fair and non-fair women in a low UV milieu, defined as living in Sweden and having low-to-moderate sun exposure habits. Secondary outcomes were mortality by sun exposure, and among those non-overweight.

Results In a low UV milieu, fair women were at a significantly lower all-cause mortality risk as compared to non-fair women (log rank test p = 0.04) with an 8% lower all-cause mortality rate (hazard ratio [HR] = 0.92, 95% CI 0.84‒1.0), including a 59% greater risk of dying from skin cancer among fair women (HR 1.59, 95% CI 1.26‒2.0). Thus, it seems that the beneficial health effects of low skin coloration outweigh the risk of skin cancer at high latitudes.

Conclusion In a region with low UV milieu, evolution seems to improve all-cause survival by selecting a fair skin phenotype, i.e., comprising fair women with a survival advantage.
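The headline statistics (a log-rank test and a Cox hazard ratio of 0.92 for fair vs. non-fair women) can be reproduced in form, though not in substance, with a short survival-analysis sketch on simulated data. It assumes the third-party lifelines package and invented column names, and it ignores the study's one-to-one matching.

```python
# Survival-analysis sketch: log-rank test and Cox model on simulated cohorts.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
n = 2000
fair = rng.integers(0, 2, size=n)              # 1 = fair phenotype, 0 = non-fair
# Exponential survival times; the fair group gets a true hazard ratio of ~0.92
rate = 0.02 * np.where(fair == 1, 0.92, 1.0)
time = rng.exponential(1 / rate)
event = (time < 25).astype(int)                # administrative censoring at 25 years
time = np.minimum(time, 25)

df = pd.DataFrame({"time": time, "event": event, "fair": fair})

lr = logrank_test(df.loc[df.fair == 1, "time"], df.loc[df.fair == 0, "time"],
                  event_observed_A=df.loc[df.fair == 1, "event"],
                  event_observed_B=df.loc[df.fair == 0, "event"])
print("log-rank p:", round(lr.p_value, 3))

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # 'fair' should land near 0.92, up to sampling noise
```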


Discussion

Women with a fair UV-sensitive phenotype living in a low UV milieu had a significantly increased life expectancy as compared to non-fair women. Fair women were at an eight percent lower all-cause mortality rate, as compared to those with non-fair skin. There was a strong inverse dose-dependent relationship between increasing sun-exposure habits and all-cause mortality.

Strengths and limitations

Our large sample, comprising 20% of all women in the south Swedish region between 25 and 64 years of age, drawn by random selection from the population registry at the study inception in 1990, is a strength. It was thus a representative sample of the South Swedish population at the time of recruitment, before the large immigration of the 2000’s. It comprises almost exclusively European Caucasian women. Thus, the comparison between fair and non-fair was mainly a comparison between Fitzpatrick type 1 skin vs. type 2‒3 skin. Since the questionnaire was administered at the inception of the study, there was no recall bias. Since we have earlier been criticized that our adjustments in Cox regression might not be adequate, we decided to perform a one-to-one matched design. Historically, during evolution there was no possibility to use a solarium or to travel for sunbathing. Therefore, we were predetermined to make the main outcome comparison in a low UV milieu, i.e., among those with low-to-moderate sun exposure habits. As a secondary outcome, we assessed mortality by sun exposure with adjustment for exercise or stratified for low BMI, only including the time period after year 2000. A major limitation is that the significance level of the lower risk of all-cause mortality among fair women was close to the 5% significance level in all analyses regarding skin type, but it was according to the predetermined hypothesis. Another strength is that the analyses from year 2000, including exercise habits and BMI, showed similar results, but with wider CIs. The results might not be generalizable to regions with more intense UV radiation. The aim of the study was not to assess cause-specific mortality. However, it is impossible to publish on beneficial effects of sun exposure without including data on skin cancer mortality. Thus, our study is in agreement with the large number of papers showing an increased incidence of skin cancer with fair skin, and we also showed increased mortality in skin cancer. Since fair skin is selected at high latitudes, an improved all-cause survival is also expected from an evolutionary perspective [2]. Frost and coworkers reported in an open internet-based study that red-haired women were particularly prone to ovarian, uterine, cervical, and colorectal cancer; our results could not reproduce these findings, and we did not find an increased incidence of these cancers among fair women in our study [15]. There has been somewhat conflicting evidence regarding sun exposure and all-cause mortality. The Swedish Women's Lifestyle and Health Study reported that increased sun exposure (measured as sunbathing holidays, one of our four questions) was related to reduced HRs for all-cause mortality [16]. On the other hand, a large US epidemiological study based on regional, not personal, UV radiation reported a positive relation between increasing UV radiation and all-cause mortality [17]. A possible explanation for the opposing results might be the differences in latitude and, therefore, UV intensity (Sweden: latitude 55° to 59°; continental US: latitude 24° to 42°). While the mean level of the biomarker vitamin D for sun exposure was 48.6 (± 20.5) nmol/L in Sweden, it was 77.0 (± 25.0) nmol/L in the US, indicating a greater problem with sun deficiency at high latitudes [9, 18]. Based on data from the Swedish Meteorological and Hydrological Institute (SMHI), in 2014 there was one day with strong UV exposure, i.e., UV-index ≥ 6.

Skin cancer mortality

We investigated whether the increased mortality associated with skin cancer influenced the strong inverse relationship between all-cause mortality and increasing sun exposure habits and found that this was not the case. Women with fair skin were at a 59% increased risk of death from skin cancer. This was counterbalanced by the health benefits, as measured by all-cause mortality, of fair skin and sun exposure. There is an increased risk of skin cancer with both fair skin and increasing sun exposure, but the prognosis of skin cancer seems to improve with increasing sun exposure [19, 20]. Thus, there seems to be a tradeoff between health benefits and skin cancer, and in regions with a scarcity of solar UV radiation fair skin has been selected [2]. In our modern society it is not unusual to see a mismatch between skin coloration and geography/climate/habits that might cause increased morbidity and mortality [2].

Sun exposure and overweight

Overweight and obese women do not seem to obtain the same benefit from having fair skin or from sun exposure as non-overweight women. We have seen similar findings in prior studies, where the lower risk of type 2 diabetes mellitus and endometrial cancer after UV exposure was mainly seen in non-overweight women [21, 22]. Wortsman and coworkers have clearly demonstrated that obesity has a detrimental effect on vitamin D levels for a given amount of UV exposure [23]. Thus, lower sun exposure habits among the overweight are not the cause. It appears that vitamin D is either produced in a smaller quantity or consumed/inactivated among overweight women. Further, a study using Mendelian randomization analysis showed that increasing BMI leads to lower vitamin D levels [24]. The differential impact of BMI by sun exposure on all-cause mortality is an area that would benefit from additional research. Since BMI seems to be in the causal pathway between sun exposure and all-cause mortality, we chose not to adjust for BMI and present only stratified analyses.
It has been hypothesized that interbreeding with Neanderthals some 47,000 to 65,000 years ago in northern Canaan might have helped Homo sapiens adjust to life beyond Africa [25–27]. Studies of the ancient Neanderthal genome have shown that Westerners carry approximately 1% to 3% of Neanderthal DNA [25, 26]. People of European origin are highly likely (≈ 60% to 70%) to have the Neanderthal DNA that affects keratin filaments, i.e., zinc finger protein basonuclin-2 (BNC2). The latter alleles are thought to be involved in the adaptive variation of skin coloration, influencing skin pigmentation towards fairer skin [6, 28]. With our finding of increased life expectancy with fair skin, we speculate that the preserved high carriership of the Neanderthal BNC2 allele might be an advantage at high latitudes.
We interpret our findings as supporting that a fair, UV-sensitive phenotype in Sweden is related to prolonged life expectancy in a low-UV milieu, but at the cost of an increased risk of death from skin cancer. Over thousands of years, a fair, UV-sensitive phenotype has possibly been selected for optimal health at high latitudes.

Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly & popular literature; we find no threshold beyond which greater IQ ceases to be beneficial

Brown, Matt, Jonathan Wai, and Christopher Chabris. 2020. “Can You Ever Be Too Smart for Your Own Good? Linear and Nonlinear Effects of Cognitive Ability.” PsyArXiv. January 30. https://psyarxiv.com/rpgea/

Abstract: Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly and popular literature. This divergence of beliefs among researchers, practitioners, and the general public presents an obstacle for evidence-based policy and decision-making in a variety of settings. One commonly held idea is that greater cognitive ability does not matter or is actually harmful beyond a certain point (sometimes stated as either 100 or 120 IQ points). We empirically test these notions using data from four longitudinal, representative cohort studies comprising a total of 48,558 participants in the U.S. and U.K. from 1957 to the present. We find that cognitive ability measured in youth has a positive association with most occupational, educational, health, and social outcomes later in life. Most effects were characterized by a moderate-to-strong linear trend or a practically null effect (mean R2 = .002 to .256). Although we detected several nonlinear effects, they were small in magnitude (mean incremental R2= .001). We found no support for any detrimental effects of cognitive ability and no evidence for a threshold beyond which greater scores cease to be beneficial. Thus, greater cognitive ability is generally advantageous—and virtually never detrimental.
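As a rough illustration of the "incremental R²" logic used in the abstract, the hedged sketch below (not the authors' analysis; the variable names and simulated data are invented) fits a linear model of an outcome on a cognitive score, then adds a quadratic term and reports how much additional variance the nonlinear term explains.

```python
# Hypothetical illustration of incremental R^2 from adding a quadratic term
# (not the paper's data or exact models).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, size=5000)             # simulated cognitive scores
outcome = 0.03 * iq + rng.normal(0, 1, 5000)    # simulated, purely linear outcome

X_lin = sm.add_constant(np.column_stack([iq]))
X_quad = sm.add_constant(np.column_stack([iq, iq**2]))

r2_lin = sm.OLS(outcome, X_lin).fit().rsquared
r2_quad = sm.OLS(outcome, X_quad).fit().rsquared

# A negligible difference mirrors the paper's "small incremental R^2" for nonlinear effects.
print(f"linear R^2 = {r2_lin:.3f}, incremental R^2 of quadratic term = {r2_quad - r2_lin:.4f}")
```

In the paper's terms, a sizable linear R² together with a near-zero incremental R² is what distinguishes "more is better" from any threshold or detrimental effect.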

The vast majority of dogs and cats were reported to remember past events; both species reportedly remembered single-occurrence events that happened years ago

Pet memoirs: The characteristics of event memories in cats and dogs, as reported by their owners. Amy Lewis, Dorthe Berntsen. Applied Animal Behaviour Science, Volume 222, January 2020, 104885. https://doi.org/10.1016/j.applanim.2019.104885

Highlights
• The vast majority of dogs and cats were reported to remember past events.
• Both species reportedly remembered single-occurrence events that happened years ago.
• The events were diverse and often involved an interaction with an animal or person.
• They were often recalled when current external stimuli overlapped with the memory.

Abstract: The case for episodic memory in non-human animals has been intensely debated. Although a variety of paradigms have shown elements of episodic memory in non-human animals, research has focused on rodents, birds and primates, using standardized experimental designs, limiting the types of events that can be investigated. Using a novel survey methodology to address memories in everyday life, we conducted two studies asking a total of 375 dog and cat owners if their pet had ever remembered an event, and if so, to report on their pet’s memory of the event. In both studies, cats and dogs were reported to remember a variety of events, with only 20% of owners reporting that their pet had never remembered an event. The reported events were often temporally specific and were remembered when commonalities (particularly location) occurred between the current environment and the remembered event, analogous to retrieval of involuntary memories in humans.

Keywords: Event memory, Episodic-like memory, Dog cognition, Cat cognition, Involuntary autobiographical memory

Domestic dogs respond correctly to verbal cues issued by an artificial agent; generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved

Domestic dogs respond correctly to verbal cues issued by an artificial agent. Nicky Shaw, Lisa M. Riley. Applied Animal Behaviour Science, January 30 2020, 104940. https://doi.org/10.1016/j.applanim.2020.104940

Highlights
• Domestic dogs can recall to an artificial agent and respond correctly to its pre-recorded, owner spoken verbal cues as reliably as to their owners in person and while alone in the test room.
• Generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved.
• No behavioural indicators of poor welfare were recorded during interaction with the agent directly.

Abstract: Human-canine communication technology for the home-alone domestic dog is in its infancy. Many criteria need to be fulfilled in order for successful communication to be achieved remotely via artificial agents. Notably, the dogs’ capacity for correct behavioural responses to unimodal verbal cues is of primary consideration. Previous studies of verbal cues given to dogs alone in the test room have revealed a deterioration in correct behavioural responses in the absence of a source of attentional focus and reward. The present study demonstrates the ability of domestic pet dogs to respond correctly to an artificial agent. Positioned at average human eye level to replicate typical human-dog interaction, the agent issues a recall sound followed by two pre-recorded, owner spoken verbal cues known to each dog, and dispenses food rewards for correct behavioural responses. The agent was used to elicit behavioural responses in three test conditions: owner and experimenter present; experimenter present; and dog alone in the test room. During the fourth (baseline) condition, the same cues were given in person by the owner of each dog. The experiments comprised a familiarisation phase followed by a test phase of the four conditions, using a counterbalanced design. Data recorded included latency to correct response, number of errors before a correct response was given, and behavioural welfare indicators during agent interaction. In all four conditions, at least 16/20 dogs performed the correct recall, cue 1 response, and cue 2 response sequence; there were no significant differences in the number of dogs who responded correctly to the sequence between the four conditions (p = 0.972). The order of test conditions had no effect on the dogs’ performances (p = 0.675). Significantly shorter response times were observed when cues were given in person than from the agent (p = 0.001). Behavioural indicators of poor welfare recorded were in response to owners leaving the test room, rather than as a direct result of agent interaction. Dogs left alone in the test room approached and responded correctly to verbal cues issued from an artificial agent, where rapid generalisation of learned behaviours and adjustment to the condition was achieved.

Keywords: Dog, Dog-human communication, Dog training, Unimodal verbal cues, Artificial agent, Welfare

The claims that people's confidence in their memory is weakly related to its accuracy and that false memories of fictitious childhood events can be easily implanted rest on shaky foundations: memory is malleable but essentially reliable

Regaining Consensus on the Reliability of Memory. Chris R. Brewin, Bernice Andrews, Laura Mickes. Current Directions in Psychological Science, January 30, 2020. https://doi.org/10.1177/0963721419898122

Abstract: In the last 20 years, the consensus about memory being essentially reliable has been neglected in favor of an emphasis on the malleability and unreliability of memory and on the public’s supposed unawareness of this. Three claims in particular have underpinned this popular perspective: that the confidence people have in their memory is weakly related to its accuracy, that false memories of fictitious childhood events can be easily implanted, and that the public wrongly sees memory as being like a video camera. New research has clarified that all three claims rest on shaky foundations, suggesting there is no reason to abandon the old consensus about memory being malleable but essentially reliable.

Keywords: false memory, memory accuracy, confidence, lay beliefs


Academic dishonesty—to cheat, fabricate, falsify, and plagiarize in an academic context—is positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, & honesty-humility

Plessen, Constantin Y., Marton L. Gyimesi, Bettina M. J. Kern, Tanja M. Fritz, Marcela Victoria Catalán Lorca, Martin Voracek, and Ulrich S. Tran. 2020. “Associations Between Academic Dishonesty and Personality: A Pre-registered Multilevel Meta-analysis.” PsyArXiv. January 30. doi:10.31234/osf.io/pav2f

Abstract: Academic dishonesty—the inclination to cheat, fabricate, falsify, and plagiarize in an academic context—is a highly prevalent problem with dire consequences for society. The present meta-analysis systematically examined associations between academic dishonesty and personality traits of the Big Five, the HEXACO model, Machiavellianism, narcissism, subclinical psychopathy, and the Dark Core. We provide an update and extension of the only meta-analysis on this topic by Giluk and Postlethwaite (2015), synthesizing in total 89 effect sizes from 50 studies—containing 38,189 participants from 23 countries. Multilevel meta-analytical modelling showed that academic dishonesty was positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, and honesty-humility. The moderate-to-high effect size heterogeneity—ranging from I2 = 57% to 91%—could only be partially explained by moderator analyses. The observed relationships appear robust with respect to publication bias and measurement error, and can be generalized to a surprisingly large scope (across sexes, continents, scales, and study quality). Future research needs to examine these associations with validated and more nuanced scales for academic dishonesty.
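For readers unfamiliar with the I² statistic quoted above, the standard Higgins–Thompson definition expresses heterogeneity as the share of total variability in effect sizes not attributable to sampling error; a common computational form, based on Cochran's Q across k studies (a general formula, not one specific to this meta-analysis), is:

```latex
% Standard Higgins-Thompson I^2 (general definition, not specific to this meta-analysis)
I^2 = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%
```

The abstract's range of 57% to 91% is what the authors describe as moderate-to-high heterogeneity.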



High emotion recognition ability may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information; also, high-ERA individuals do not appear to be happier with their lives


Inter- and Intrapersonal Downsides of Accurately Perceiving Others’ Emotions. Katja Schlegel. In: Social Intelligence and Nonverbal Communication pp 359-395, Jan 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_13

Abstract: The ability to accurately perceive others’ emotions from nonverbal cues (emotion recognition ability [ERA]) is typically conceptualized as an adaptive skill. Accordingly, many studies have found positive correlations between ERA and measures of social and professional success. This chapter, in contrast, examines whether high ERA can also have downsides, both for other people and for oneself. A literature review revealed little evidence that high-ERA individuals use their skill to hurt others. However, high ERA may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information. Furthermore, high-ERA individuals do not appear to be happier with their lives than low-ERA individuals. Overall, the advantages of high ERA outweigh the downsides, but many open questions regarding negative effects remain to be studied.

Keywords: Emotion recognition, Emotional intelligence, Well-being, Dark side, Interpersonal accuracy

Summary and Conclusion

The ability to accurately recognize others’ emotions from the face, voice, and body is typically considered to be an adaptive skill contributing to social and professional success. This has been supported by various studies (see Schmid Mast & Hall, 2018; Hall et al., 2009; Elfenbein et al., 2007, for reviews). Much less research has looked into the potential downsides or disadvantages of high ERA for oneself (i.e., for one’s well-being) and for others (i.e., by manipulating other people or hampering smooth interactions with others). The present chapter reviewed this research in non-clinical adults, specifically focusing on the following questions: Is there a “dark” side to high ERA in that people use it to hurt others? Can high ERA negatively affect the quality of relationships? Why is high ERA uncorrelated with psychological well-being? Finally, is there an optimal level of ERA?
Although more research is clearly needed to answer these questions with more confidence, the current state of the literature suggests that ERA is a double-edged sword that affects one’s well-being and social outcomes both positively and negatively. One common theme that emerged as a possible explanation for both positive and negative pathways is the heightened emotional awareness of or attunement to others’ feelings in persons with high ERA. Because high-ERA individuals are more perceptive of others’ positive and negative emotions, their own emotions also appear to be more affected by what is happening around them, contributing to various inter- and intrapersonal outcomes.
For instance, high-ERA individuals seem to be more prosocial and cooperative, maybe in order to perceive more positive emotions in others and to preserve their own psychological well-being. Heightened emotional awareness for others’ feelings can also explain the positive associations between ERA and social and workplace effectiveness found in many studies. On the other hand, “hyperawareness” in high-ERA individuals can inadvertently contribute to lower rapport, less favorable impressions in others, and lower relationship quality due to “eavesdropping” and the failure to show “motivated inaccuracy” when it might be adaptive. Because high emotional awareness appears to amplify the effects of perceived positive and negative emotions, in stable environments with only a few stressors the adaptive advantages of high ERA may outweigh the downsides. However, as adversity or instability increases, the higher proportion of perceived and experienced negative affect may contribute to lower well-being and the development of depressive symptoms. A higher tendency to suffer with others in distress might represent one possible mechanism negatively influencing psychological well-being.
Taken together, the various positive and negative pathways between high ERA and well-being as well as interpersonal relationships may explain why ERA does not appear to be positively correlated with well-being, although this had been found for emotional intelligence more broadly (e.g., Sánchez-Álvarez et al., 2015). One may speculate that other components of emotional intelligence, such as the ability to regulate one’s own negative emotions efficiently or the ability to manage others’ emotions, have fewer potential downsides than ERA with respect to one’s own well-being, although they may be more “useful” when it comes to manipulating others (e.g., Côté et al., 2011).
An interesting question is whether the terms “emotional hyperawareness” (e.g., Davis & Nichols, 2016) or “hypersensitivity” (Fiori & Ortony, 2016) are appropriate to describe high-ERA individuals. These terms are often used to describe an exaggerated, maladaptive reactivity of neurophysiological structures related to mental disorders (e.g., Frick et al., 2012; Neuner et al., 2010). In healthy individuals with high ERA, however, the elevated attunement to emotions might represent a more realistic and holistic view of the social world rather than a bias (Scherer, 2007). If this is the case, then the absence of a correlation between ERA and well-being or life satisfaction may also reflect that those high in ERA evaluate these constructs more realistically and thus more negatively, although they might be “happier” than others if different criteria were used. It may also be that high-ERA individuals, compared to low-ERA individuals, are relatively more satisfied with some life domains (e.g., friendships) and less satisfied with others (e.g., work), which may cancel each other out when global well-being or life satisfaction is considered.
The current literature can be expanded in several ways. In particular, more studies that examine the moderating effects of personality traits on the link between ERA and outcomes are needed. Traits related to the processing and regulation of emotions in oneself and others might moderate the effects of ERA not only on intrapersonal outcomes such as psychological well-being but also on interpersonal outcomes such as relationship quality. For example, it would be interesting to examine how ERA, empathic concern, and detachment interact in predicting stress, emotional exhaustion, or work engagement in helping professions. One can hypothesize that a high ability to detach oneself from stressful negative work experiences protects professionals who are highly perceptive of clients’ negative feelings and express empathic concern from negative effects on well-being. Other possible moderating variables include “positivity offset” (Ito & Cacioppo, 2005) and stable appraisal biases (Scherer, 2019). In addition, “dark” personality traits might moderate the effects on interpersonal behaviors such as deception, such that high ERA may, for example, amplify the effects of high Machiavellianism or trait exploitativeness (Konrath et al., 2014). Future studies should also look into curvilinear relationships to examine which levels of ERA are the most beneficial or detrimental for various outcomes and situations.
Furthermore, longitudinal studies may shed light on the causality underlying ERA and the development of psychological well-being over time as a function of a person’s environment. For example, it could be tested whether Wilson and Csikszentmihalyi’s (2007) finding that prosociality is beneficial in stable environments but detrimental in adverse ones also holds for ERA. Such studies would also allow investigating the causal pathways linking ERA and depressive symptoms, including testing the possibilities that dysphoria increases ERA (Harkness et al., 2005) and that ERA, due to a more realistic perception of the social world, makes people “wiser but sadder” (Scherer, 2007).
Many of the above conclusions rely on the assumption that high ERA relates to a higher attunement to emotions in our surroundings. However, only a few studies to date have examined this association. Fiori and Ortony (2016) and Freudenthaler and Neubauer (2007) pointed out that ability tests of emotional intelligence measure maximal performance and crystallized knowledge, but do not necessarily capture typical performance and more fluid emotion processing. More research is thus needed to corroborate the idea that being good at accurately labeling emotional expressions when one is explicitly instructed to do so is related to paying more attention to emotions in everyday life when an abundance of different types of information is available. Future research should involve the development of new standard tests tapping into typical performance regarding emotion perception. Future studies could also benefit from using methods such as portable eye tracking or experience sampling to be able to study more real-life situations. Finally, future studies may examine satisfaction in specific life domains as outcome measures of ERA in addition to general measures of well-being.
The current review also raises the question of whether available trainings for increasing ERA (see Blanch-Hartigan, Andrzejewski, & Hill, 2012, for a meta-analysis) are useful if high ERA can have detrimental effects. The answer may depend on what outcomes are considered. If an ERA training improves law enforcement officers’ job performance (Hurley, Anker, Frank, Matsumoto, & Hwang, 2014) or helps doctors to better understand their patients (Blanch-Hartigan, 2012), the answer would be that trainings are useful. When psychological well-being is considered as the outcome, stand-alone ERA trainings may not always be useful, for example, if a person is experiencing chronic stress or depressive symptoms. In these cases, it may be beneficial to combine an ERA training with a training targeted at the use of adaptive emotion regulation strategies to prevent potentially detrimental effects.
To conclude, I would like to emphasize that, overall, ERA should still be considered an adaptive and valuable skill, especially when effective interpersonal interactions in the workplace or close relationships are considered (e.g., reviews by Elfenbein et al., 2007; Schmid Mast & Hall, 2018). High-ERA individuals receive better ratings from others on various positive traits (e.g., socio-emotional competence) and report being more open, more conscientious, and more tolerant (Hall et al., 2009). The interpersonal downsides and “dark” aspects of high ERA in healthy adults discussed in the present chapter seem to be limited to relatively specific situations or ERA profiles, although more research is needed. With respect to psychological well-being, however, the picture seems to be more nuanced, implying both positive and negative pathways that may be more or less influential based on a person’s life situation and personality traits. More sophisticated study designs, novel data collection methods, and more complex statistical analyses can help us better understand these mechanisms.

Although identifying deception is important, we are quite bad at it; instead, we developed the equivalent of an intelligence network that passes along information and evidence, rendering the need for an individual lie detector moot

Nonverbal Communication: Evolution and Today. Mark G. Frank, Anne Solbu. In: Social Intelligence and Nonverbal Communication pp 119-162, January 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_5

Abstract: One aspect of social intelligence is the ability to identify when others are being deceptive. It would seem that individuals who were bestowed with such an ability to recognize honest signals of emotion, particularly when attempts to suppress them are made, would have a reproductive advantage over others without it. Yet the research literature suggests that on average people are good at detecting only overt manifestations of these signals. We argue instead that our evolution as a social species living in groups permitted discovery of deceptive incidents due to the factual evidence of the deception transmitted verbally through social connections. Thus the same principles that pressed for our evolution as a cooperative social species enabled us to develop the equivalent of an intelligence network that would pass along information and evidence, thus rendering a press for an individual lie detector moot.

Keywords: Deception detection, Evolution, Emotions, Behavioral signals, Social life

Conclusion
Taken together, it is clear that there are strong signals for various emotions and intentions and a strong rationale for why these signals would be ‘engineered’ to solve a recurrent problem. And despite being wired to detect these signals, humans are poor detectors of these signals once they become subtle through efforts to conceal them. Yet this ability to spot these dishonest and/or subtle versions of the signals would seem to be of great benefit to any given individual in his or her quest to survive and pass on his or her genes to the next generation. This sense that evolution did not bestow our species with these internal event detectors seems puzzling, until we unpack some of the social structures of the ancient world. It seems the cooperative structures, and the (at least initially) few opportunities to ‘cheat’, may often have allowed, in essence, an intelligence network to be developed where pejorative information could be passed along easily and cheaply to identify any particular cheater. Thus, the evolution of cooperative behavior was the key to lie-catching. It seems logical that there would be no strong independent press to develop internal cheater detectors, when a strong social network would do the job at a greatly reduced cost (Smith, 2010).
Importantly, lie detection in the laboratory or in single case studies does not fully translate to the real world, where gossip and relationships with others matter (Haidt, 2001). People rely on gossip, even when accuracy may be limited (Sommerfeld, Krambeck, & Milinski, 2008); it may nevertheless actually improve lie detection (Klein & Epley, 2015). Moreover, it is through the influence of others that we may decide to override our tendency to cooperate (Bear & Rand, 2016) and employ conscious deliberation to make our decisions (Haidt, 2001). The alignment of emotions through empathy, and increased goal sharing (Tomasello, Carpenter, Call, Behne, & Moll, 2005), as evidenced by the #MeToo movement (Rodino-Colocino, 2018), gave rise to the same powerful group thinking and sociality as seen in the emergence of human morality (Jensen, Vaish, & Schmidt, 2014). Haidt (2001) states, “A group of judges independently seeking truth is unlikely to reach an effective consensus, but a group of people linked together in a large web of mutual influence may eventually settle into a stable configuration” (p. 826). This becomes, functionally, a long-range radar type system that has agents reporting back actions, behaviors, and relationships to each other, which in turn sets the groundwork for recognizing inconsistencies regarding people not being where they say they are, people being with people they deny knowing, and so forth. The presence of this communication network would reduce the need to make individuals hyper-vigilant in every interaction, or to individually develop super-acute deception detection skills. Likewise, unusual interpersonal behaviors can trigger individuals to search for evidence to verify their hypotheses about someone’s veracity, and they can then activate their social networks to verify the information provided by the unusually behaving person (Novotny et al., 2018). These networks are not just passive providers of information. Thus, the socially intelligent person is the one who has the best access to the collective intelligence—and likely the most friends, as believed by the Ugandans (Wober, 1974). We believe the research literature has neglected this larger system in which our social structures exist, which often detects the deception for us. Even as our society expands, social media and movements like #MeToo have become like the global village, where previously unacquainted individuals can now verify the truth or falsity of each other’s claims, thus (hopefully) betraying the attempted liar.

Wednesday, January 29, 2020

Does Attractiveness Lead to or Follow From Occupational Success? Findings From German Associational Football

Does Attractiveness Lead to or Follow From Occupational Success? Findings From German Associational Football. Henk Erik Meier, Michael Mutz. SAGE Open, January 29, 2020. https://doi.org/10.1177/2158244020903413

Abstract: Prior research has provided evidence that attractiveness is associated with work-related advantages. It is less clear, however, whether attractiveness is an antecedent or a consequence of professional success. To answer this question, associational football in Germany is used as an exemplifying case. Portrait pictures of German football players were retrieved, one picture from a very early career stage and one from a very late one. Attractiveness of these portraits was assessed by the “truth of consensus” method. Panel regression models are applied to analyze changes in attractiveness and relate these changes to professional success. Findings show that success as a footballer cannot be predicted with attractiveness at early career stages. Instead, the increase of attractiveness over time is more pronounced among very successful players. It is thus concluded that successful individuals are not more attractive in the very beginning, but improve their appearance throughout their careers.

Keywords: attractiveness, beauty, appearance, professional success, football
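To illustrate the kind of within-player panel analysis the abstract describes, here is a hedged, minimal sketch using Python's statsmodels formula API. It is not the authors' code: the toy data frame, the column names (`player`, `period`, `attractiveness`, `matches_played`), and the simple two-way fixed-effects specification are all illustrative assumptions.

```python
# Hypothetical sketch of a two-way fixed-effects panel regression relating
# within-player changes in rated attractiveness to success.
# Not the authors' data or exact specification.
import pandas as pd
import statsmodels.formula.api as smf

# One row per player and career stage; attractiveness stands in for the mean
# of several raters' scores (a "truth of consensus"-style aggregate).
df = pd.DataFrame({
    "player":         ["A", "A", "B", "B", "C", "C", "D", "D"],
    "period":         ["early", "late"] * 4,
    "attractiveness": [3.1, 3.4, 2.8, 3.6, 3.0, 3.1, 2.9, 3.8],
    "matches_played": [20, 180, 15, 260, 25, 60, 10, 310],
})

# Player fixed effects absorb stable traits; period fixed effects absorb common
# ageing/photo-quality trends, so the coefficient on matches_played asks whether
# more successful players gain more in rated attractiveness over time.
model = smf.ols("attractiveness ~ matches_played + C(player) + C(period)", data=df)
print(model.fit().summary())
```

The fixed-effects setup is what lets a longitudinal design separate "attractive players become successful" from "successful players become more attractive", which a simple cross-sectional correlation cannot do.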


The role of physical attractiveness in job-related interactions and outcomes is intensely debated. Previous research has pointed to the existence of a beauty premium in the labor market, but scholars have recently emphasized that the causal mechanisms behind this beauty effect are not completely understood. The objective of this study was to provide some clues about the direction of the dependency: whether attractiveness leads to or follows from success. The first notion, that attractiveness fosters professional success in associational football, was clearly rejected (H1). At the same time, it was shown that more successful football players markedly improve their physical appearance over time, lending support to the second idea, that attractiveness follows from success (H2). It can thus be concluded from the findings that attractiveness is less an antecedent and more a consequence of success. Beauty is not a stable characteristic of a football player, but something modified by “beauty work.”
Large cross-sectional studies on football in Germany had shown that attractiveness and success are correlated (Rosar et al., 2010, 2013, 2017). In interpreting this association, it was claimed that coaches may give attractive footballers an advantage in fielding decisions, which may help attractive players become successful. In particular, the interpretation that coaches favor more attractive players was put forward by Rosar and colleagues (2017). However, bearing in mind that football is one of the few professional domains where attractiveness has practically no relevance as a productivity factor, this interpretation comes as a surprise. Our results lend more support to the notion that players who are fielded more often (and are thus more often in the public spotlight) invest more in their beauty. Although this needs to be tested more explicitly in future research (including measures of grooming), the findings presented here suggest that the beauty premium in sport is probably more accurately interpreted as a by-product of beauty work and not as a form of discrimination against less attractive players.
If this line of reasoning is correct, it is still unclear what motivates this beauty work. On one hand, professional athletes are offered huge financial rewards for attractiveness and popularity, because these qualities are valued by the media and the sport industry. For an athlete, beauty work can thus be a form of strategic investment to reach a broader public beyond the narrow scope of regular football fans and, in doing so, increase his endorser qualities. David Beckham or Cristiano Ronaldo may be considered textbook examples of this strategy (Coad, 2005). In the form of sponsorship and marketing deals, beauty work may thus pay off for athletes and lead to higher revenues. However, Hamermesh et al. (2002) have also contested the idea that additional earnings due to investments in physical appearance recover their costs (e.g., for clothing and cosmetics); that study, though, was not conducted in the realm of professional sport and may not hold true in this particular context. On the other hand, beauty work need not represent an investment strategy, but may simply be a form of “conspicuous consumption” (Veblen, 1899/2007). Conspicuous consumption refers to the acquisition of luxury goods, including expensive clothing, to publicly demonstrate wealth and a high social status. In this line of interpretation, the “returns” of beauty work are not monetary but symbolic, aiming at distinction and prestige. Moreover, it has also been claimed that showy spending increases sex appeal among men (Sundie et al., 2011). Hence, beauty work among high-class football players, who stand in the limelight of a huge TV audience each weekend, may simply represent a form of impression management to showcase oneself in a positive way and generate symbolic capital.
This finding comes with strong implications for future research on the role of physical attractiveness in professional sport: Future research has to go beyond correlational analysis and needs to employ longitudinal research designs to be able to discriminate between different mechanisms at stake. Simple correlational analysis does not suffice for making conclusive inferences on the impact of attractiveness on football players’ careers. Moreover, as the current study leaves unclear why successful football players improve their physical appearance, future research should address beauty work and its financial and symbolic returns.
One limitation of this study is that it measured beauty solely on the basis of facial attractiveness. According to Hakim (2010), beauty, sexual attractiveness, physical fitness, liveliness, charm, and style are distinctive features that can make a person attractive to others. Although some of these characteristics are hard to measure, as they are not assessable from pictures (e.g., charm) or change quickly (e.g., style), it should be kept in mind that this study (like many previous studies) reduces beauty to facial attractiveness while ignoring other (body) characteristics. Moreover, as an alternative to the “truth of consensus” rating method, scholars have suggested a software-based approach analyzing facial geometry, for instance horizontal symmetry, the ratio of nose to ear length, or the ratio of face width to face height (Hoegele et al., 2015). This is a promising approach, and future studies would do well to integrate rater-based as well as software-based methods for assessing facial attractiveness. Finally, this study focused solely on male athletes, so it remains uncertain whether these findings would also hold for female athletes. Previous studies on attractiveness and occupational success found stronger effects for women compared with men (Jæger, 2011). Similar findings were reported for female professional tennis players, whose popularity is driven much more by their attractiveness than is the case for male players (Konjer et al., 2019). However, in view of the fact that women’s football is less professionalized and commercialized in Germany (e.g., with regard to media coverage, salary levels, or endorsement deals), the incentives to invest in beauty and appearance may not be as high as in men’s football. Hence, replications of this study in women’s football, in other fields of professional sport, and in different domains of the entertainment industry would be helpful to assess whether the findings presented here are generalizable or an expression of peculiarities of European associational men’s football.