Saturday, May 2, 2020

Individual differences in trust evaluations are shaped mostly by personal learning, not genes

Individual differences in trust evaluations are shaped mostly by environments, not genes. Clare A. M. Sutherland et al. Proceedings of the National Academy of Sciences, April 27, 2020. https://doi.org/10.1073/pnas.1920131117

Significance: Rapid impressions of trustworthiness can have extreme consequences, impacting financial lending, partner selection, and death-penalty sentencing decisions. But to what extent do people disagree about who looks trustworthy, and why? Here, we demonstrate that individual differences in trustworthiness and other impressions are substantial and stable, agreeing with the classic idea that social perception can be influenced in part by the “eye of the beholder.” Moreover, by examining twins, we show that individual differences in impressions of trustworthiness are shaped mostly by personal experiences, instead of genes or familial experiences. Our study highlights individual social learning as a key mechanism by which we individually come to trust others, with potentially profound consequences for everyday trust decisions.

Abstract: People evaluate a stranger’s trustworthiness from their facial features in a fraction of a second, despite common advice “not to judge a book by its cover.” Evaluations of trustworthiness have critical and widespread social impact, predicting financial lending, mate selection, and even criminal justice outcomes. Consequently, understanding how people perceive trustworthiness from faces has been a major focus of scientific inquiry, and detailed models explain how consensus impressions of trustworthiness are driven by facial attributes. However, facial impression models do not consider variation between observers. Here, we develop a sensitive test of trustworthiness evaluation and use it to document substantial, stable individual differences in trustworthiness impressions. Via a twin study, we show that these individual differences are largely shaped by variation in personal experience, rather than genes or shared environments. Finally, using multivariate twin modeling, we show that variation in trustworthiness evaluation is specific, dissociating from other key facial evaluations of dominance and attractiveness. Our finding that variation in facial trustworthiness evaluation is driven mostly by personal experience represents a rare example of a core social perceptual capacity being predominantly shaped by a person’s unique environment. Notably, it stands in sharp contrast to variation in facial recognition ability, which is driven mostly by genes. Our study provides insights into the development of the social brain, offers a different perspective on disagreement in trust in wider society, and motivates new research into the origins and potential malleability of face evaluation, a critical aspect of human social cognition.

Keywords: trust, face evaluation, first impressions, behavioral genetics, classical twin design

Discussion
Here, we find large and stable individual variation in key facial evaluations of trustworthiness, dominance, and attractiveness, consistent with the classic idea that these visual judgments can be shaped by “the eye of the beholder.” Using a twin study, we show that this variation in facial evaluation is largely shaped by people’s personal experiences, rather than by genetic factors or shared environments. Highlighting the scope of personal experience to affect trust offers a different perspective on the fundamental basis, nature, and origin of individual trust and on our capacity to change whom we trust, for good or for ill. As our lives are increasingly affected by highly personalized social experiences, especially online (12), our findings suggest that disagreements about whom we trust are also likely to increase.
Notably, our finding that variation in facial evaluation is driven by personal environments stands in sharp contrast to variation in facial recognition ability, which is almost entirely genetically driven (25). Multivariate modeling showed that the environmental factors driving individual differences in trustworthiness, dominance, and attractiveness evaluations were also largely independent. This pattern suggests that individual differences in impression formation are based on different experiences, and largely not based on overall or general familiarity, typicality, or overall statistical learning (20–22). Instead, our results are supportive of social learning theories, whereby unique social encounters shape individual associations between facial cues and associated traits (35, 36), or could also motivate new statistical learning theories which can account for the social context. Our results shed light on a core aspect of human social perception and indicate a remarkable diversity in the architecture of individual variation across different components of face processing.
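The logic of the classical twin design behind these conclusions can be sketched numerically. The sketch below uses Falconer's quick approximation, not the paper's full multivariate twin model, and the twin correlations are illustrative values, not the study's data:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer estimates of variance components from twin correlations.

    a2: additive genetic variance (heritability), 2 * (rMZ - rDZ)
    c2: shared (familial) environment, rMZ - a2
    e2: unique environment plus measurement error, 1 - rMZ
    """
    a2 = 2 * (r_mz - r_dz)
    c2 = r_mz - a2
    e2 = 1 - r_mz
    return a2, c2, e2

# Illustrative pattern consistent with "mostly personal experience":
# identical (MZ) twins agree only slightly more than fraternal (DZ) twins.
a2, c2, e2 = falconer_ace(r_mz=0.25, r_dz=0.20)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.1 0.15 0.75
```

When rMZ and rDZ are both modest and close together, the unique-environment component E dominates, which is the signature of a trait shaped mostly by personal experience rather than genes or shared family environment.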
As well as revealing the etiology of individual differences in trustworthiness and dominance evaluation, our results replicate and extend a behavioral genetics study of individual aesthetic judgments, which also found that individual differences in judgments of facial attractiveness are driven by people’s personal experiences (18). Our current study used a new, more diverse (e.g., in age) and more naturalistic sample of faces. This demonstration of generalizability is especially critical here because the faces used will strongly affect the types of facial cues people can use to judge attractiveness and, consequently, the individual differences available (9, 27).
Interestingly, our results do not necessarily imply that familial environment is unimportant even though the shared environment was not a major contributing factor. Siblings, including twins, can have remarkably unique familial environments (reviewed in ref. 29). For example, maternal affection can be very different even across identical twin pairs (29). Early caregiver or familial social experiences could therefore still influence unique mappings of facial cues to impressions.
Finally, it is important to be clear that our findings about individual differences do not argue against the claim that facial impressions of trustworthiness are adaptive, as suggested by leading facial impression theories (5, 7, 26). Major evolutionary models of impressions have been based on consensus impressions (see ref. 17 for a review) whereas twin studies are concerned with individual variation. Facial cues that are critical for survival or successful reproduction may in fact be particularly strongly selected for, leading to consensus across individual perceivers. Indeed, consensus impressions, particularly of trustworthiness, are remarkably similar across cultural contexts, although there may be cultural “dialects” in impressions (37–39).
Our results suggest that a priority for future research should be to understand the development of social evaluation of faces. In particular, it will be critical to discover the developmental drivers of individual differences in face impressions, rather than focusing on potential genetic influences. We know little about how early in development these individual differences occur or which kinds of experiences are most consequential. One suggestion, based on our current findings, is that individual interactions with strangers, peers, and caregivers will be especially critical. A key methodological contribution of the current work is to provide a set of reliable tests of individual variation in trust and other impressions, which will benefit developmental and other research into individual differences in facial impression formation. As individual differences in facial impressions and identity recognition show distinctive etiologies, the perceptual and neural mechanisms driving variation in facial impressions will likely differ from those discovered so far in face recognition (reviewed in ref. 25). In terms of perceptual mechanisms, little is known about which facial features drive idiosyncratic impressions, although a wealth of research has illustrated which facial features underlie consensus impressions (e.g., smiling, femininity, and raised eyebrow height are generally perceived as trustworthy) (6, 7). Idiosyncratic impressions could result from individually specific weighting of the same features that drive consensus trustworthiness impressions, as well as from associations of additional features with trust or mistrust. Indeed, different facial features are likely to drive trustworthiness variation for different people, depending on their personal experiences (for example, one person may rely heavily on emotional expression to judge trustworthiness whereas another person relies on gender).
Regarding neural mechanisms, plausible candidate neural regions driving individual impressions include the amygdala and caudate, which encode associative facial trust learning at the participant group level (23). Finally, the importance of individual experience, highlighted by our findings, motivates research to determine the long-term malleability of facial evaluations. This research aim is particularly critical, given the potential for these impressions to bias important social decisions, from online dating to courtroom sentencing (3, 17).
To conclude, we provide compelling evidence for substantial individual differences in impression formation and show that these differences are largely driven by unique personal environments, not genes (or shared environment). We also provide reliable tests of individual differences in impression formation. Our findings will speak to any scientist, philosopher, journalist, artist, or curious person who wonders why we judge a book by its cover, to what extent impressions lie in the eye of the beholder, and how our experiences with family, friends, partners, or the media might shape how we view the world.

People see themselves as better than average in many domains, from leadership skills to driving ability; exception is remembering names, when they rate themselves as approx. the same as others their age

Hargis, M. B., Whatley, M. C., & Castel, A. D. (2020). Remembering proper names as a potential exception to the better-than-average effect in younger and older adults. Psychology and Aging, May 2020. https://doi.org/10.1037/pag0000472

Abstract: People see themselves as better than average in many domains, from leadership skills to driving ability. However, many people—especially older adults—struggle to remember others’ names, and many of us are aware of this struggle. Our beliefs about our memory for names may be different from other information; perhaps forgetting names is particularly salient. We asked younger and older adults to rate themselves compared with others their age on several socially desirable traits (e.g., honesty); their overall memory ability; and their specific ability to remember scientific terms, locations, and people’s names. Participants demonstrated a better-than-average (BTA) effect in their ratings of most items except their ability to remember names, which both groups rated as approximately the same as others their age. Older adults’ ratings of this ability were related to a measure of the social consequences of forgetting another’s name, but younger adults’ ratings were not. The BTA effect is present in many judgments for both younger and older adults, but people may be more attuned to memory failures when those failures involve social consequences.


Participants refrained from bullshitting only when they possessed adequate self-regulatory resources and expected to be held accountable for their communicative contributions

Self-Regulatory Aspects of Bullshitting and Bullshit Detection. John V. Petrocelli, Haley F. Watson, and Edward R. Hirt. Social Psychology, April 30, 2020. https://doi.org/10.1027/1864-9335/a000412

Abstract: Two experiments investigate the role of self-regulatory resources in bullshitting behavior (i.e., communicating with little to no regard for evidence, established knowledge, or truth; Frankfurt, 1986; Petrocelli, 2018a), and receptivity and sensitivity to bullshit. It is hypothesized that evidence-based communication and bullshit detection require motivation and considerably greater self-regulatory resources relative to bullshitting and insensitivity to bullshit. In Experiment 1 (N = 210) and Experiment 2 (N = 214), participants refrained from bullshitting only when they possessed adequate self-regulatory resources and expected to be held accountable for their communicative contributions. Results of both experiments also suggest that people are more receptive to bullshit, and less sensitive to detecting bullshit, under conditions in which they possess relatively few self-regulatory resources.

Keywords: accountability, bullshit, bullshitting, bullshit detection, self-regulation, self-regulatory resources

Friday, May 1, 2020

Understanding Brain Death

Understanding Brain Death. Robert D. Truog, Erin Talati Paquette, Robert C. Tasker. JAMA, May 1, 2020. doi:10.1001/jama.2020.3593

The concept of brain death, or the determination of death by neurological criteria, was first proposed by a Harvard committee in the United States in 1968,1 and then adopted into the Uniform Determination of Death Act (UDDA) in 1981.2 Although the UDDA was widely accepted and endorsed by medical professional organizations, in recent years the concept has come under greater scrutiny and is increasingly the focus of legal challenges. Most urgent is that the current diagnostic standards do not satisfy the wording of the law. The UDDA defines brain death as the “irreversible cessation of all functions of the entire brain.” Yet, it is now widely acknowledged that some patients who meet the current diagnostic standards may retain brain functions that are not included in the required tests, including hypothalamic functioning.3 Until the UDDA is revised to be more specific about which functions must be lost to satisfy the definition (such as, for example, consciousness and the capacity to breathe), current medical practice will not be in alignment with the legal standard.

Fixing this problem will require resolution of a longstanding debate about what brain death actually means. Beecher,4 the chair of the 1968 Harvard committee, clearly thought that brain death was a new and distinct definition of death, different from biological death. He wrote that “when consciousness is permanently lost… this is the ‘moment’ of death.”4 But in 1981, the authors of the UDDA completely rejected this view in proposing both a cardiorespiratory and a neurological standard for determining death, insisting that “the use of two standards in a statute should not be permitted to obscure the fact that death is a unitary phenomenon.”2(p7) To support this position, the UDDA authors pointed to evidence that the brain is the master integrator of the body’s functions, such that once the brain is severely damaged, bodily functions deteriorate, with cardiac arrest and biological death invariably following the injury within several days. This unified view has continued to be the position of most experts, with one asserting that “Globally, [physicians] now invariably equate brain death with death and do not distinguish it biologically from cardiac arrest.”5

In recent years, this view has been challenged by multiple reports of cases of prolonged biological survival in patients who meet criteria for brain death. One well-known case is that of Jahi McMath, a teenaged girl who survived biologically for almost 5 years after being diagnosed as brain dead following surgery at age 13 years. During most of this time, she was cared for at home, continuing to grow and develop, along with the onset of menarche. In another case, a boy diagnosed as brain dead from meningitis at age 4 years survived biologically for more than 20 years. At autopsy, his brain was completely calcified, with no identifiable neural tissue, either grossly or microscopically. Recently, a woman was found to be 9 weeks pregnant when she was diagnosed as brain dead at age 28 years; she was maintained for several months until she delivered a healthy baby followed by donation of multiple organs.

The relative rarity of these cases is because brain death is typically a self-fulfilling prophecy; biological death usually quickly follows the diagnosis, either from organ donation or ventilator withdrawal. But in cases for which organ support is continued, as when a brain-dead woman is pregnant or when a court order requires physicians to continue treatment, prolonged biological survival may occur. As counterintuitive as it may seem, when functions such as breathing and nutrition are medically supported, the brain is not essential for maintaining biological integration and functioning.

If brain death is neither the absence of all brain function nor the biological death of the person, then what is it? Current tests for determining brain death focus on establishing 3 criteria: unconsciousness, apnea, and irreversibility of these 2 states. First, unconsciousness is diagnosed by demonstration of the absence of response to painful stimuli and absence of brainstem reflexes. While individual brainstem reflexes are irrelevant to whether the patient is alive or dead (for example, people can live normal lives with nonresponsive pupils), demonstrating that the brainstem is nonfunctional is an indirect way of inferring that the reticular activating system is nonfunctional. This neural network in the brainstem is essential for maintaining wakefulness, and thereby is a necessary substrate for consciousness. Second, apnea is diagnosed by removing patients from the ventilator for several minutes and demonstrating that they make no effort to breathe despite a high level of carbon dioxide in the blood. Third, irreversibility is assumed if the cause of the injury is known, no reversible causes can be identified, and the patient’s condition does not change over several hours. Collectively, the testing for brain death is designed to show that the patient is in a state of “irreversible apneic unconsciousness.”

Irreversible apneic unconsciousness is not the same as biological death. But should patients in this condition be considered to be legally dead? This is a complex question that hinges on metaphysical and moral views about the necessary and sufficient characteristics of a living person. The British position on this point is interesting and relevant. While the United Kingdom does not have a law on brain death, the Code of Practice of the Academy of Royal Medical Colleges explicitly endorses the view that irreversible apneic unconsciousness should be recognized as death.6 The Code states, “Death entails the irreversible loss of those essential characteristics which are necessary to the existence of a living human person and, thus, the definition of death should be regarded as the irreversible loss of the capacity for consciousness, combined with irreversible loss of the capacity to breathe.”6 Contrary to the US position, the Code does not insist that brain death is the same as biological death. It states that while “the body may continue to show signs of biological activity … these have no moral relevance to the declaration of death.”6 Following Beecher,4 the British consider brain death to be a moral determination that is distinct from biological death, based on a particular view about what constitutes the essential characteristics of a human person.

One option for reconciling the discrepancy between the UDDA and the current diagnostic standards for brain death in the United States would be to revise the UDDA along the lines of the British model. This would align the legal definition of death with current diagnostic standards. It would, however, also raise questions about how to respond to individuals who reject the concept of brain death. Even though there is nothing irrational or unreasonable about preferring a biological definition of death over other moral, religious, or metaphysical alternatives, there are concerns about the potential effects of allowing citizens to opt out of being declared brain dead. The experience in New Jersey may be relevant to this question because for more than 25 years that state has had a law permitting citizens to opt out of the determination of death by neurological criteria, and this law has not had any documented influence on either organ donation or intensive care unit utilization.7

Another potential benefit of adopting the British approach would be to facilitate improvement and refinements in the tests that are used. It is remarkable that the core tests in use today to diagnose brain death are virtually the same as those first proposed in 1968, and the authors of guidelines have commented on the “severe limitations in the current evidence base” for the determination of brain death.8 In particular, concerns have been raised about the irreversibility of the diagnosis and the certainty of the determination of unconsciousness. The latter is particularly important because studies have suggested that the behavioral bedside tests used to diagnose unconsciousness in the vegetative state may be wrong as much as 40% of the time.9 In addition, the safety of the apnea test has been questioned,10 and alternatives that do not require acutely raising the level of carbon dioxide in the patient’s blood to potentially dangerous levels could be advantageous. Incorporating modern imaging techniques and new diagnostic technologies into the routine testing for brain death could give more confidence to the claim that the patient is unconscious, provide stronger evidence of irreversibility, and reduce concerns about the safety of the tests.

Until the UDDA or individual state laws are revised, lawsuits are likely to continue because current tests do not fulfill the language of the law. This challenge provides an opportunity to clarify the meaning of brain death, better educate the public about the diagnosis, and improve the tests to make them as safe and reliable as possible.


Full text, references, etc., at the DOI above

For China as a whole, the longest warm period during the last 2000 years occurred in the 10th–13th centuries, although there were multi-decadal cold intervals in the middle to late 12th century

Multi-scale temperature variations and their regional differences in China during the Medieval Climate Anomaly. Zhixin Hao, Maowei Wu, Yang Liu, Xuezhen Zhang & Jingyun Zheng. Journal of Geographical Sciences, volume 30, pages 119–130, Jan 6, 2020. https://link.springer.com/article/10.1007/s11442-020-1718-7

Abstract: The Medieval Climate Anomaly (MCA, AD 950–1250) is the most recent warm period lasting several hundred years and is regarded as a reference scenario when studying the impact of and adaptation to global and regional warming. In this study, we investigated the characteristics of temperature variations on decadal-centennial scales during the MCA for four regions (Northeast, Northwest, Central-east, and Tibetan Plateau) in China, based on high-resolution temperature reconstructions and related warm-cold records from historical documents. The ensemble empirical mode decomposition method is used to analyze the time series. The results showed that for China as a whole, the longest warm period during the last 2000 years occurred in the 10th–13th centuries, although there were multi-decadal cold intervals in the middle to late 12th century. However, the beginning and ending decades, warm peaks, and decadal-scale phases of the MCA were not consistent across regions. On the inter-decadal scale, regional temperature variations were similar from 950 to 1130; moreover, their amplitudes became smaller, and the phases did not agree well from 1130 to 1250. On the multi-decadal to centennial scale, all four regions began to warm in the early 10th century and experienced two cold intervals during the MCA. However, Northwest and Central-east China were in step with each other, while the warm periods in Northeast China and the Tibetan Plateau ended about 40–50 years earlier. On the multi-centennial scale, the mean temperature difference between the MCA and the Little Ice Age was significant in Northeast and Central-east China but not in Northwest China and the Tibetan Plateau. Compared to the mean temperature of the 20th century, comparable warmth in the MCA was found in Central-east China, but there was slight cooling in Northeast China; meanwhile, temperatures were significantly lower in Northwest China and the Tibetan Plateau.

Sexual functioning was more strongly associated with self-esteem than were safe sex & sexual consent, and sexual permissiveness was unassociated with self-esteem

From 2019... Self-esteem and sexual health: a multilevel meta-analytic review. John K. Sakaluk, James Kim, Emily Campbell, Allegra Baxter & Emily A. Impett. Health Psychology Review, Volume 14, 2020 - Issue 2, Pages 269-293, Jun 17 2019. https://doi.org/10.1080/17437199.2019.1625281

ABSTRACT: Sexual health reflects physical, emotional, mental, and social elements of sexual well-being. Researchers often position self-esteem (i.e., global or domain-specific evaluations of self) as a key correlate of sexual health. We present the first comprehensive meta-analysis of correlations between self-esteem and sexual health. Our synthesis includes 305 samples from 255 articles, containing 870 correlations from 191,161 unique participants. The overall correlation between self-esteem and sexual health was positive and small (r = .12, 95% CI: .09, .15), characterised by considerable heterogeneity and robust to different corrections. Sexual functioning (r = .27, 95% CI: .21, .34) was more strongly associated with self-esteem than were safe sex (r = .10, 95% CI: .07, .13) and sexual consent (r = .19, 95% CI: .13, .24), and sexual permissiveness was unassociated with self-esteem (r = −.02, 95% CI: -.05, .008). Most moderators were nonsignificant, although moderator data were inconsistently available, and samples were North American-centric. Evidence of publication bias was inconsistent, and study quality, theory usage, and background research were not reliably associated with study outcomes. Our synthesis suggests a need for more specific theories of self-esteem corresponding to unique domains of sexual health, highlighting a need for future theorising and research.
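As a sketch of how pooled correlations like those above relate to their confidence intervals, the snippet below builds a 95% CI on the Fisher z scale and transforms back to r. The standard error used here is back-calculated from the reported overall interval (r = .12, CI .09 to .15) purely for illustration; it is not a value taken from the paper:

```python
import math

def fisher_ci(r, se_z, crit=1.96):
    """95% CI for a correlation, computed on the Fisher z scale."""
    z = math.atanh(r)                        # r -> z
    lo_z, hi_z = z - crit * se_z, z + crit * se_z
    return math.tanh(lo_z), math.tanh(hi_z)  # z -> r

# se_z ~ 0.0155 approximately reproduces the reported overall interval.
lo, hi = fisher_ci(r=0.12, se_z=0.0155)
print(round(lo, 2), round(hi, 2))  # 0.09 0.15
```

The transform keeps the interval inside [-1, 1] and makes it slightly asymmetric around r, which is why meta-analyses of correlations typically pool on the z scale.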

KEYWORDS: meta-analysis, self-esteem, safe sex, sexual consent, sexual health, sexual permissiveness, sexual functioning


Rolf Degen summarizing: To put oneself in the past or future with the power of thought engages largely the same brain areas, but future mental time travel draws particularly on the right hippocampus

The radiation of autonoetic consciousness in cognitive neuroscience: A functional neuroanatomy perspective. Amnon Dafni-Merom, Shahar Arzy. Neuropsychologia, April 30 2020, 107477. https://doi.org/10.1016/j.neuropsychologia.2020.107477

Highlights
• The concepts of autonoesis and mental time travel inspired key memory-related theories.
• These include constructive episodic simulation, scene construction and self-projection.
• Meta-analysis revealed shared and specific activations for these theories.
• Mental-travels in the social, temporal and spatial domains share activations within the DMN.

Abstract: One of Endel Tulving's most important contributions to memory research is the coupling of self-knowing consciousness (or “autonoesis”) with episodic memory. According to Tulving, autonoetic episodic memory enables the uniquely human neurocognitive operation of “mental time travel”, which is the ability to deliberately “project” oneself to a specific time and place to remember personally experienced events that occurred in the past and simulate personal happenings that may occur in the future. These ideas ignited an explosion of research in the years to follow, leading to the development of several related concepts and theories regarding the role of the human self in memory and prospection. In this paper, we first explore the expansion of the concept of autonoetic consciousness in the cognitive neuroscience literature as well as the formulation of derivative concepts and theories. Subsequently, we review such concepts and theories including episodic memory, mental time travel, episodic simulation, scene construction and self-projection. In view of Tulving's emphasis on the temporal and spatial context of the experience, we also review the cognitive operation involved in “travel” (or “projection”) in these domains as well as in the social domain. We describe the underlying brain networks and processes involved, their overlapping activations and involvement in giving rise to the experience. Meta-analysis of studies investigating the underlying functional neuroanatomy of these theories revealed main overlapping activations in sub-regions of the medial prefrontal cortex, the precuneus, retrosplenial cortex, temporoparietal junction, and medial temporal lobe. Dissecting these results enables us to infer and quantify the interrelations between the different theories, as well as their relations to Tulving's original ideas.

Keywords: Autonoesis, Episodic memory, Scene construction, Self-projection, Constructive episodic simulation, Mental time travel, Mental lines


Conventional swearing: 32% increase in pain threshold & 33% increase in pain tolerance; new “swear” words, “fouch” & “twizpipe,” were rated as more emotional & humorous but did not affect pain threshold or tolerance

Swearing as a Response to Pain: Assessing Hypoalgesic Effects of Novel “Swear” Words. Richard Stephens and Olly Robertson. Front. Psychol., April 30 2020. https://doi.org/10.3389/fpsyg.2020.00723

Abstract: Previous research showing that swearing alleviates pain is extended by addressing emotion arousal and distraction as possible mechanisms. We assessed the effects of a conventional swear word (“fuck”) and two new “swear” words identified as both emotion-arousing and distracting: “fouch” and “twizpipe.” A mixed sex group of participants (N = 92) completed a repeated measures experimental design augmented by mediation analysis. The independent variable was repeating one of four different words: “fuck” vs. “fouch” vs. “twizpipe” vs. a neutral word. The dependent variables were emotion rating, humor rating, distraction rating, cold pressor pain threshold, cold pressor pain tolerance, pain perception score, and change from resting heart rate. Mediation analyses were conducted for emotion, humor, and distraction ratings. For conventional swearing (“fuck”), confirmatory analyses found a 32% increase in pain threshold and a 33% increase in pain tolerance, accompanied by increased ratings for emotion, humor, and distraction, relative to the neutral word condition. The new “swear” words, “fouch” and “twizpipe,” were rated as more emotional and humorous than the neutral word but did not affect pain threshold or tolerance. Changes in heart rate and pain perception were absent. Our data replicate previous findings that repeating a swear word at a steady pace and volume benefits pain tolerance, extending this finding to pain threshold. Mediation analyses did not identify a pathway via which such effects manifest. Distraction appears to be of little importance but emotion arousal is worthy of future study.
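The headline percentage gains are simple percent changes in cold pressor latencies relative to the neutral-word condition. A minimal sketch of that arithmetic, using made-up latencies rather than the study's data:

```python
def percent_increase(condition_latency, neutral_latency):
    """Percent change in a cold-pressor latency vs. the neutral-word condition."""
    return 100 * (condition_latency - neutral_latency) / neutral_latency

# Hypothetical mean pain-tolerance latencies, in seconds.
print(round(percent_increase(100.0, 75.0)))  # 33 (% increase)
```

A hand kept in the ice water for 100 s under swearing versus 75 s under the neutral word would correspond to the roughly one-third improvement in tolerance the authors report.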

Discussion

This study contributes to the psychology literature on swearing in the context of pain (Stephens et al., 2009; Stephens and Umland, 2011; Philipp and Lombardo, 2017; Robertson et al., 2017) as the first attempt to create new “swear” words and assess some of their psychological properties. Our experiment assessed the effects of repeating three different words – a conventional swear word (“fuck”) and two new “swear” words (“fouch” and “twizpipe”) – on pain perception and tolerance, compared with a neutral word control condition (a word to describe a table). We ran a well-powered experiment with a sample consisting of 92 native English speakers. We used an ice-cold water hand immersion task known as the cold pressor procedure. This provides a controlled stimulus that is painful but not harmful and yields scores for pain threshold (time at which pain is reported) and pain tolerance (time at which the hand is removed). We also recorded heart rate as well as ratings of pain perception, emotion, humor, and distraction. The order in which participants completed the conditions (“fuck,” “fouch,” “twizpipe,” and neutral word) was randomized to guard against order effects. Pain Catastrophizing and Fear of Pain scores were gathered to help understand sample characteristics. The scores were similar to our previous data (Stephens and Umland, 2011) in which the overall mean score for Pain Catastrophizing was 25.30 (SD = 9.64) and for Fear of Pain was 87.45 (SD = 16.43). This indicates that our sample may be considered typical for these variables and, as such, that these variables are unlikely to have unduly influenced the pain outcomes.
Hypotheses (i) to (iii) were put forward as manipulation checks to ensure that the made-up “swear” words had the desired properties in terms of the emotion, humor, and distraction ratings. Hypothesis (i) that emotion ratings would be greater for “fouch” vs. neutral word was supported, and hypothesis (ii) that humor and distraction ratings would be greater for “twizpipe” vs. neutral word was partially supported in that the humor rating was greater for “twizpipe.” Interestingly, both made-up “swear” words showed higher ratings for emotion and humor compared with the neutral word. Hypothesis (iii) that emotion, humor, and distraction ratings would be greater for “fuck” vs. neutral word was supported. Our tests of hypotheses (i) to (iii) demonstrate that our manipulation of creating new “swear” words was successful in that “fouch” and “twizpipe” were able to evoke some of the properties of swearing, in terms of emotion rating and humor. This was not the case for distraction, however, since only “fuck” was found to have a raised distraction rating compared with the neutral word. Given that both new “swear” words had demonstrated potential to influence pain perception via increased emotion ratings and/or distracting a person from the pain via increased humor ratings, it seemed appropriate to continue with the analyses and test whether the new “swear” words had any effect on the pain outcomes. We also note that “fuck” was rated as humorous in this context, consistent with the findings of Engelthaler and Hills (2018), who found the word “fuck” was rated in the top 1% of funniest words when 5000 English words were presented one at a time.
Hypotheses (iv) to (vii) were put forward as tests of whether the conventional swear word and the new “swear” words would show hypoalgesic effects and associated changes in heart rate, as found previously. Hypothesis (iv), that cold pressor pain onset latency (pain threshold) would be increased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was supported for “fuck” but not for “fouch” or “twizpipe.” Hypothesis (v), that cold pressor pain tolerance latency would be increased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was also supported for “fuck” but not for “fouch” or “twizpipe”. Together, these findings extend previous research on swearing and pain by replicating, in a pre-registered study, the beneficial effect of swearing on pain tolerance and showing that swearing has an additional beneficial effect on pain threshold (onset latency), a behavioral pain measure that has not previously been assessed.
Regarding the new “swear” words, our confirmatory analyses showed no beneficial effects for pain threshold and tolerance. On the suggestion of a peer reviewer, we ran exploratory equivalence tests assessing whether the effect sizes for these words were within a range considered to be negligible. These analyses confirmed the absence of a beneficial effect for pain threshold and tolerance beyond a smallest effect size of interest based on the conservatively small estimate of dz = 0.3 entered into the power calculation. That these new “swear” words had no effect on pain threshold and tolerance is not altogether surprising. While it is not properly understood how swear words gain their power, it has been suggested that swearing is learned during childhood and that aversive classical conditioning contributes to the emotionally arousing aspects of swear word use (Jay, 2009; Tomash and Reed, 2013). This suggests that how and when we learn conventional swear words is an important aspect of how they function. Clearly, the new “swear” words utilized in the present study were not learned during childhood and so there was no possibility that this aspect could have had an influence. On the other hand, “fouch” and “twizpipe” were chosen because they had potential to mirror some properties of conventional swearing. Like the swear word, these words were rated as more emotion-evoking and humorous than the neutral word control condition. Nevertheless, these properties did not facilitate pain alleviation effects, suggesting that surface properties of swear words (such as how they sound) do not explain the hypoalgesic effects of swearing. An overall absence of pain alleviation effects for the new “swear” words in the present study would be expected based on Jay’s (2009) childhood aversive classical conditioning theory.
There is little evidence for this theory other than a low-powered experiment (N = 26) finding that participants reporting a higher frequency of punishment for swearing as children showed an increased skin conductance response when reading swear words, compared with participants reporting a lower frequency of punishment for swearing (Tomash and Reed, 2013). To investigate this theory further, future research should aim to verify the frequency with which such aversive classical conditioning events occur in childhood and assess the relationship between prior punishment for swearing and autonomic arousal in an adequately powered design.
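The equivalence-testing logic described above can be illustrated with a minimal sketch. This is hypothetical, not the authors' analysis code: the data are simulated, and a normal approximation stands in for the t distribution a real analysis would use; the dz = 0.3 bound mirrors the smallest effect size of interest from the power calculation.

```python
import math

def tost_paired(diffs, dz_bound=0.3):
    """Two one-sided tests (TOST) for equivalence on paired differences.

    Tests whether the standardized paired effect size dz lies within
    (-dz_bound, +dz_bound). Uses a normal approximation for simplicity;
    a real analysis would use the t distribution.
    """
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    dz = mean / sd                    # standardized paired effect size
    se = 1 / math.sqrt(n)             # approx. standard error of dz near 0
    z_lower = (dz + dz_bound) / se    # H0: dz <= -dz_bound
    z_upper = (dz_bound - dz) / se    # H0: dz >= +dz_bound
    z_crit = 1.645                    # one-sided alpha = .05
    equivalent = z_lower > z_crit and z_upper > z_crit
    return dz, equivalent

# Simulated paired differences clustered near zero (n = 96)
diffs = [0.2, -0.1, 0.05, -0.3, 0.1, 0.0, -0.2, 0.15] * 12
dz, eq = tost_paired(diffs)
```

If both one-sided tests reject, the observed effect is statistically smaller than the smallest effect size of interest in both directions, which is the sense in which the new “swear” words' effects were confirmed to be negligible.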
Hypothesis (vi), that pain perception would be decreased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was not supported. The lack of differences in pain perception is perhaps unsurprising: it may indicate that participants base the behavioral decisions of reporting pain onset and removing the hand on similar perceived pain levels, albeit levels that have been modified by repeating a swear word. On that basis we suggest that measuring subjective pain perception is of limited usefulness in future studies assessing hypoalgesic effects of swearing where behavioral measures such as the cold pressor procedure are employed.
Hypothesis (vii), that change from resting heart rate would be increased for “fuck” and “fouch” vs. neutral word, was not supported. The lack of heart rate differences across conditions is at odds with previous studies which have shown elevated heart rate for swearing versus a neutral word (Stephens et al., 2009; Stephens and Umland, 2011). This may be due to the design of the present study in which participants completed four consecutive word repetition/cold pressor immersion conditions rather than two, as previously. Repeated presentations of similar tasks, as well as repeated exposure to aversive stimuli, have been found to result in blunted cardiovascular stress reactivity (Hughes et al., 2018). Blunted cardiovascular stress reactivity refers to the reduction in cardiovascular response to acute physiological or psychological stress (Brindle et al., 2017). It seems reasonable to suggest that repeated exposure to cold pressor-mediated acute pain may have induced cardiovascular blunting.
In the absence of clear autonomic responses to swearing, we assessed the exploratory hypothesis (viii) that the effects of swearing on pain tolerance would be mediated by one or more psychological variables, in the form of the emotion, humor, or distraction rating scores. However, none of the ratings showed evidence of mediation, with 95% confidence intervals for humor and distraction being approximately symmetrically balanced across the origin. The latter effect is of interest because swearing in the context of pain is often characterized as a deliberate strategy for distraction, and distraction is recognized as being an effective psychological means of influencing descending pain inhibitory pathways (Edwards et al., 2009). While swearing was rated as distracting (more so than the other words) the level of distraction was not related to the pain alleviation effects. Thus, based on our evidence, distraction may not be important in explaining how swearing produces hypoalgesic effects. The analysis assessing whether emotion ratings mediate the effect of swearing on extending pain tolerance also showed no effect, although here the 95% confidence interval only narrowly crossed the origin. While our data offer no evidential support for a mediation effect, further study of whether emotional arousal mediates the hypoalgesic effects of swearing, even in the absence of changes in heart rate, might yet demonstrate it as a viable mechanism. Such an effect would be in keeping with previous research finding pain relieving effects of emotional arousal (Stephens and Allsop, 2012).
However, there is a caveat to this. At the study outset we theorized that swearing may increase emotional arousal without specifying the valence of that arousal. During peer review we were directed to literature linking emotion elicitation and pain modulation, and in particular, research by Lefebvre and Jensen (2019) who report that inducing a state of negative affect by asking participants to recall a time when they experienced a high degree of worry led to increased ratings of pain from pressure applied to the finger, relative to baseline. In addition, the same study found that inducing a state of positive affect by asking participants to recall a happy memory led to decreased ratings of pain. It is apparent that emotional modulation of pain can be explained by the two-factor behavioral inhibition system-behavioral activation system (BIS-BAS) model of pain (Jensen et al., 2016). According to the BIS-BAS model, negative affect contributes toward pain-related avoidance behaviors and associated negative cognitions, thereby increasing the subjective experience of pain. Conversely, positive affect contributes toward approach behaviors and positive cognitions, thus decreasing the subjective experience of pain. One limitation of the present study is that the measure of emotion elicitation was not valenced. This may explain why emotion was not shown to be a mediating variable in the link between swearing and hypoalgesia. Future research should assess both positive and negative emotion arousal due to swearing.
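The mediation logic at issue, whether a rating such as emotion carries the effect of word condition on pain tolerance, amounts to testing the indirect a×b effect with a bootstrap confidence interval: if the 95% CI excludes zero, there is evidence of mediation; if it straddles the origin (as the humor and distraction CIs did), there is not. A minimal sketch on simulated data, not the authors' analysis pipeline, where the mediator genuinely carries the effect:

```python
import random

def slope(u, v):
    """OLS slope of v regressed on u."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

def resid(u, v):
    """Residuals of v after regressing v on u."""
    b, mu, mv = slope(u, v), sum(u) / len(u), sum(v) / len(v)
    return [vi - (mv + b * (ui - mu)) for ui, vi in zip(u, v)]

def indirect(x, m, y):
    """Indirect effect a*b for a single mediator m between x and y."""
    a = slope(x, m)                      # path a: condition -> mediator
    b = slope(resid(x, m), resid(x, y))  # path b: mediator -> outcome, controlling for x
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=1):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng, n, ests = random.Random(seed), len(x), []
    for _ in range(n_boot):
        ids = [rng.randrange(n) for _ in range(n)]
        ests.append(indirect([x[i] for i in ids],
                             [m[i] for i in ids],
                             [y[i] for i in ids]))
    ests.sort()
    return ests[int(0.025 * n_boot)], ests[int(0.975 * n_boot)]

# Simulated data: condition raises the emotion rating, which raises tolerance
rng = random.Random(0)
x = [i % 2 for i in range(80)]                     # 0 = neutral word, 1 = swear word
m = [2 + 1.5 * xi + rng.gauss(0, 1) for xi in x]   # emotion rating
y = [40 + 8 * mi + rng.gauss(0, 10) for mi in m]   # pain tolerance (s)
lo, hi = bootstrap_ci(x, m, y)                     # a CI excluding 0 indicates mediation
```

In the study's data the analogous intervals for humor and distraction sat symmetrically across zero, and the emotion interval only narrowly crossed it, hence the conclusion that no mediation pathway was identified.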
A further limitation might have been that participants did not consider themselves to be swearing when repeating the novel “swear” words. This remains unknown as we did not carry out a manipulation check asking participants whether they considered using these words was swearing. On the other hand, the novel “swear” words were selected by a panel of experts and laypeople briefed to choose words that could be used in similar ways to swear words, and which shared properties of swear words including emotional resonance and humor potential. It is also worth noting that “fouch” begins with a fricative, defined as a sound created by forcing air through a narrow channel (here the lower teeth and upper lip), which some have associated with swearing, although others contest such a link (Stack Exchange, 2014).
Additionally, maintaining the ice water temperature in the range 3–5°C might be considered too wide a variation, such that the physical intensity of the pain stimulus was not consistent across participants. In mitigation, there was no systematic variation of the temperature across the four word conditions. As shown in Table 1, the starting temperatures for each immersion were fairly consistent, with means ranging from 3.91 to 3.98°C (SDs 0.50 to 0.53). This indicates that approximately 65% of immersions had starting temperatures within a 1°C range of 3.5–4.5°C. Therefore, variation in temperature is unlikely to have biased the results.

A final limitation is that participants may have guessed the aims of the study, and consequently demand characteristics may have influenced the results. In advertising the study as “psychological effects of vocal expressions, including swearing, while immersing the hand in ice water” we aimed to hide our predictions. Nevertheless, due to widespread media exposure for findings of previous studies conducted in the Keele Swear Lab, we cannot rule out, nor quantify the extent to which, participants' expectations influenced their behavior.

Thursday, April 30, 2020

Win–win Denial: The Psychological Underpinnings of Zero-sum Thinking

Johnson, Samuel G. B., Jiewen Zhang, and Frank Keil. 2020. “Win–win Denial: The Psychological Underpinnings of Zero-sum Thinking.” PsyArXiv. April 30. psyarxiv.com/efs5y

Abstract: A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across 4 studies (and 7 further studies in the Supplementary Materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterwards. These studies revealed that win–win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win–win denial are considered, with the most important influences being mercantilist theories of value (confusing wealth for money) and naïve realism (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society.

Check also The politics of zero-sum thinking: The relationship between political ideology and the belief that life is a zero-sum game. Shai Davidai, Martino Ongis. Science Advances Dec 18 2019, Vol. 5, no. 12, eaay3761. https://www.bipartisanalliance.com/2019/12/liberals-exhibit-zero-sum-thinking-when.html

The Fallacy of an Airtight Alibi, Understanding Human Memory: Strong evidence that participants confuse days across weeks; in addition, people often confused weeks in general and also hours across days

Laliberte, Elizabeth, Hyungwook Yim, Benjamin Stone, and Simon Dennis. 2020. “The Fallacy of an Airtight Alibi: Understanding Human Memory for Where Using Experience Sampling.” PsyArXiv. April 30. psyarxiv.com/6rce5

Abstract: A primary challenge for alibi generation research is establishing the ground truth of the real world events of interest. We used a smartphone app to record data on adult participants for a month prior to a memory test. The app captured their accelerometry continuously and their GPS location and sound environment every ten minutes. After a week retention interval, we asked participants to identify where they were at a given time from among four alternatives. Participants were incorrect 36% of the time. Furthermore, our forced choice procedure allowed us to conduct a conditional logit analysis to assess the relative importance of different aspects of the events to the decision process. We found strong evidence that participants confuse days across weeks. In addition, people often confused weeks in general and also hours across days. Similarity of location induced more errors than similarity of sound environments or movement types.



Although generally very pessimistic, a substantial proportion of individuals believes that national & global economy will be doing worse than their household (a financial “better-than-average effect”)

Barrafrem, Kinga, Daniel Västfjäll, and Gustav Tinghög. 2020. “Financial Well-being, COVID-19, and the Financial Better-than-average-effect.” PsyArXiv. April 30. psyarxiv.com/tkuaf

Abstract: At the onset of the COVID-19 outbreak we conducted a survey (n=1000) regarding how people assess the near future economic situation within their household, nation, and the world. Together with psychological factors related to information processing we link these prospects to financial well-being. We find that, although generally very pessimistic, a substantial proportion of individuals believes that national and global economy will be doing worse than their household, what we call a financial “better-than-average effect”. Furthermore, we find that private economic outlook and financial ignorance are linked to financial well-being while financial literacy and the (inter)national situation are not.


Wednesday, April 29, 2020

Using Sex Toys and the Assimilation of Tools into Bodies: Can Sex Enhancements Incorporate Tools into Human Sexuality?

Using Sex Toys and the Assimilation of Tools into Bodies: Can Sex Enhancements Incorporate Tools into Human Sexuality? Ahenkora Siaw Kwakye. Sexuality & Culture, Apr 29 2020. https://rd.springer.com/article/10.1007/s12119-020-09733-5

Abstract: The use of vibrators, dildos and other sex toys for sexual stimulation and pleasure is common among women and is growing in popularity. While the phenomenon has positive benefits, it might equally present adverse consequences to users. This research aims to assess the popularity of sex toy use among women from different nations. Furthermore, the study aims to find out if the use of other household items for sexual stimulation is popular among women between the ages of 18 and 50. Finally, the study attempts to discover if sex toy users observe changes arising from the use of various sex toys and if such variations can be attributed to the assimilation of the sex toy used. I employed a convenience sampling in eight countries. The study observed that sex toys are popular among women between the ages of 18 and 50, but sex toy use appears to produce varying effects on users. It was also evident that a majority of participants use vibrating sex toys without a clinician’s recommendation. Some women observe changes in sensitivity levels of their sexual responses after using sex enhancements. It was observed that while there is a crackdown on the use of sex toys in Islamic nations, religion itself has a certain influence on the individual adherent’s desire to explore use of sex enhancements.


Unhappiness & age: Analysis of data from eight well-being data files on nearly 14 million respondents across forty European countries & the United States and 168 countries from the Gallup World Poll

Unhappiness and Age. David G. Blanchflower. Journal of Economic Behavior & Organization, April 29 2020. https://doi.org/10.1016/j.jebo.2020.04.022

Abstract: I examine the relationship between unhappiness and age using data from eight well-being data files on nearly fourteen million respondents across forty European countries and the United States and 168 countries from the Gallup World Poll. I use twenty different individual characterizations of unhappiness including many not good mental health days; anxiety; worry; loneliness; sadness; stress; pain; strain, depression and bad nerves; phobias and panic; being downhearted; having restless sleep; losing confidence in oneself; not being able to overcome difficulties; being under strain; being unhappy; feeling a failure; feeling left out; feeling tense; and thinking of yourself as a worthless person. I also analyze responses to a further general attitudinal measure regarding whether the situation in the respondent's country is getting worse. Responses to all these unhappiness questions show a, ceteris paribus, hill shape in age, with controls and many also do so with limited controls for time and country. Unhappiness is hill-shaped in age and the average age where the maximum occurs is 49 with or without controls. There is an unhappiness curve.

3. Discussion

There appears to be a midlife crisis where unhappiness reaches a peak in the late forties across Europe and the United States. That matches the evidence of a nadir in happiness in the late forties (Blanchflower, 2020a). That paper found that averaging across 257 individual country estimates from developing countries gave an age minimum of 48.2 for well-being, while doing the same across the 187 country estimates for advanced countries gave a similar minimum of 47.2.
Table 14 summarizes the results obtained by solving out the age at which the quadratic fitted to the data reaches a maximum. There are sixteen without controls that average at 47.4 and twenty-eight with controls with the maxima averaging out to 49.1, and 48.6 years overall for the forty-four estimates. This is very close to the finding in Blanchflower (2020a) that the U-shape in happiness data averaged 47.2 in developed countries and 48.2 in developing. The conclusion is therefore that data on unhappiness and happiness are highly consistent at the age when the low point or zenith in well-being occurs.
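The "solving out" step in Table 14 is simply the turning point of the fitted quadratic: with unhappiness modeled as b0 + b1·age + b2·age², the curve peaks at age = -b1 / (2·b2). A one-line sketch, using hypothetical coefficients (not taken from the paper) of the magnitude such regressions produce:

```python
def turning_point(b1, b2):
    """Age at which the fitted quadratic b0 + b1*age + b2*age**2 turns.

    For unhappiness (hill-shaped in age), b2 < 0, so the turning point
    is a maximum; for happiness (U-shaped), b2 > 0 and it is a minimum.
    """
    return -b1 / (2 * b2)

# Hypothetical coefficients chosen so the hump peaks near age 49
peak_age = turning_point(b1=0.0098, b2=-0.0001)  # ~= 49.0
```

Averaging this quantity across the separate estimates is what yields the reported maxima of 47.4 (without controls), 49.1 (with controls), and 48.6 overall.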
I add to the growing list of unhappiness variables that have hump shapes in age with or without controls. I find a broadly similar hill- or hump-shaped curve in twenty measures of unhappiness, including many not-good mental health days; being stressed, unhappy, anxious, sad, sleepless, lonely, tired, depressed, tense, or under strain; having bad nerves, phobias, and panics; being in pain; feeling left out of society; and several more. I also found the hump shape for a more general measure relating to the respondent's belief that the country 'is getting worse'. It doesn't seem to matter much how the question about unhappiness is phrased or coded, in which country the question is asked, or when: we get similar results.
A referee has noted that if you look at the graphs, you see wave-like patterns (sadness, panics), hump-shaped patterns (sleep, stress), and increasing-to-a-plateau-like patterns (pain and worry with limited controls). No matter the exact shape of the plots in the various charts, it is clear that there is a peak somewhere in mid-life. I don't claim the patterns are all identical, but their broad similarity is striking, with a peak in prime age. There is a clear consistent pattern in the unhappiness and age data.
Blanchflower and Graham (2020) showed that the drop in measured happiness from youth to the mid-point low of the U-shape is quantitatively large and was not "trivial" as some psychologists have claimed. Indeed, they show the decline in well-being was about the equivalent of that observed from losing a spouse or a job. The results on unhappiness are similar. For example, in the Gallup USDTP averaged across the years 2008-2017 the probability of depression in the raw data rose from 12% at age 18 to 21% at age 58. The proportion of the employed who were depressed was 12% versus 24% for the unemployed. In addition, 12% of the married were depressed yesterday versus 19% of the widowed. In the raw data from the BRFSS the proportion who said they had 20 or more bad days in a month was 6.6% at age 18 and 8.4% at age 47, the peak. Among the married the rate was 5.5% versus 8% for the widowed. The rise in unhappiness to the mid-life peak is thus large and comparable in magnitude to major life events.
So, what is going on in mid-life? In Blanchflower and Oswald (2008) we suggested three possibilities. First, that individuals learn to adapt to their strengths and weaknesses, and in mid-life quell their infeasible aspirations. Second, it could be that cheerful people live systematically longer than the miserable, and that the nadir in happiness in mid-life thus traces out in part a selection effect. A third is that a kind of comparison process is at work: I have seen school-friends die and come eventually to value my blessings during my remaining years. Steptoe et al. (2010) suggest that "it is plausible that wellbeing improves when children leave home, given reduced levels of family conflict and financial burden" (p. 9986).
The finding of a nadir in well-being in midlife likely adds important support to the notion that the prime-aged, and especially those with less education, are especially vulnerable to disadvantages and shocks. The global Covid-19 pandemic, which is disproportionately impacting marginal workers, will likely make matters even harder to deal with for many at a well-being low point (Bell and Blanchflower, 2020). Some especially defenseless individuals might face downward spirals as age and life circumstances interact. Many will not be getting the social/emotional support they need as they are isolated and lonely, in addition to the first-order effects of whatever they are coping with in normal times. Lack of health care coverage in the US may well be a compounding factor where there is also an obesity epidemic. A midlife low is tough and made much harder when combined with a deep downturn and a slow and weak recovery. Peak unhappiness occurs in mid-life. There is an unhappiness curve.

The 168 countries are Afghanistan; Albania; Algeria; Angola; Argentina; Armenia; Australia; Austria; Azerbaijan; Bahrain; Bangladesh; Belarus; Belgium; Belize; Benin; Bhutan; Bolivia; Bosnia Herzegovina; Botswana; Brazil; Bulgaria; Burkina Faso; Burundi; Cambodia; Cameroon; Canada; Central African Republic; Chad; Chile; China; Colombia; Comoros; Congo Brazzaville; Congo Kinshasa; Costa Rica; Croatia; Cuba; Cyprus; Czech Republic; Denmark; Djibouti; Dominican Republic; Ecuador; Egypt; El Salvador; Estonia; Ethiopia; Finland; France; Gabon; Gambia; Georgia; Germany; Ghana; Greece; Guatemala; Guinea; Guyana; Haiti; Honduras; Hong Kong; Hungary; Iceland; India; Indonesia; Iran; Iraq; Ireland; Israel; Italy; Ivory Coast; Jamaica; Japan; Jordan; Kazakhstan; Kenya; Kosovo; Kuwait; Kyrgyzstan; Laos; Latvia; Lebanon; Lesotho; Liberia; Libya; Lithuania; Luxembourg; Macedonia; Madagascar; Malawi; Malaysia; Maldives; Mali; Malta; Mauritania; Mauritius; Mexico; Moldova; Mongolia; Montenegro; Morocco; Mozambique; Myanmar; Nagorno Karabakh; Namibia; Nepal; Netherlands; New Zealand; Nicaragua; Niger; Nigeria; Northern Cyprus; Norway; Oman; Pakistan; Palestine; Panama; Paraguay; Peru; Philippines; Poland; Portugal; Puerto Rico; Qatar; Romania; Russia; Rwanda; Saudi Arabia; Senegal; Serbia; Sierra Leone; Singapore; Slovakia; Slovenia; Somalia; Somaliland; South Africa; South Korea; South Sudan; Spain; Sri Lanka; Sudan; Suriname; Swaziland; Sweden; Switzerland; Syria; Taiwan; Tajikistan; Tanzania; Thailand; Togo; Trinidad and Tobago; Tunisia; Turkey; Turkmenistan; Uganda; Ukraine; UAE; UK; USA; Uruguay; Uzbekistan; Venezuela; Vietnam; Yemen; Zambia and Zimbabwe.

Changes in sexual behaviors in young people during COVID-19: 44% of participants reported a decrease in the number of sexual partners & about 37% of participants reported a decrease in sexual frequency

Changes in sexual behaviors of young women and men during the coronavirus disease 2019 outbreak: a convenience sample from the epidemic area. Weiran Li et al. The Journal of Sexual Medicine, April 29 2020, https://doi.org/10.1016/j.jsxm.2020.04.380

Abstract
Background: In March 2020, the World Health Organization (WHO) declared coronavirus disease 2019 (COVID-19), which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a pandemic. Currently, data on changes in sexual behavior during the COVID-19 outbreak are limited.

Aim: The present study aimed to obtain a preliminary understanding of the changes in people’s sexual behavior, as a result of the pandemic and explore the context in which they manifest.

Methods: A convenience sample of 270 men and 189 women who completed an online survey consisting of 12 items plus an additional question were included in the study.

Outcomes: The study outcomes were obtained using a study-specific questionnaire to assess the changes in people’s sexual behavior.

Results: While there was a wide range of individual responses, our results showed that 44% of participants reported a decrease in the number of sexual partners and about 37% of participants reported a decrease in sexual frequency. Multiple regression analysis showed that age, partner relationship and sexual desire were closely related to sexual frequency. In addition, we found that most individuals with risky sexual experiences had a rapid reduction in risky sexual behavior.

Clinical Implications: The current findings contribute to identifying another potential health implication associated with the COVID-19 pandemic and report preliminary evidence of the need to provide potential interventions for the population.

Strength & Limitations: This study is the first to perform a preliminary exploration of sexual behavior during the COVID-19 outbreak. The generalizability of the results is limited, given that only a small convenience sample was used.

Conclusion: During the height of the COVID-19 outbreak, overall sexual activity, frequency, and risky behaviors declined significantly among young men and women in China.

Key words: COVID-19; Sexual activities; Sexual frequency; Risky sexual behavior


DISCUSSION

In general, at the height of the COVID-19 epidemic, we found that both sexual activities and sexual satisfaction of young men and women decreased. Low sexual desire and unsatisfying partner relationships were significant factors affecting sexual activities, which is in agreement with previous studies (6).
In addition, we found that most individuals with a history of risky sexual experiences had a rapid reduction in risky sexual behaviors. This may be because participants experienced a great deal of psychological stress during this particular period, such as anxiety, fear, boredom, and disappointment. In addition, it is undeniable that strict physical restrictions have directly impacted the possibility of having new sexual partners and risky sexual behaviors. However, in the supplementary question, 32% of men and 18% of women indicated that they were inclined to increase the number of sexual partners or risky sexual behaviors once the epidemic ended. A significant minority may thus engage in behaviors that could increase the risk of contracting sexually transmitted diseases (7).
There are several potential limitations to our research that should be noted. First, race and ethnic culture appear to have a significant association with the occurrence of sexual problems (8). For example, most young Chinese people live with their parents (72% in the current study), which is different from results reported in other countries and may be a significant factor limiting their sexual behaviors. Therefore, the small sample size from a single ethnicity and the lack of randomization also limit the extrapolation of the results to the global general population. Second, the use of unverified questionnaires and retrospective evaluations of sexual behavior was also a weakness of the study. In addition, we did not collect data from participants who did not complete the questionnaire. Hence, the characteristics of these individuals and their impact on the overall data were not analyzed.

Tuesday, April 28, 2020

Does the devil wear Prada? Luxury product experiences can affect prosocial behavior

Does the devil wear Prada? Luxury product experiences can affect prosocial behavior. Yajin Wang et al. International Journal of Research in Marketing, April 28 2020. https://doi.org/10.1016/j.ijresmar.2020.04.001

Abstract: Despite the explosive growth of luxury consumption, researchers have yet to examine how the experience of using luxury products affects us both psychologically and behaviorally. In this research, we explore how the experience of using a luxury product can alter a user's perceptions of themselves and their behavior toward other people. We gave women either a luxury product (e.g., Prada handbag) or a non-luxury product (e.g., unbranded handbag) to use, and afterwards, we presented women with opportunities to exhibit either selfish or generous behaviors toward others. We found that, after using a luxury product, women exhibited more selfish behavior, such as sharing fewer resources with others and contributing less money to charity than women who used a non-luxury handbag. We also found this pattern can be reversed, with luxury users exhibiting more generous behavior when the generous behavior can be performed in front of other people. Further, we show that these patterns of selfish and generous behaviors are mediated by changes in perceived status and superiority that are triggered when women experience using a luxury product.

Keywords: Luxury; Consumer experience; Prosocial behavior

Not Only Decibels: Exploring Human Judgments of Laughter Intensity

Rychlowska, Magdalena, Gary J. McKeown, Ian Sneddon, and Will Curran. 2020. “Not Only Decibels: Exploring Human Judgments of Laughter Intensity.” PsyArXiv. April 28. psyarxiv.com/x7qea

Abstract. Paper presented at the 5th Laughter Workshop, Paris, 27-28 September 2018: While laughter intensity is an important characteristic immediately perceivable for the listeners, empirical investigations of this construct are still scarce. Here, we explore the relationship between human judgments of laughter intensity and laughter acoustics. Our results show that intensity is predicted by multiple dimensions, including duration, loudness, pitch variables, and center of gravity. Controlling for loudness confirmed the robustness of these effects and revealed significant relationships between intensity and other features, such as harmonicity and voicing. Together, the findings demonstrate that laughter intensity does not overlap with loudness. They also highlight the necessity of further research on this complex dimension.

Participants who were told that another person got a better meal than they did liked their own meal less than participants who were told that another person received either the same meal or a worse meal

Food-based social comparisons influence liking and consumption. Jennifer S.Mills, Janet Polivy, Ayesha Iqbal. Appetite, April 26 2020, 104720. https://doi.org/10.1016/j.appet.2020.104720

Abstract: This study examined the effects of food-based social comparisons on hedonic ratings and consumption of a meal. Participants were randomly assigned to one of three experimental conditions in which they were led to believe that they got a worse meal, a better meal, or the same meal as another participant. They then tasted and rated their own meal. Subsequent liking and ad lib food consumption were measured. Participants who were told that another person got a better meal than they did (upward comparison) liked their meal less than if they were told that another person received either the same meal as they did or a worse meal (downward comparison). Similarly, participants who were in the upward comparison condition ate less food than if they were in the control or downward comparison conditions. Consumption was mediated by liking. The results suggest that being told that someone else is eating a meal that is higher or lower in hedonic value than one's own meal induces hedonic contrast and influences liking and consumption.

Keywords: Social comparison; Food hedonics; Hedonic contrast; Eating behaviour


Emotional empathy is more heritable than cognitive empathy; cognitive empathy is affected by the environment shared by siblings; no evidence was found for age differences in empathy heritability

The genetic and environmental origins of emotional and cognitive empathy: Review and meta-analyses of twin studies. Lior Abramson et al. Neuroscience & Biobehavioral Reviews, April 27 2020. https://doi.org/10.1016/j.neubiorev.2020.03.023

Highlights
• We meta-analyzed the twin literature of emotional and cognitive empathy.
• Emotional empathy is more heritable than cognitive empathy.
• Cognitive empathy as examined by tests is affected by environment shared by siblings.
• We did not find evidence for age differences in empathy heritability.
• We propose future directions to examine the processes behind genes-empathy relations.

Abstract: Empathy is considered a cornerstone of human social experience, and as such has been widely investigated from psychological and neuroscientific approaches. To better understand the factors influencing individual differences in empathy, we reviewed and meta-analyzed the behavioral genetic literature of emotional empathy (sharing others’ emotions; k = 13) and cognitive empathy (understanding others’ emotions; k = 15), as manifested in twin studies. Results showed that emotional empathy is more heritable, 48.3% [41.3%-50.6%], than cognitive empathy, 26.9% [18.1%-35.8%]. Moreover, cognitive empathy as examined by performance tests was affected by the environment shared by family members, 11.9% [2.6%-21.0%], suggesting that emotional understanding is influenced, to some degree, by environmental factors that have similar effects on family members beyond their genetic relatedness. The effects of participants’ age and the method used to assess empathy on the etiology of empathy were also examined. These findings have implications for understanding how individual differences in empathy are formed. After discussing these implications, we suggest theoretical and methodological future research directions that could potentially elucidate the relations between genes, brain, and empathy.



Why are Women More Religious than Men? Do Risk Preferences and Genetic Risk Predispositions Explain the Gender Gap?

Why are Women More Religious than Men? Do Risk Preferences and Genetic Risk Predispositions Explain the Gender Gap? YI LI  ROBERT WOODBERRY  HEXUAN LIU  GUANG GUO. Journal for the Scientific Study of Religion, April 23 2020 https://doi.org/10.1111/jssr.12657

Abstract: Risk preference theory argues that the gender gap in religiosity is caused by greater female risk aversion. Although widely debated, risk preference theory has been inadequately tested. Our study tests the theory directly with phenotypic and genetic risk preferences in three dimensions—general, impulsive, and sensation‐seeking risk. Moreover, we examine whether the effects of different dimensions of risk preferences on the gender gap vary across different dimensions of religiosity. We find that general and impulsive risk preferences do not explain gender differences in religiosity, whereas sensation‐seeking risk preference makes the gender gap in self‐assessed religiousness and church attendance insignificant, but not belief in God, prayer, or importance of religion. Genetic risk preferences do not remove any of the gender gaps in religiosity, suggesting that the causal order is not from risk preference to religiosity. Evidence suggests that risk preferences are not a strong predictor for gender differences in religiosity.


Casual sex is increasingly socially acceptable, but negative stereotypes about women remain; in this paper, both men & women stereotype women (but not men) who have casual sex as having low self-esteem

Krems, Jaimie, Ahra Ko, Jordan W. Moon, and Michael E. W. Varnum, PhD. 2020. “Lay Beliefs About Gender and Sexual Behavior: First Evidence for a Pervasive, Robust (but Seemingly Unfounded) Stereotype.” PsyArXiv. April 27. psyarxiv.com/rc2d3

Abstract: Although casual sex is increasingly socially acceptable, negative stereotypes toward women pursuing casual sex appear to remain pervasive. Specifically, a common trope in media (e.g., television, film) is that such women have low self-esteem. Despite robust work on prejudice against women who engage in casual sex, little empirical work investigates the lay theories individuals hold about such women. Across six experiments with US participants (N = 1,469), we find that both men and women stereotype women (but not men) who have casual sex as having low self-esteem. This stereotype is held explicitly and semi-implicitly, not driven by individual differences in religiosity, conservatism, or sexism, is mediated by inferences that women having casual sex are unsatisfied with their mating strategy, yet persists when these women are explicitly described as choosing to have casual sex. Finally, it appears unfounded; across experiments, these same participants’ sexual behavior is uncorrelated with their own self-esteem.


Those who acted to benefit others were seen as egalitarian and less selfish, although expressing pride strongly overturned these judgments

McLatchie, Neil, and Jared Piazza. 2020. “The Challenge of Expressing Pride in Moral Achievements: The Advantage of Joy and Vicarious Pride.” PsyArXiv. April 27. psyarxiv.com/9f8pb

Abstract: Recent findings suggest bodily expressions of pride communicate a person is self-interested and meritocratic. Across two studies (combined N=721), we investigated whether these implications retain when pride is expressed with regards to moral achievements where the activity has benefited others. In Study 1, achievers that attained self-benefiting, competence-based achievements were judged to be self-interested and meritocratic, and expressing pride somewhat enhanced these evaluations. By contrast, those who acted to benefit others were seen as egalitarian and less selfish, although expressing pride strongly overturned these judgments. Study 2 replicated these findings and found that expressions of joy following a moral achievement, and pride expressed by a companion, enhanced the do-gooder’s perceived status without reducing attributions of egalitarianism. Our findings highlight the costs of displaying moral pride, but point to joy and vicarious pride as promising alternative routes for circumventing these costs. Datasets and analysis scripts are available at: https://osf.io/ra3gy/?view_only=5329461bfda84c0bb8c34df967d98398.