Thursday, February 4, 2021

Sex Differences in Mate Preferences Across 45 Countries: A Large-Scale Replication

Sex Differences in Mate Preferences Across 45 Countries: A Large-Scale Replication. Kathryn V. Walter et al. Psychological Science, March 20, 2020. https://doi.org/10.1177/0956797620904154

h/t David Schmitt: “Mate Preferences Across 45 Countries: A Large-Scale Replication... Support for universal sex differences in preferences remains robust... Beyond age of partner, neither pathogens nor gender equality robustly predicted sex differences across countries.”

Abstract: Considerable research has examined human mate preferences across cultures, finding universal sex differences in preferences for attractiveness and resources as well as sources of systematic cultural variation. Two competing perspectives—an evolutionary psychological perspective and a biosocial role perspective—offer alternative explanations for these findings. However, the original data on which each perspective relies are decades old, and the literature is fraught with conflicting methods, analyses, results, and conclusions. Using a new 45-country sample (N = 14,399), we attempted to replicate classic studies and test both the evolutionary and biosocial role perspectives. Support for universal sex differences in preferences remains robust: Men, more than women, prefer attractive, young mates, and women, more than men, prefer older mates with financial prospects. Cross-culturally, both sexes have mates closer to their own ages as gender equality increases. Beyond age of partner, neither pathogen prevalence nor gender equality robustly predicted sex differences or preferences across countries.

Keywords: mate preferences, sex differences, cross-cultural studies, evolutionary psychology, biosocial role theory, open data, preregistered


Check also How Sexually Dimorphic Are Human Mate Preferences? Daniel Conroy-Beam. Personality and Social Psychology Bulletin, June 11, 2015. https://doi.org/10.1177/0146167215590987

Abstract: Previous studies on sex-differentiated mate preferences have focused on univariate analyses. However, because mate selection is inherently multidimensional, a multivariate analysis more appropriately measures sex differences in mate preferences. We used the Mahalanobis distance (D) and logistic regression to investigate sex differences in mate preferences with data secured from participants residing in 37 cultures (n = 10,153). Sex differences are large in multivariate terms, yielding an overall D = 2.41, corresponding to overlap between the sexes of just 22.8%. Moreover, knowledge of mate preferences alone affords correct classification of sex with 92.2% accuracy. Finally, pattern-wise sex differences are negatively correlated with gender equality across cultures but are nonetheless cross-culturally robust. Discussion focuses on implications in evaluating the importance and magnitude of sex differences in mate preferences.

Keywords: mate selection, sex differences, multivariate analysis, cross-cultural analysis
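
For readers unfamiliar with the statistics above, here is a minimal sketch (Python; illustrative variable names, not the authors' code) of how a multivariate Mahalanobis D is computed from two groups' preference ratings, and of the normal-theory overlap formula that turns the reported D = 2.41 into the 22.8% figure:

```python
import numpy as np
from scipy.stats import norm

def mahalanobis_d(group_a, group_b):
    """Multivariate effect size D between two groups (rows = participants,
    columns = mate-preference ratings), using the pooled covariance."""
    mean_diff = group_a.mean(axis=0) - group_b.mean(axis=0)
    na, nb = len(group_a), len(group_b)
    pooled_cov = ((na - 1) * np.cov(group_a, rowvar=False)
                  + (nb - 1) * np.cov(group_b, rowvar=False)) / (na + nb - 2)
    return float(np.sqrt(mean_diff @ np.linalg.solve(pooled_cov, mean_diff)))

# Assuming two multivariate-normal groups, the overlapping coefficient
# for a given D is 2 * Phi(-D / 2):
D = 2.41
overlap = 2 * norm.cdf(-D / 2)
print(f"D = {D}, overlap = {overlap:.1%}")  # overlap = 22.8%
```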


And Conroy-Beam, D., & Buss, D. M. (2019). Why is age so important in human mating? Evolved age preferences and their influences on multiple mating behaviors. Evolutionary Behavioral Sciences, 13(2), 127-157. https://www.bipartisanalliance.com/2019/04/why-is-age-so-important-in-human-mating.html


We find that cell phone vibrations of intermediate length (400ms) evoke a reward response, particularly among younger & more impulsive consumers, which in turn boosts purchasing in online shopping

Hampton, William H., and Christian Hildebrand. 2021. “Pavlov’s Buzz? Mobile Vibrations as Conditioned Rewards.” PsyArXiv. February 4. psyarxiv.com/92ksn

Abstract: People spend a large portion of their day interacting with vibrating mobile devices, yet how we respond to the vibrotactile sensations emitted by these devices, and their effect on consumer decision-making is largely unknown. Integrating recent work on haptic sensory processing and classical conditioning, the current research examines: (1) the relationship between vibration duration and reward response, (2) to what extent rewarding vibrations modify consumer decision-making, and (3) the underlying mechanism of this effect. We find that mobile vibrations of intermediate length (400ms) evoke a reward response, particularly among younger and more impulsive consumers, which in turn boosts purchasing in ecological online shopping environments. We examine mobile vibration in a variety of experimental settings, drawing on a diverse participant pool, leveraging both controlled experiments and a large, country-wide field experiment to assess theoretically- and practically-important boundary conditions. We further examine the mechanism of this effect, providing direct evidence that vibrations influence consumers due to classical conditioning, such that vibrations become rewarding due to their learned association with positive mobile events. Our findings have important implications for the effective design of haptic interfaces in marketing and the role of mobile vibration stimuli as a novel form of reward.


‘You can’t bullshit a bullshitter’ (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information

‘You can’t bullshit a bullshitter’ (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information. Shane Littrell, Evan F. Risko, Jonathan A. Fugelsang. British Journal of Social Psychology, February 4, 2021. https://doi.org/10.1111/bjso.12447

Rolf Degen's take: Notorious bullshitters are particularly bad at seeing through the bullshit of others. https://t.co/oiLV02cRfL https://t.co/np4SXk5ISt

Abstract: Research into receptivity to bullshit and research into the propensity to produce it have recently emerged as active, independent areas of inquiry into the spread of misleading information. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here, we present three studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) positively predicts susceptibility to various types of misleading information and that this association is robust to individual differences in cognitive ability and analytic cognitive style.


Men and women are equally interested in being the recipient of sexual behaviours while they sleep; this particular interest (proposed term "dormaphilia") opens up new and interesting research questions

Somnophilia: Examining Its Various Forms and Associated Constructs. Elizabeth T. Deehan, Ross M. Bartels. Sexual Abuse, November 15, 2019. https://doi.org/10.1177/1079063219889060

Abstract: Somnophilia refers to the interest in having sex with a sleeping person. Using an online sample of 437 participants, the present study provides the first empirical examination of somnophilia, its various forms, and theorized correlates. Participants completed the newly developed Somnophilia Interest and Proclivity Scale, which comprises three subscales (active consensual, passive consensual, and active nonconsensual somnophilia). To test hypotheses about the convergent and divergent validity of different paraphilic interests, participants also completed scales measuring necrophilic, rape-related, and sadistic/masochistic sexual fantasies, rape proclivity, and the need for sexual dominance/submission. Male participants scored higher than females on all scales except the passive subscale. For both males and females, each subscale was associated most strongly with conceptually congruent variables. These results support existing theoretical assumptions about somnophilia, as well as offering newer insights, such as distinguishing between active and passive somnophilia. Limitations and implications for further research are discussed.

Keywords: somnophilia, necrophilia, paraphilia, biastophilia, sexual fantasy, dormaphilia


Wednesday, February 3, 2021

The findings of this study support the notion that self-perceived facial attractiveness is not only motivated by psychological traits, but objectively measured phenotypic traits also contribute significantly

Kanavakis G, Halazonetis D, Katsaros C, Gkantidis N (2021) Facial shape affects self-perceived facial attractiveness. PLoS ONE 16(2): e0245557. https://doi.org/10.1371/journal.pone.0245557

Abstract: Facial appearance expresses numerous cues about physical qualities as well as psychosocial and personality traits. Attractive faces are recognized clearly when seen and are often viewed advantageously in professional, social and romantic relationships. On the other hand, self-perceived attractiveness is not well understood and has been mainly attributed to psychological and cognitive factors. Here we use 3-dimensional facial surface data of a large young adult population (n = 601) to thoroughly assess the effect of facial shape on self-perceived facial attractiveness. Our results show that facial shape had a measurable effect on self-perception of facial attractiveness in both sexes. In females, self-perceived facial attractiveness was linked to decreased facial width, fuller anterior part of the lower facial third and more pronounced middle forehead and root of the nose. Males favored a well-defined chin, flatter cheeks and zygomas, and more pronounced eyebrow ridges, nose and middle forehead. The findings of this study support the notion that self-perceived facial attractiveness is not only motivated by psychological traits, but objectively measured phenotypic traits also contribute significantly. The role of social stereotypes for facial attractiveness in modern society is also inferred and discussed.

Discussion

This study assesses the effect of facial shape variation on self-perceived facial attractiveness using three-dimensional data of a large young adult population. Current evidence suggests that facial attractiveness, as perceived by others, is related to averageness, symmetry, masculinity/femininity, and also to secondary characteristics, namely skin texture and tone, hair quality and style as well as eye color [9, 11, 12, 25, 26]. Intuitively, it might be expected that objective facial characteristics are only important when judging the esthetic appearance of an unfamiliar face, and not when performing a self-assessment of facial attractiveness. So far, self-perceived facial attractiveness is more commonly linked to internal processes related to individual self-concept and self-esteem [13, 14]. Here, we put this preconception under scrutiny and show that self-perceived facial attractiveness is also affected by objective factors, namely facial shape.

Comparisons of facial shape space (as described by Procrustes coordinates) between males and females in our sample demonstrated significant sexual dimorphism. Males, on average, presented a wider face with more prominent eyebrow ridges, a pronounced chin and more protruded nose. On the other hand, the average female face was narrower, with more protruded lips and cheeks, as well as more dominant eyes. These facial differences between males and females represent well known phenotypic expressions of sexual hormones during growth and development of the human face [27, 28]. Biologically, they might be related to inherent differences in lung capacity, body mass and distribution of adipose tissue, which to some degree influence facial anatomy [29].

Due to the distinct sexual differences, shape variation was also explored separately in males and females within the present sample. A more careful examination of the principal components explaining more than 50% of the variation within each sex (PC1–PC4) revealed that PC1 and PC2 described changes in the nose, lower facial height and midfacial width (Figs 3–6). Thus, the areas with the largest variation within the male and female population are similar to the areas presenting the largest differences between sexes (Fig 2), potentially signifying the different effect of sexual hormones not only between, but also within, same-sex populations. A genetic and environmental interpretation of the observed variation might also be plausible. Mapping of the genetic effect on human facial shape has identified a strong genetic control of the lower third of the face (primarily the chin) and the nose [30–32], in all large continental populations. In addition, anatomic investigations of human skulls from populations that lived in diverse climates show marked dissimilarities in the piriform and zygomatic areas between specimens from tropical and temperate areas [33]. These are attributed to evolutionary adaptations to climate conditions and manifest the presence of an additional environmental effect on facial morphology. From a biological view, our results fall within the above spectrum. PC3 and PC4 in our female and male populations mostly describe changes in the perioral region, the lips and the eyes. The genetic effect on these structures has also been demonstrated, with the distances between the eyes as well as between the eyes and the mouth presenting a high heritability effect [34].

The face is one of the most influential factors in human interactions; it contributes to effective communication and affects social and personal relationships [4, 9]. Facial dimorphism related to inherent sexual characteristics plays an important role in romantic relationships by shaping perceived impressions about mating quality, health and reproductive potential [3, 4]. Females, for example, exhibit an increased sexual preference for males with more masculine features during their ovulation period [9, 35]. During this time, females are also judged as more attractive by observers of the same or opposite sex [36].

Features of masculinity and femininity provide cues for physical and social traits, such as attractiveness, personality, trustworthiness, dominance and aggression [5, 8, 9, 12, 34, 37]. Typically, masculine male faces and feminine female faces are considered more attractive by both sexes, although this is only true for small deviations from the average face [11, 12, 37, 38]. Despite the extensive data supporting the above, the notion of universal attractiveness cues has been challenged, and there seem to be significant differences between populations [39]. It is suggested that attractiveness cues are learned within a social environment [37, 39] and, thus, many of our beliefs might be representative of western societies only. In the present study all participants were born and raised in the United States; therefore, the ethnic variability within our sample probably did not influence self-assessments considerably.

Our results showed that facial shape had a significant effect on self-perceived facial attractiveness and predicted 4% and 5% of the variation in VAS scores in females and males, respectively. Furthermore, females with more feminine features and males with more masculine features seemed to consider themselves more attractive, confirming the findings of numerous previous studies that have assessed attractiveness with external ratings. Given the multidimensionality of factors interfering with the process of self-assessment, our findings reveal the importance of facial shape, an objective factor, in partially steering people’s opinions about themselves. It has been suggested that self-perceived attractiveness is an acquired feature that evolves throughout the course of our lifetime according to our social interactions [40]. Furthermore, it strongly affects romantic relationships; individuals with high self-ratings of attractiveness set higher upper limits in their dating expectations regardless of their objective facial appearance [41]. The effect of facial shape becomes more noteworthy when taking into account that humans evaluate faces that resemble them as 22% more attractive [42]. This fairly narcissistic phenomenon implies that people are less likely to consider their objective appearance when making dating decisions and tend to adjust their attractiveness estimates of their potential dates according to their own appearance. The present study counters this idea, since young adults in our sample appeared to be influenced by the morphology of their faces when making their self-assessment. This is a sign that the intuitive process of making romantic or mating decisions may also be subconsciously influenced by more objective factors, such as an individual’s facial shape.
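
As a rough illustration of the analysis pipeline described above (Procrustes-aligned shape coordinates reduced by PCA, then regressed onto self-ratings), the sketch below runs on random placeholder data; the landmark count, component count, and variable names are assumptions, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: `shape` holds Procrustes-aligned 3-D landmark
# coordinates flattened to one row per participant; `vas` holds the
# self-rated attractiveness scores (visual analog scale).
rng = np.random.default_rng(0)
shape = rng.normal(size=(601, 3 * 50))   # e.g., 50 landmarks per face
vas = rng.uniform(0, 100, size=601)

pca = PCA(n_components=4)                # analogous to the paper's PC1-PC4
pcs = pca.fit_transform(shape)
print("shape variance explained:", pca.explained_variance_ratio_.sum())

# Share of VAS variance predicted by shape; the paper reports ~4-5%
# for the real data (this placeholder data will give ~0%).
model = LinearRegression().fit(pcs, vas)
print(f"R^2 = {model.score(pcs, vas):.3f}")
```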

A more in-depth exploration of our sample, in sub-groups, revealed that in white individuals the previously described effect of facial shape was not evident in males and was stronger in females, as compared to the entire female sample. Furthermore, no effect was evident in the subgroup of non-white females. Both of the above observations have significant social implications. Most beauty standards have been historically developed based on white facial features [43], and although beauty standards have evolved as western societies have become more multi-racial, our finding entertains the thought that young white females might experience more pressure to meet certain social standards of facial appearance. On the other hand, the effect of facial shape on self-perceived facial attractiveness may be mediated by the limited effect of other factors, such as skin texture. Coetzee et al. [44] studied a group of white and black individuals within a western society and observed that whites based their assessments of attractiveness primarily on facial shape, in contrast to blacks, who were more influenced by skin tone. They connected their finding to the large variety of darker skin tones, to which whites are visually oblivious compared to blacks. Our results support this conclusion, since self-perceptions of non-white females were not affected by their facial shape. Another reason could be that the increased facial shape variation in the non-white female group compared to the white female group might have added noise to the outcomes, preventing detection of a significant effect. However, the absence of statistical significance in this test was definitive, based on the measured p-value, and thus does not support this notion.

As mentioned before, the same observation was made here in white males, which may be subject to multiple interpretations. Males tend to have a higher self-esteem than females [45] and are more satisfied with their overall appearance [46]. This difference is unlikely to have a genetic or biological origin and seems to dissipate with age, since it is not seen in mature adults [45]. This universally seen phenomenon is rather a result of environmental factors that influence the development of self-esteem over a lifetime [47]. Western societies likely enable males to develop higher self-esteem than females, which in turn affects more acquired social features. If so, it can be speculated that the effect of self-esteem on self-perceived attractiveness in young males overshadows any other, more objective feature. This is also supported by the comparison of VAS scores between males and females in our population, which showed that males gave significantly higher attractiveness ratings to themselves (P < 0.001). The above considerations, together with the reduced sample size of the white male sample, might have precluded the detection of a significant effect in this specific sample.

Methodological considerations

The results of this study must be interpreted within the realm of the studied population and cannot be extrapolated to the general population. We have investigated a large group of young adults who were all highly educated and were born and had lived most of their lives in the United States. Despite their ethnic diversity, it may thus be assumed that their standards for facial attractiveness did not vary significantly. It must be noted that if the same study were repeated in an older population, the results might have been different due to changes in perception of attractiveness with age [48]. Here we did not report on the effect of age on the results, as an initial exploratory analysis revealed that it was not statistically significant.

In addition, participants were not able to look at their pictures prior to evaluating their facial attractiveness, which might have triggered a different response, had it been allowed. However, it was preferred to obtain more “genuine” answers that were not affected by the instant stimulus produced from prior exposure to their facial image.

Furthermore, the reliability testing did not include the image acquisition error related to the camera system, since this has been found to be minimal (approximately 0.2 mm) [49, 50]. Therefore, it was not considered to have a significant impact on the results.

Significance and implications

This study provides novel and important information regarding the effect of facial morphology on self-perceived facial attractiveness. Self-assessments of body image and attractiveness are largely performed under the scope of psychosocial evaluations. Thus, the effect of objective features is often understated. Here we show that objective facial appearance is important when humans make decisions about their own facial attractiveness. In addition, we provide support to the notion that even in multicultural, modern societies, beauty stereotypes have changed little and continue to have a strong impact. Our findings are particularly insightful for plastic surgeons, maxillofacial surgeons, orthodontists and other specialists who are involved in treatments affecting patients’ facial appearance and particularly facial shape. Facial shape was identified as a factor related to facial appearance, and thus, as an important element to consider when aiming to improve facial appearance. The latter is shown to be a reason for patients to seek treatment and a factor that affects patient satisfaction from a given intervention. In addition, the results of this study provide helpful information to clinical psychologists interested in aspects of human perception, and are of interest for the general public as facial appearance is an important feature of everyday human interactions.

“Important Conversations” Are Needed to Explain the Nocebo Effect

“Important Conversations” Are Needed to Explain the Nocebo Effect. Anita Slomski. JAMA, February 3, 2021. doi:10.1001/jama.2020.25840

Roger needed no convincing that taking a statin could prevent his early death. At age 52 years, he had mixed hyperlipidemia, severe peripheral vascular disease, obesity, fatty liver disease, and a previous femoral artery occlusion. But as he explained to the investigators of the SAMSON (Self-Assessment Method of Statin Side Effects or Nocebo) trial, he’d already tried 3 different statins and discontinued each one due to the dreadful muscle pain he felt while taking them.

“He was totally gobsmacked when we unblinded the results of SAMSON and showed him that his worst months—including muscle pain so bad he couldn’t get out of bed—were from placebo,” said cardiologist James P. Howard, MB BChir, clinical research fellow at Imperial College London and co–first author of the SAMSON report published in the New England Journal of Medicine. After discovering that he reported feeling fine during the months of the trial that he received a statin, Roger resumed statin therapy with no symptoms for the 4 years since receiving his personal results.

The novel n-of-1 trial validated what physicians have long observed: patients’ negative expectations for statin therapy rather than the drug’s pharmacological action are often responsible for intolerable adverse effects. SAMSON, in fact, found that 90% of adverse effects from statins were explained by this nocebo effect. “The nocebo effect is a massive burden; in our 60 patients, side effects were so bad that they had to come off the tablets on 71 occasions,” Howard said in an interview.

The 60 study participants, all of whom had previously discontinued statin therapy because of intolerable adverse effects, received 4 bottles each of 20-mg atorvastatin and placebo, and 4 empty bottles. Each month for a year, participants took pills or nothing in a random sequence and recorded their daily symptom intensity on their smartphones.


“To work out the nocebo effect, it’s imperative that you have a nontreatment arm where the patient takes nothing so you can subtract the background symptoms that are ever-present, such as the aches and pains of getting older or of arthritis, for example,” Howard said. “As far as we know, this is the first time anyone has done such a trial.”
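
Howard's subtraction logic reduces to a simple ratio. The sketch below uses symptom-intensity means approximating those published for SAMSON; treat the exact numbers as illustrative:

```python
# Mean daily symptom-intensity scores (0-100 scale) by month type;
# illustrative values approximating the published SAMSON means.
no_tablet = 8.0   # background symptoms (empty-bottle months)
placebo = 15.4
statin = 16.3

# Share of the statin-attributed symptom burden that placebo alone
# reproduces, after subtracting the ever-present background symptoms:
nocebo_ratio = (placebo - no_tablet) / (statin - no_tablet)
print(f"nocebo ratio = {nocebo_ratio:.2f}")  # ~0.89 here; SAMSON reported 0.90
```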


At the end of the trial, patients saw how they rated their symptoms during the 3 treatment sequences, which was compelling enough to convince half of them to resume statin therapy. “Only 18 of the original 60—less than one-third—told us that they weren’t restarting statins because they still believed they caused side effects,” Howard said.


Although trials in other journals including JAMA and The Lancet have reported nocebo effects in statin therapy, SAMSON stood out because the study design demonstrated to patients themselves that the nocebo effect is real.

The study’s participants had first-hand evidence that “just the simple act of taking a pill, where they might have been expecting side effects, explained much of the symptoms,” Donald Lloyd-Jones, MD, ScM, president-elect of the American Heart Association (AHA), told JAMA last fall during the AHA’s virtual Scientific Sessions conference.

However, some experts question the magnitude of the nocebo effect in SAMSON’s results. “It’s easy for me to believe that 50% to 60% of statin side effects are nocebo, but not 90%,” Steven E. Nissen, MD, chief academic officer and Lewis and Patricia Dickey Chair in Cardiovascular Medicine at the Cleveland Clinic, said in an interview. “Some patients who have tried very hard to take statins have a real disorder” that prevents them from taking statins.

Howard agrees that his results shouldn’t be extrapolated to all patients who take statins. “In a larger trial, you might find a 70% or 95% nocebo effect,” he said. What’s important “is that the nocebo effect dominates in a majority of patients on a statin and that real side effects are much rarer than we thought.”

For physicians, that means explaining the nocebo effect. “We have to have very important conversations with our patients rather than just writing a prescription, actually telling them what to expect,” said Lloyd-Jones, also chair of preventive medicine at Northwestern Medicine Feinberg School of Medicine.


A Bad Rap Fuels the Nocebo Effect


Although much less studied than the placebo effect, the nocebo response has been demonstrated in a variety of therapies in experimental and real-world settings. A recent review article in the New England Journal of Medicine cited several striking examples. When New Zealand pharmacies switched to a new formulation of thyroid hormone replacement medication, reports of adverse events increased 2000-fold, even though the drug’s active ingredient remained unchanged. Nearly a third of study participants taking the β-blocker atenolol for cardiac disease and hypertension developed sexual adverse effects and erectile dysfunction when they were warned of the potential side effects compared with 16% who weren’t informed of possible adverse effects. Patients have blocked the analgesic effects of the potent opioid remifentanil when falsely told it would increase pain.

“The nocebo effect has been described in biosimilars used in autoimmune diseases, when patients believe the drugs are less effective than the original biologics,” Luana Colloca, MD, PhD, the review’s first author and associate professor in the Department of Pain and Translational Symptom Science at the University of Maryland School of Nursing, said in an interview.

“We know that allergic reactions can be amplified by nocebo, such as people continuing to have symptoms of gluten intolerance even after receiving a negative diagnosis,” Colloca added. And 30% of women receiving chemotherapy for breast cancer developed anticipatory nausea from previously neutral environmental cues, such as meeting an oncology nurse at the grocery store or being in a room painted the same color as the infusion room, her review noted.

The greater a patient’s negative perception of a therapy, the stronger the nocebo response. “The patients we give statins to are the same patients who get prescriptions for angiotensin-converting enzyme inhibitors, hypertension treatment, and aspirin,” Howard said. “Patients don’t start ramipril for hypertension and say they feel terrible. People view statins much more negatively and with more skepticism.”

Concerns about statins began when the US Food and Drug Administration (FDA) required statin labels to list rhabdomyolysis as a potential serious adverse effect after an early statin was withdrawn from the market. “That made some doctors edgy about statins,” and, in turn, patients, Howard said.

Then as professional societies advised that statins could benefit a greater number of people, a baseless claim began circulating “that statins were developed to enrich the pharmaceutical industry and that doctors are in bed with big pharma, pushing cholesterol drugs,” Nissen said. Companies selling “natural” products to lower cholesterol have also contributed to perceptions that statins are harmful.

A new study that tracked statin adverse effects reported to the FDA’s Adverse Event Report System found that significantly more nocebo-related subjective adverse events than harms substantiated by clinicians have been reported in the last decade. Complaints of nocebo-effect symptoms—but not objective adverse events—peaked whenever the FDA issued a statin warning. One such warning occurred in 2010 when an increased risk of myopathy was observed with high-dose simvastatin.

Bad publicity has also dogged bisphosphonates after reports emerged of women developing esophageal ulcers after taking the drug to treat osteoporosis. “These patients took the bisphosphonate incorrectly—dry swallowing it or taking it while lying down—and they refluxed alendronic acid into the esophagus,” David Karpf, MD, adjunct clinical professor of endocrinology, gerontology, and metabolism at Stanford University School of Medicine, said in an interview.

In the large population-based fracture-prevention trial that Karpf led, serious gastric adverse events were higher in the placebo group than in the bisphosphonate group. “We told participants that the drug is effective in preventing fractures and is generally well tolerated, and lo and behold, we had excellent compliance in the trial,” he said.


“I think the nocebo effect demonstrated in the SAMSON study is generalizable to any drug that has been studied in large populations and shown to be well tolerated but with some side effects, like bisphosphonates,” Karpf added. Drugs approved to treat asymptomatic chronic diseases have passed a high bar for safety, and, therefore, should be more tolerable to patients, he said. But at the same time, the nocebo effect may be stronger for drugs used to prevent disease in asymptomatic patients. “People aren’t getting any therapeutic satisfaction from taking a statin, but they are reading about muscle damage when they Google statins,” said Howard.


These Symptoms Aren’t Phony


Clinicians generally have an inkling about which patients may be vulnerable to the nocebo effect, such as those with a history of anxiety or depression. Other tip-offs are patients who say they’re very sensitive to medications or hate taking them or who mention a long list of symptoms that their previous physicians couldn’t diagnose, according to Arthur Barsky, MD, professor of psychiatry at Harvard Medical School.

Lack of trust in the clinician can also prompt a nocebo response. “A patient reporting side effects can often be a commentary on the doctor-patient relationship,” Barsky said in an interview. “If you aren’t sure your doctor has made the right diagnosis or you aren’t comfortable with your doctor, it’s easier to say you’ll stop taking a drug because it causes headaches than to say, ‘I don’t trust you.’”

Patients who report nocebo symptoms are feeling real distress—but misattributing it to the drug. In reality, their symptoms may be caused by aging, not eating well, stress, or the underlying disease itself. “For patients with difficult lives, side effects to statins can be a nidus for their emotional pain,” Jennifer Robinson, MD, MPH, professor of epidemiology at the University of Iowa College of Public Health and lead author of statin guidelines for the National Lipid Association, said in an interview.

It’s important for clinicians to acknowledge nocebo symptoms as real but “to discount their medical significance by telling patients that the symptoms they are experiencing aren’t harmful or aren’t an indication that the drug is dangerous,” Barsky said. “The more you are worried about what a drug will do to your body, the more you will monitor side effects and the more intense they will become.”

If patients appear hesitant about starting or continuing a particular drug, clinicians should ask what their worries are, Colloca suggested. “The nocebo effect can occur if a patient has incorrect information about a drug or has had prior negative medication experiences,” she said. Physicians can point to trials of the drug showing that participants in the placebo group had similar adverse effects as those on the active drug. “Allaying the patient’s concerns can make the drug more tolerable,” Colloca said.

Physicians may also be able to head off a nocebo effect by emphasizing a drug’s efficacy, tolerability, and safety rather than mentioning rare adverse effects. “I tell patients that statins have been studied in a quarter of a million people and are safer than aspirin,” Robinson said.

When switching patients with rheumatoid arthritis from a biologic therapy to a less costly biosimilar, Roy Fleischmann, MD, clinical professor of medicine at the University of Texas Southwestern Medical Center, takes pains to explain the efficacy and safety of the biosimilar and that the vast majority of patients respond well to it. “There is a perception—among physicians, too—that if a drug is cheaper, the quality is not as good as the original biologic,” Fleischmann said in an interview. “It’s important for physicians to assure themselves and convey to their patients that the biosimilar has been manufactured according to FDA standards.”

Although clinicians must disclose a drug’s potentially dangerous adverse effects, patients can decide if they also want to be informed about potential minor adverse effects. So-called authorized concealment avoids priming patients to experience adverse effects through the potent power of suggestion.

Among patients who initially say they cannot tolerate a statin, up to 90% can successfully return to daily moderate or high-intensity statin therapy when physicians use strategies to mitigate the nocebo effect, according to Robinson. It may take trials of a few different statins or starting a patient on a 5-mg dose once a week and gradually increasing the dose to overcome nocebo adverse effects, she said.

SAMSON’s Howard said he’ll rechallenge patients with a different statin but disagrees with the strategy of inching patients along on low doses to increase tolerability. “You can’t tell patients that the side effects aren’t caused by statins and then start them at a low dose of another one,” he said. “Either you believe that the side effects are due to the nocebo effect, or you believe they are biochemical and then you go with a low dose. Sending mixed messages isn’t helpful.”

For patients at low risk of myocardial infarction or stroke who continue to experience muscle pain after trying 2 different statins, Howard will switch them to the nonstatin ezetimibe. “But if the goal of the statin is for secondary prevention, you are duty-bound to try a lot harder with these patients, whether that means rechallenging with another statin or using a PCSK9 [proprotein convertase subtilisin/kexin type 9] inhibitor,” Howard said.

“We have effective drugs to treat major diseases that have a huge societal impact, such as diabetes, heart disease, osteoporosis,” said Stanford University’s Karpf. “But we need to work harder to improve patients’ adherence to these lifesaving therapies. I think the SAMSON study is one step in that direction.”


Alcohol conditioned contexts enhance positive subjective alcohol effects and consumption

Alcohol conditioned contexts enhance positive subjective alcohol effects and consumption. Joseph A. Lutz, Emma Childs. Behavioural Processes, February 3, 2021, 104340. https://doi.org/10.1016/j.beproc.2021.104340

Highlights

• Alcohol-paired environments enhanced positive subjective responses to alcohol.

• Alcohol-paired environments promoted alcohol drinking.

• Conditioning strength predicted early drinking in a context-dependent manner.

• Human CPP is a viable model to study alcohol environment associations.

• This approach may reveal the mechanisms by which contexts induce drinking.

• The model may be used to test strategies to prevent context-induced drinking.

Abstract: Associations between alcohol and the places it is consumed are important at all stages of alcohol abuse and addiction. However, it is not clear how the associations are formed in humans or how they influence drinking, and there are few effective strategies to prevent their pathological effects on alcohol use. We used a human laboratory model to study the effects of alcohol environments on alcohol consumption. Healthy regular binge drinkers completed conditioned place preference (CPP) with 0 vs. 80 mg/100 ml alcohol (Paired Group). Control participants (Unpaired Group) completed sessions without explicit alcohol-room pairings. After conditioning, participants completed alcohol self-administration in either the alcohol- or no alcohol-paired room. Paired group participants reported greater subjective stimulation and euphoria, and consumed more alcohol in the alcohol-paired room in comparison to the no alcohol-paired room, and controls tested in either room. Moreover, the strength of conditioning significantly predicted drinking; participants who exhibited the strongest CPP consumed the most alcohol in the alcohol-paired room. This is the first empirical evidence that laboratory-conditioned alcohol environments directly influence drinking. The results also confirm the viability of the model to examine the mechanisms by which alcohol environments stimulate drinking and to test strategies to counteract their influence on behavior.

Abbreviations: CPP, conditioned place preference; ALC, 80 mg alcohol/100 ml blood; No ALC, 0 mg alcohol/100 ml blood; BrAC, breath alcohol concentration; HR, heart rate; BP, blood pressure.

Keywords: Alcohol, conditioned place preference, context, cues, human, self-administration


88% of adolescents experienced no or very small effects of social media use on self-esteem, whereas 4% experienced positive and 8% negative effects

Social Media Use and Adolescents’ Self-Esteem: Heading for a Person-Specific Media Effects Paradigm. Patti Valkenburg, Ine Beyens, J Loes Pouwels, Irene I van Driel, Loes Keijsers. Journal of Communication, jqaa039, January 31 2021, https://doi.org/10.1093/joc/jqaa039

Rolf Degen's take: For the vast majority of youth, social media use had little or no effect on self-esteem, while small minorities experienced improvement or worsening. https://t.co/rBMHXvhpUn https://t.co/UHeD3Fortg

Abstract: Eighteen earlier studies have investigated the associations between social media use (SMU) and adolescents’ self-esteem, finding weak effects and inconsistent results. A viable hypothesis for these mixed findings is that the effect of SMU differs from adolescent to adolescent. To test this hypothesis, we conducted a preregistered three-week experience sampling study among 387 adolescents (13–15 years, 54% girls). Each adolescent reported on his/her SMU and self-esteem six times per day (126 assessments per participant; 34,930 in total). Using a person-specific, N = 1 method of analysis (Dynamic Structural Equation Modeling), we found that the majority of adolescents (88%) experienced no or very small effects of SMU on self-esteem (−.10 < β < .10), whereas 4% experienced positive (.10 ≤ β ≤ .17) and 8% negative effects (−.21 ≤ β ≤ −.10). Our results suggest that person-specific effects can no longer be ignored in future media effects theories and research.


Discussion

The two existing meta-analyses on the relationship of SMU and self-esteem assessed the effects of their included empirical studies as weak and their results as mixed (Huang, 2017; Liu & Baumeister, 2016). The between-person associations reported in empirical studies on SMU and self-esteem ranged from +.22 (Apaolaza et al., 2013) to −.28 (Rodgers et al., 2020). In the current study, the between-person association between SMU and self-esteem fits within this range: We found a negative relationship of r = −.15 between SMU and self-esteem (RQ1), meaning that adolescents who spent more time on social media across a period of three weeks reported a lower level of self-esteem than adolescents who spent less time on social media. This negative relationship pertained to the summed usage of Instagram, Snapchat, and WhatsApp, but did not differ for the usage of each of the separate platforms.

In addition, although we hypothesized a positive overall within-person effect of SMU on self-esteem (H1), we found a null effect. However, this overall null effect must be interpreted in light of the supportive results for our second hypothesis (H2), which predicted that the effect of SMU on self-esteem would differ from adolescent to adolescent. We found that the majority of participants (88%) experienced no or very small positive or negative effects of SMU on changes in self-esteem (−.10 < β < .10), whereas one small group (4%) experienced positive effects (.10 ≤ β ≤ .17), and another small group (8%) negative effects of SMU (−.21 ≤ β ≤ −.10) on self-esteem.

The person-specific effect sizes reported in the current study pertain to SMU effects on changes in self-esteem (i.e., self-esteem controlled for previous levels of self-esteem). As Adachi and Willoughby (2015, p. 117) argue, such effect sizes are often “dramatically” smaller than those for outcomes that are not controlled for their previous levels. Indeed, when we checked this assumption of Adachi & Willoughby, the associations between SMU and self-esteem not controlled for its previous levels resulted in a considerably wider range of effect sizes (β = −.34 to β = +.33) than those that did control for previous levels (β = −.21 to β = +.17). To account for a potential undervaluation of effect sizes in autoregressive models, Adachi and Willoughby (2015, p. 127) proposed “a more liberal cut-off for small effects in autoregressive models (e.g., small = .05).” In this study, we followed our preregistration and interpreted effect sizes ranging from −.10 < β < +.10 as non-existent to very small. However, if we were to apply the guideline proposed by Adachi and Willoughby (2015) to our results, the distribution of effect sizes would lead to 21% negative susceptibles, 16% positive susceptibles, and 63% non-susceptibles.
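
The reclassification described above is plain thresholding of the person-specific DSEM estimates. A sketch (with simulated betas standing in for the real 387 estimates) of how the susceptibility percentages shift with the cutoff:

```python
import numpy as np

def classify(betas, cutoff):
    """Split person-specific effect sizes into negative / non- / positive
    susceptibles at a symmetric cutoff; returns percentages."""
    betas = np.asarray(betas)
    neg = np.mean(betas <= -cutoff) * 100
    pos = np.mean(betas >= cutoff) * 100
    return neg, 100 - neg - pos, pos

# One DSEM estimate per adolescent; simulated stand-ins here.
rng = np.random.default_rng(1)
betas = rng.normal(loc=0.0, scale=0.08, size=387)

for cutoff in (0.10, 0.05):  # preregistered cutoff vs. Adachi & Willoughby's
    neg, non, pos = classify(betas, cutoff)
    print(f"cutoff {cutoff:.2f}: {neg:.0f}% negative, "
          f"{non:.0f}% non-susceptible, {pos:.0f}% positive")
```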

Our results showed that the effects of SMU on self-esteem are unique for each individual adolescent, which may, in turn, explain why the two meta-analyses evaluated the effects of their included studies as weak and their results as inconsistent. First, our results suggest that these effects were weak because they were diluted across a heterogeneous sample of adolescents with different susceptibilities to the effects of SMU. This suggestion is supported by comparing our overall within-person effect (β = .01, ns) with the full range of person-specific effects, which ranged from moderately negative to moderately positive. Second, the effects reported in earlier studies may have been inconsistent because these studies may, by chance, have slightly oversampled either “positive susceptibles” or “negative susceptibles.” After all, if a sample is somewhat biased towards positive susceptibles, the results would yield a moderately positive overall effect. Conversely, if a sample is somewhat biased towards negative susceptibles the results would report a moderately negative overall effect.

It may seem reassuring at first sight that the vast majority of participants in our study did not experience sizeable negative effects of SMU on their self-esteem. However, as illustrated in the bottom N = 1 time-series plot in Figure 2, for some participants, a non-significant within-person effect may result from strong social media-induced ups and downs in self-esteem that cancel each other out across time, resulting in a net null effect. Moreover, as the two upper time-series plots in Figure 2 show, not only the non-susceptibles, but also the positive and negative susceptibles sometimes experienced effects in the opposite direction: The positive susceptibles occasionally experienced negative effects, while the negative susceptibles occasionally experienced positive effects.

Although DSEM models enable researchers to demonstrate how within-person effects of SMU differ across persons, they do not (yet) allow us to statistically evaluate the presence of both positive and negative effects within one and the same person (Hamaker, 2020, personal communication). A possibility to analyze the combination of positive and negative effects within persons may soon be offered by even more advanced modeling strategies than DSEM, which are currently undergoing a rapid development. Among those promising developments are regime switching models (Lu et al., 2019), which provide the opportunity to establish the co-occurrence of both positive and negative effects of SMU within single persons.

Explanatory Hypotheses and Avenues for Future Research

Although our study allowed us to reveal the prevalence of positive susceptibles, negative susceptibles, and non-susceptibles among participants, it did not investigate why and when some adolescents are more susceptible to SMU than others. Our exploratory results did show that adolescents with a lower mean level of self-esteem experienced a more positive within-person effect of SMU on self-esteem than adolescents with a higher mean level of self-esteem. This latter result may point to a social compensation effect (Kraut et al., 1998), indicating that adolescents who are low in self-esteem may successfully seek out social media to enhance their self-esteem. Our DSEM analysis did not reveal differences in the within-person effects of SMU on self-esteem among adolescents with high and low SMU, suggesting that the positive effects among some adolescents cannot be attributed to modest SMU, whereas the negative effects among other adolescents cannot be attributed to excessive SMU.

An important next step is to further explain why adolescents differ in their susceptibility to SMU. A first explanation may be that adolescents differ in the valence (the positivity or negativity) of their experiences while spending time on social media. It is, for example, possible that the positive susceptibles experience mainly positive content on social media, whereas the negative susceptibles experience mainly negative content. In this study, we focused on time as a predictor of momentary ups and downs in self-esteem. However, most self-esteem theories emphasize that it is the valence rather than the duration of social experiences that results in self-esteem fluctuations. It is assumed that self-esteem goes up when we succeed or when others accept us, and drops when we fail or when others reject us (Leary & Baumeister, 2000). Future research should, therefore, extend our study by investigating to what extent the valence of experiences on social media accounts for differences in susceptibility to the effects of SMU above and beyond adolescents’ time spent on social media.

A second explanation as to why adolescents differ in their susceptibility to the effects of SMU may lie in person-specific susceptibilities to the positivity bias in social media. Our first hypothesis was based on the idea that the sharing of positively biased information would elicit reciprocal positive feedback from fellow users, which, in turn, would lead to overall improvements in self-esteem. However, our results suggest that, for some adolescents, this positivity bias may lead to decreases in self-esteem, for example, because of their tendency to compare themselves to other social media users who they perceive as more beautiful or successful. This tendency towards social comparison may lead to envy (e.g., Appel et al., 2016) and decreases in self-esteem (Vogel et al., 2014).

Until now, studies investigating the positive feedback hypothesis have mostly focused on the positive effects of feedback on self-esteem (e.g., Valkenburg et al., 2017), whereas studies examining the social comparison hypothesis have mainly focused on the negative effects of social comparison on self-esteem (e.g., Vogel et al., 2014). However, both the positive feedback hypothesis and the social comparison hypothesis are more complex than they may seem at first sight. First, although most adolescents receive positive feedback while using social media, a minority frequently receives negative feedback (Koutamanis et al., 2015), and may experience resulting decreases in self-esteem. Likewise, although social comparison may lead to envy, it may also lead to inspiration (e.g., Meier & Schäfer, 2018), and resulting increases in self-esteem. Future research should attempt to reconcile these explanatory hypotheses by investigating who is particularly susceptible to positive and/or negative feedback, and who is particularly susceptible to the positive (e.g., inspiration) and/or negative (e.g., envy) effects of social comparison on social media.

Another possible explanation for differences in person-specific effects of SMU on self-esteem may lie in differences in the specific contingencies on which adolescents’ self-esteem is based. Self-esteem contingency theory (Crocker & Brummelman, 2018) recognizes that people differ in the areas of life that serve as the basis of their self-esteem (Jordan & Zeigler-Hill, 2013). For example, for some adolescents their physical appearance may serve as the basis of their self-esteem, whereas others may base their self-esteem on peer approval. Different contexts may also activate different self-esteem contingencies (Crocker & Brummelman, 2018). On the soccer field, athletic ability is valued, which may activate the athletic ability contingency in this context. On social media, physical appearance and peer approval may be relevant, so that these contingencies may particularly be triggered in the social media context. It is conceivable that adolescents who base their self-esteem on appearance or peer approval may be more susceptible to the effects of SMU than adolescents who base their self-esteem less on these contingencies, and this is, therefore, another important avenue for future research.

Stimulating Positive and Mitigating Negative Effects

Our results suggest that for the majority of adolescents the momentary effects of SMU are small or negligible. As discussed though, all adolescents—whether they are positive susceptibles, negative susceptibles, or non-susceptibles—may occasionally experience social media-induced drops in self-esteem. Social media have become a fixture in adolescents’ social life, and the use of these media may thus result in negative experiences among all adolescents. Therefore, not only the negative susceptibles, but all adolescents need their parents or educators to help them prevent, or cope with, these potentially negative experiences. Parents and educators can play a vital role in enhancing the positive effects of SMU and combatting the negative ones. Helping adolescents prevent or process negative feedback and explaining that the social media world may not be as beautiful as it often appears, are important ingredients of media-specific parenting as well as school-based media literacy programs.

Although this study was designed to contribute to (social) media effects theories and research, our analytical approach may also have social benefits. After all, N = 1 time-series plots could be helpful not only for theory building, but also for person-specific advice to adolescents. These plots give a comprehensive snapshot of each adolescent’s experiences and responses across more or less prolonged time periods. Such information could greatly help tailor prevention and intervention strategies to different adolescents. After all, only if we know which adolescents are more or less susceptible to the negative and positive effects of social media are we able to adequately target prevention and intervention strategies at these adolescents.

Towards a Personalized Media Effects Paradigm

Insight into person-specific susceptibilities to certain environmental influences is burgeoning in several disciplines. For example, in medicine, personalized medicine is on the rise. In education, personalized learning is booming. And in developmental psychology, differential susceptibility theories are among the most prominent theories to explain heterogeneity in child development. Although N = 1 or idiographic research is now progressively embraced in multiple disciplines, spurred by recent methodological developments, it has a long history behind it. In fact, in the first two decades of the 20th century, scholars such as Piaget, Pavlov, and Thorndike often conducted case-by-case research to develop and test their theories bottom up (i.e., from the individual to the population; Robinson, 2011). However, in the 1930s, idiographic research soon lost ground to nomothetic approaches, certainly after Francis Galton attached the term nomothetic to the aggregated group-based methodology that is still common in quantitative research (Robinson, 2011). However, due to technological advancements, it has become feasible to collect masses of intensive longitudinal data from masses of individuals on the uses and effects of social media (e.g., through ESM, tracking). Moreover, rapid developments in data mining and statistical methods now also enable researchers to analyze highly complex N = 1 data, and by doing so, to develop and investigate media effects and other communication theories bottom-up rather than top-down (i.e., from the population to the individual). We hope that this study may be a very first step toward a personalized media effects paradigm.

Although people clearly moralize diverse concerns—including those related to religion, sex, and food—heterogeneity in conceptual definitions is problematic for theory development and makes falsification extremely difficult

Gray, Kurt, Nicholas DiMaggio, Chelsea Schein, and Frank Kachanoff. 2021. “What Is 'purity'? Conceptual Murkiness in Moral Psychology.” PsyArXiv. February 3. doi:10.31234/osf.io/vfyut

Abstract: Purity is an important topic in psychology. It has a long history in moral discourse, has helped catalyze paradigm shifts in moral psychology, and is thought to underlie political differences. But what exactly is “purity?” To answer this question, we review the history of purity and then systematically examine 158 psychology papers that define and operationalize (im)purity. In contrast to the many concepts defined by what they are, purity is often understood by what it isn’t—obvious dyadic harm. Because of this “contra”-harm understanding, definitions and operationalizations of purity are quite varied. Acts used to operationalize impurity include taking drugs, eating your sister’s scab, vandalizing a church, wearing unmatched clothes, buying music with sexually explicit lyrics, and having a messy house. This heterogeneity makes purity a “chimera”—an entity composed of various distinct elements. Our review reveals that the “contra-chimera” of purity has 9 different scientific understandings, and that most papers define purity differently from how they operationalize it. Although people clearly moralize diverse concerns—including those related to religion, sex, and food—such heterogeneity in conceptual definitions is problematic for theory development. Shifting definitions of purity provide “theoretical degrees of freedom” that make falsification extremely difficult. Doubts about the coherence and consistency of purity raise questions about key purity-related claims of modern moral psychology, including the nature of political differences and the cognitive foundations of moral judgment.


Children with relatively high narcissism levels tend to emerge as leaders, even though they may not excel as leaders

Narcissism and Leadership in Children. Eddie Brummelman, Barbara Nevicka, Joseph M. O’Brien. Psychological Science, February 3, 2021. https://doi.org/10.1177/0956797620965536

Rolf Degen's take: Narcissistic children often attained the leadership position in the classroom even though they did not possess the leadership qualities they thought themselves to have.

Abstract: Some leaders display high levels of narcissism. Does the link between narcissism levels and leadership exist in childhood? We conducted, to our knowledge, the first study of the relationship between narcissism levels and various aspects of leadership in children (N = 332, ages 7–14 years). We assessed narcissism levels using the Childhood Narcissism Scale and assessed leadership emergence in classrooms using peer nominations. Children then performed a group task in which one child was randomly assigned as leader. We assessed perceived and actual leadership functioning. Children with higher narcissism levels more often emerged as leaders in classrooms. When given a leadership role in the task, children with higher narcissism levels perceived themselves as better leaders, but their actual leadership functioning did not differ significantly from that of other leaders. Specification-curve analyses corroborated these findings. Thus, children with relatively high narcissism levels tend to emerge as leaders, even though they may not excel as leaders.

Keywords: narcissism, leadership, childhood, open data, open materials

Our randomized study examined the relationship between narcissism levels and various aspects of leadership in childhood. Narcissism was assessed as a continuous personality trait using the Childhood Narcissism Scale (Thomaes et al., 2008). Children with higher narcissism levels more often emerged as leaders in their classrooms and had more positive views of their own leadership functioning. Yet when they actually had to lead a group, their leadership functioning did not differ significantly from that of other leaders. Indeed, as leaders, children with higher narcissism levels did not differ significantly from other leaders in how much leadership behavior they displayed, how positively they were perceived by their followers, or how their group performed. Specification-curve analyses demonstrated the robustness of our findings.

Theoretical implications

Children with relatively high narcissism levels tended to emerge as leaders in their classrooms, even though they did not actually excel as leaders. How is that possible? According to evolutionary theories of self-deception (von Hippel & Trivers, 2011), self-deception has evolved to facilitate interpersonal deception. Because children with relatively high narcissism levels truly believe they make amazing leaders, they may confidently convince others of their leadership skills without having to suppress or hide any self-doubt. These children may thus acquire leadership positions and other social resources.

What is unique about narcissism and leadership in childhood? Like their adult counterparts (Grijalva et al., 2015), children with relatively high levels of narcissism tend to emerge as leaders. Yet unlike their adult counterparts (Nevicka, Ten Velden, et al., 2011), children with relatively high narcissism levels in leadership roles do not tend to significantly harm their group’s performance. Narcissism may have fewer interpersonal costs in childhood than in adulthood (Poorthuis et al., 2019), perhaps because children are generally less socially dominant than adults (Roberts et al., 2006), making them less inclined to act against their group’s interests.

Research in adults suggests that narcissism levels are underpinned by agentic and antagonistic traits (Back & Morf, 2018; Krizan & Herlache, 2018). The association between narcissism levels and leadership may be driven, in part, by agentic traits (e.g., self-confidence; Grijalva et al., 2015; Watts et al., 2013). For example, when adults with relatively high narcissism levels enter a new peer group, their agentic traits predict initial increases in popularity (Leckelt et al., 2015). In our study, agentic traits did not significantly mediate the association between narcissism levels and leadership emergence. Agentic traits did, however, fully mediate the association between narcissism levels and self-perceived leadership functioning. Thus, agentic traits helped explain why children with higher narcissism levels perceived themselves more favorably as leaders—an important step toward developing a leadership identity (Murphy & Johnson, 2011).
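
For readers unfamiliar with the statistical claim of "full mediation," here is a minimal Python sketch of the product-of-coefficients logic behind such a test (our own illustration on simulated data, not the paper's analysis; the variable names and effect sizes are assumptions): the indirect effect a*b runs from narcissism through agentic traits to self-perceived leadership, while the direct path c' is near zero.

```python
# Product-of-coefficients mediation sketch on simulated data.
import numpy as np

rng = np.random.default_rng(7)
n = 332  # sample size borrowed from the study, for realism only

narcissism = rng.normal(size=n)
# Assumed structure: narcissism -> agentic traits -> self-perceived leadership,
# with no direct narcissism -> outcome path (i.e., "full" mediation).
agentic = 0.5 * narcissism + rng.normal(size=n)
self_perceived = 0.6 * agentic + 0.0 * narcissism + rng.normal(size=n)

def ols(X, y):
    """OLS coefficients for y ~ X (intercept prepended)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(narcissism, agentic)[1]                       # path a: X -> M
coefs = ols(np.column_stack([agentic, narcissism]), self_perceived)
b, c_prime = coefs[1], coefs[2]                       # path b (M -> Y) and direct c'

print(f"indirect effect (a*b): {a * b:+.3f}")
print(f"direct effect (c'):    {c_prime:+.3f}")       # near zero => full mediation
```

In the simulated data, the indirect effect is clearly nonzero while the direct effect hovers near zero, which is the pattern the authors report for self-perceived leadership (in practice, the indirect effect's significance would be assessed with bootstrapped confidence intervals).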

Strengths, limitations, and future directions

Strengths of our study include its developmental focus, its experimental design, and its multimethod and multisource assessments of leadership functioning. Our study also has limitations. First, our study was not preregistered. Although specification-curve analyses demonstrate the robustness of our findings, we call for well-powered replications. Second, the nature of childhood leadership is understudied. We captured leadership emergence using peer nominations and captured leadership functioning using a collaborative task (Gummerum et al., 2014). Supporting the task’s validity, results showed that leaders displayed more leadership behavior than did their followers, and the more leadership behavior they displayed, the better their group performed. Also, followers rated leaders in better-performing groups as more effective. We call for more research on the construct validity of childhood leadership. For example, are more popular children also more likely to emerge as leaders? And does children’s leadership functioning vary across contexts (e.g., high vs. low intergroup competition)?

Our research also points to new research directions. An exciting direction will be to examine leadership as it emerges naturally in groups and develops across the life span. In our study, we randomly assigned one child to be the leader. Would children with relatively high narcissism levels perform better as leaders and would they be more valued by their followers when they have truly earned their leadership roles? If so, would they be more likely to become successful leaders in adulthood? And would their success be driven by their agentic or antagonistic traits (Leckelt et al., 2015)? Research has begun to examine how adults with relatively high narcissism levels attain career success, and how success, in turn, shapes them (Wille, Hofmans, Lievens, Back, & De Fruyt, 2019). Addressing these issues will elucidate how narcissism levels and leadership intersect across the life span.