Thursday, February 10, 2022

Enjoyable experiences go stale in three distinct temporal profiles, and the patterns of "hedonic decline" are stable across time and stimuli, within an individual

Identifying the temporal profiles of hedonic decline. Jeff Galak, Jinwoo Kim, Joseph P. Redden. Organizational Behavior and Human Decision Processes, Volume 169, March 2022, 104128. https://doi.org/10.1016/j.obhdp.2022.104128

Highlights

• Hedonic decline unfolds in three distinct temporal profiles (shapes): Flat, Steady Decline, Rapid Onset Decline.

• Hedonic decline temporal profiles are stable across time and stimuli, within an individual.

• Hedonic decline temporal profiles can be explained by variation in Need for Cognition.

• Hedonic decline temporal profiles have significant downstream consequences on future consumption choice and timing.

• Understanding how hedonic decline temporal profiles unfold can be of great benefit to individuals and to organizations.

Abstract: The unfortunate reality of the human condition is that enjoyable experiences become less enjoyable with time and repetition. This hedonic decline has been well documented across a variety of stimuli and experiences. However, previous work has largely ignored the possibility that the temporal profile of hedonic decline varies at the individual level. In the present work, we first identify three temporal profiles of hedonic decline: flat, steady decline, and rapid onset decline. We next demonstrate that these temporal profiles of hedonic decline are relatively stable across both stimuli and time for any given individual. That is, a temporal profile observed for one stimulus can be used to predict the temporal profile of hedonic decline for a novel stimulus or the same stimulus at a future date. We further explore the psychological underpinnings of these differences and note that Need for Cognition, a stable personality trait, partially explains which individuals will be more likely to experience different temporal profiles. Finally, we demonstrate two important downstream consequences of these three different temporal profiles of hedonic decline: re-consumption choice and re-consumption timing. This work provides a first look into the various ways in which hedonic decline operates at an individual level and documents predictable heterogeneity in such tendencies, an important departure from previous research looking at hedonic decline in aggregate.

7. General discussion

Across five studies we demonstrate that hedonic decline tends to follow one of three distinct patterns: Rapid Onset Decline, Steady Decline, or Flat. Rapid Onset Decline is characterized by a fast initial decline in enjoyment that tapers off over time. Steady Decline is characterized by the opposite in that there is little hedonic decline at first, but then hedonic decline accelerates once a threshold is seemingly reached. Finally, Flat is characterized by little to no hedonic decline at all. These three temporal profiles consistently emerged across stimuli including food, music, art, and videos. Critically, ex ante, it is not obvious that these are the three temporal profiles that must emerge from such an investigation. Indeed, increases in enjoyment (of various temporal profiles), linear decreases in enjoyment, irregular and/or cyclical changes in enjoyment, or simply no clustering at all were all plausible as common profiles.

Instead, we consistently observed the same three temporal profiles of hedonic decline regardless of the stimuli. Critically, not only do these temporal profiles consistently appear across studies, they appear to be stable for an individual both across time and across stimuli. That is, if an individual is classified into one of these three temporal profiles, they will likely experience hedonic decline the same way both for the same stimulus sometime in the future, as well as for a novel stimulus. This type of consistency has yet to be documented in any capacity in the literature on hedonic decline. Indeed, previous work has treated hedonic decline either as a monolith where all people experience hedonic decline the same way, or has allowed for individual variation primarily as a nuisance to statistically control for when modeling more general effects. The present work moves well beyond these prior findings by showing that people experience hedonic decline with predictable heterogeneity that is stable across time and stimuli.

This suggests that these three temporal profiles are fundamental to understanding how one experiences hedonically relevant stimuli over time. As far as we know, no fundamental theory in psychology argues that a given person should experience hedonic decline in generally the same way across stimuli. However, there is ample evidence for stable individual traits that could help account for this. Here, we explore just one such well-known trait, that of need for cognition (NFC; Studies 2 and 3). Of course, we expect that a multitude of other individual differences likely contribute to why a person experiences a particular type of hedonic decline. Potential candidates for future research here could include mindfulness (Bishop et al., 2004), optimal stimulation (Raju, 1980), variety seeking (van Trijp & Steenkamp, 1992), self-control (Tangney et al., 2004), as well as many others.

In fact, in a post-hoc analysis of our study demographics, we found that older participants were more likely to exhibit a Flat pattern than a Rapid Onset Decline pattern (see Supplemental Materials for details), yet no differences emerged for gender. This is largely consistent with the notion that older individuals tend to have a lower need for stimulation, a possible correlate of hedonic decline (Kish and Busse, 1968; Raju, 1980). There are likely many other demographic and psychological differences that help explain cluster membership, and we expect future work will uncover such differences to further our understanding of the antecedents of hedonic decline. In the present work, we limit ourselves to NFC to first document a novel antecedent to hedonic decline, and second to demonstrate that the three clusters we observe are not just random artifacts of our analytical approach. Rather, these groupings can be predicted, in part, by theory.

Finally, aside from documenting the existence and partial psychological underpinnings of three hedonic decline clusters, we show two critical downstream consequences: re-consumption behavior (Study 4a), and future consumption timing (Study 4b). For people with rapid hedonic decline (Rapid Onset Decline), the choice to re-consume a once enjoyable stimulus is decreased and delayed significantly after just a few exposures. The same is not true for those with little hedonic decline (Flat), as they are more willing to immediately re-consume a stimulus even after repeated exposure to it. In other words, in order to predict either re-consumption behavior or preference for future consumption timing for any given individual, it is not enough to know their initial enjoyment with a stimulus, nor even the number of previous consumption episodes of that stimulus. Rather, to predict re-consumption and preference for future consumption timing, one must also know which hedonic decline trajectory that person is likely to experience.

This research has important implications for our understanding of psychology in that it contributes to our growing understanding of how heterogeneity in experiences can help predict behavior at the individual level (Bolger et al., 2019). In the field of psychology, in particular, there has been limited work devoted to including heterogeneity of human experiences in theory and model development. This work demonstrates the clear importance of doing so, and is meant to be a steppingstone for those working to develop a larger theory of hedonic consumption. Critically, this work does much more than simply claim that people are different (which is largely self-evident), but rather it also identifies specific groups of people in terms of how they respond to hedonic stimuli. Our behavioral results (Studies 4a and 4b) also suggest that some people may naturally show less hedonic decline, making it easier for these Flat decliners to maintain their focus when listening to a speaker, performing a work task, or building expertise. Conversely, these Flat decliners likely also find it difficult to exhibit self-control at other times, such as when eating an indulgent food, playing a video game, or spending money on a shopping spree.

This research has implications for practitioners as well. A firm that can identify the hedonic decline type for a person can then use it to predict future preferences. For instance, if a music streaming service sees a person drop a particular song from a playlist after a few plays, this may indicate a Rapid Onset Decliner who needs lots of variety in the future. Alternatively, if a person has been identified as a Flat type, then they are likely to keep using a product more in the long term (suggesting a firm should invest more to acquire and keep them). Given these benefits, we expect managers will find creative ways to identify one’s hedonic decline type. Possibilities include ongoing satisfaction surveys like many retailers and fast food companies offer on the back of receipts, the ongoing ratings of episodes as one watches a streaming series, or the length of time one spends on a media site before losing interest. Likewise, profiles may be built for individuals using other general measures, such as need for cognition, age, etc.

There is also the possibility of our work informing how future researchers should approach the study of hedonic decline more generally. As noted above, most research studying hedonic responses over time assumes that all people follow a similar (typically linear) trajectory of hedonic decline. To the extent that our work shows this to be far from the case, there is a simple and specific prescription that all researchers should follow: ascertain whether the research question of interest varies as a function of cluster membership. That is, much work in this space uses experimental manipulations to demonstrate a shift in overall hedonic decline. A simple addition to that research approach would be to first conduct a cluster analysis as done in this paper, and then test whether any experimental effect varies as a function of cluster membership. In Supplemental Materials Study S5 we found that disrupting an experience slowed the rate of hedonic decline across all clusters, but this did not need to be the case. It was equally plausible that the disruption would only influence, say, the individuals in the Rapid Onset Decline cluster. For future work, we would encourage all researchers to understand whether their interventions are universally applicable, or rather apply to only some subset of individuals. At a minimum, researchers should explore modeling results with individual-level random effects for the intercept, linear, and quadratic terms, and examine the histograms of the individual estimates. Doing so will yield greater insight into the underlying psychology of whatever is being studied.
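The cluster-first prescription above can be sketched in a few lines. The sketch below is purely illustrative: the three generating curves, sample sizes, and noise levels are our own assumptions (loosely echoing the paper's Flat, Steady Decline, and Rapid Onset Decline labels), and the authors' actual clustering procedure may well differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_people, n_trials = 90, 10
t = np.arange(n_trials)

# Three hypothetical enjoyment trajectories (our own illustrative curves,
# not the paper's estimates).
flat   = 8 - 0.05 * t                     # little to no decline
steady = 8 - 0.09 * t**2                  # little decline at first, then accelerating
rapid  = 8 - 3 * (1 - np.exp(-0.6 * t))   # fast early decline that tapers off

profiles = np.vstack([flat, steady, rapid])
truth = rng.integers(0, 3, n_people)                       # each person's latent type
ratings = profiles[truth] + rng.normal(0, 0.3, (n_people, n_trials))

# Cluster individual trajectories; cluster labels could then be crossed
# with any experimental manipulation to test for heterogeneous effects.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)
print(np.bincount(km.labels_))  # number of trajectories per recovered cluster
```

In practice one would choose the number of clusters with standard criteria (e.g., silhouette scores) rather than fixing it at three, and then test whether the manipulation's effect differs by cluster membership.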

There are, of course, still some unanswered questions on which the present manuscript can only speculate. For instance, do these same hedonic decline clusters emerge for all stimuli? By design, all of our studies employed repetition in the form of repeating a discrete stimulus (e.g., a single song repeated, or a single type of food repeatedly consumed) to induce hedonic decline, but would we observe the same clusters for stimuli that are structurally different? For instance, videos provide a dynamic stimulus that unfolds in new ways over time. To explore this question, we ran a study in which participants watched a 13-minute nature documentary and rated their enjoyment every 30 s (without stopping consumption, via an in-experience measure). Consistent with our other studies, we again found the same three clusters of hedonic decline, even for this longer continuous experience (Supplemental Materials Study S6). Beyond continuous versus discrete, another structural difference could be the duration of the experience. In all of our studies, the experiences were relatively short-lived, lasting just minutes in total. In contrast, other work has looked at longer, and perhaps more complex, stimuli such as full-length movies or visits to museums (O'Brien, 2019). Indeed, such work has found little hedonic decline with repetition, which may reflect a shift in the mix of our three cluster types for longer and more complex experiences. We leave this and other related questions to future research.

There is also the question of how such clustering would unfold for negative or aversive stimuli. For instance, Nelson & Meyvis (2008) found that breaks had similar effects on hedonic decline for both negative and positive stimuli. This seems to suggest that people fundamentally experience similar diminishing hedonic responses to all types of stimuli, be they positive or negative. And yet, some recent work suggests that hedonic responses are not symmetric, at least in some domains. For instance, aversive experiences are much more sensitive to hedonic contrasts than positive experiences (Voichek & Novemsky, 2021). Might this mean that people experience negative stimuli fundamentally differently from positive stimuli, and thus group into different clusters than what we observed here? Or might there be less stability in clustering when clusters observed with a positive stimulus are projected onto expected experiences with a negative stimulus? This too is an important question that we hope future researchers will tackle.

Going beyond consumption of stimuli, the field of hedonic adaptation has often focused on major life events. The most typical finding is that even after major events like the loss of a child or a change in employment, people's overall hedonic experience (i.e., their wellbeing) returns to a set point after enough time has passed (Brickman et al., 1978; Lucas et al., 2003; Lucas et al., 2004). Our research, though robust to a variety of stimuli, is largely silent on whether similar clusters of hedonic decline will emerge for such larger-scale, longer-term experiences focused on overall well-being. That is, following the loss of a job, people tend to initially experience an extreme negative response, which then returns to their pre-job-loss levels with time. But does that hedonic adaptation occur uniformly for all individuals, or rather, as in the present research, do some people experience little recovery, some recover rapidly, and others recover only after a prolonged period of extreme negativity? If future work documents such clusters for major life events, that would potentially allow for a stronger understanding of which types of individuals require more intense interventions following major negative life experiences to help them return to their pre-negative-experience set points. After all, if some individuals experience Rapid Onset Decline (recovery, in this case), they may be less in need of clinical help than those who experience Flat or Steady Decline. Of course, for now, we can only speculate and hope that such questions will be answered with future research.

In sum, hedonic decline, though ubiquitous, is not quite as singularly determined as once believed. While some work has explored why some individuals could systematically differ in their hedonic decline (Chugani et al., 2015; Nelson and Redden, 2017; Redden and Haws, 2013), that work is limited in scope, and the question remains generally understudied. Further, none of this work considered how there might be systematic patterns across all people across all domains, which is exactly what more general theories of enjoyment would require. Moreover, we find that these response profiles are similar across a variety of stimuli, spanning both repeated and continuous consumption. We hope that our present work spurs future research both in the area of hedonic decline, as well as more broadly in the area of predictable heterogeneous psychological responses to all forms of stimuli for all types of people.


Ever wondered how grandiose narcissism is related to vulnerable narcissism in the general population? Hint: At very high levels of grandiosity you also see lots of vulnerability

The nonlinear association between grandiose and vulnerable narcissism: An individual data meta-analysis. Emanuel Jauk, Lisa Ulbrich, Paul Jorschick, Michael Höfler, Scott Barry Kaufman, Philipp Kanske. Journal of Personality, December 3, 2021. https://doi.org/10.1111/jopy.12692

Abstract

Objective: Narcissism can manifest in grandiose and vulnerable patterns of experience and behavior. While largely unrelated in the general population, individuals with clinically relevant narcissism are thought to display both. Our previous studies showed that trait measures of grandiosity and vulnerability were unrelated at low-to-moderate levels of grandiose narcissism, but related at high levels.

Method: We replicate and extend these findings in a preregistered individual data meta-analysis (“mega-analysis”) using data from the Narcissistic Personality Inventory (NPI)/Hypersensitive Narcissism Scale (HSNS; N = 10,519, k = 28) and the Five-Factor Narcissism Inventory (FFNI; N = 7,738, k = 17).

Results: There was strong evidence for the hypothesis in the FFNI (β[grandiose < 1 SD] = .08, β[grandiose > 1 SD] = .36, β[grandiose > 2 SD] = .53), and weaker evidence in the NPI/HSNS (β[grandiose < 1 SD] = .00, β[grandiose > 1 SD] = .12, β[grandiose > 2 SD] = .32). Nonlinearity increased with age but was invariant across other moderators. Higher vulnerability was predicted by elevated antagonistic and low agentic narcissism at subfactor level.

Conclusion: Narcissistic vulnerability increases at high levels of grandiosity. Interpreted along Whole Trait Theory, the effects are thought to reflect state changes echoing in trait measures and can help to link personality and clinical models.

4. DISCUSSION

This study tested the nonlinearity hypothesis on the relation of narcissistic grandiosity and vulnerability using a preregistered individual data meta-analysis (mega-analysis). We observed clear evidence (moderate to large effects) for the hypothesis in the FFNI and weaker evidence (small to moderate effects) in the NPI/HSNS. Specifically, findings for the FFNI showed that there is a sizeable difference in slope (Δβ = .28) between grandiosity and vulnerability at lower versus higher levels (+1 SD) of grandiosity, and this difference becomes stronger as grandiosity further increases (Δβ = .43 at +2 SD). Complementary empirical breakpoint detection yielded an estimate in between those two criteria (+1.35 SD). The effect was not dependent upon moderators such as country of assessment, questionnaire version, or participants' sex but was moderated by participants' age, which we elaborate on below. For the NPI/HSNS, we observed a small effect (Δβ = .12) for the hypothesized relation when comparing segments below and above +1 SD, and a moderate effect when applying a stricter criterion (Δβ = .31 at +2 SD). The empirical breakpoint estimate at +1.98 SD aligned with this latter criterion. There was no indication of heterogeneity across samples or a moderation effect, though the interaction seemed to depend on age (as for the FFNI). Taken together, these results show that there is evidence for an increase of narcissistic vulnerability at high levels of grandiosity as assessed by trait self-report scales. The differences are subtle, and their detection requires a nuanced and reliable assessment.
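The segment-wise slope comparison described above can be illustrated with a minimal sketch on simulated data. The piecewise generating model, effect size, sample size, and fixed +1 SD split are all our own assumptions; the paper itself fits segmented regression models and detects breakpoints empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
g = rng.normal(size=n)  # standardized grandiosity scores

# Hypothetical generating process: vulnerability is unrelated to grandiosity
# below +1 SD, but positively related above it (slope 0.4 is illustrative).
v = np.where(g > 1.0, 0.4 * (g - 1.0), 0.0) + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x within a segment."""
    x = x - x.mean()
    return float((x * (y - y.mean())).sum() / (x * x).sum())

lo, hi = g < 1.0, g > 1.0
print(f"slope below +1 SD: {slope(g[lo], v[lo]):.2f}")  # near zero
print(f"slope above +1 SD: {slope(g[hi], v[hi]):.2f}")  # clearly positive
```

With this simulated effect, the slope above the split is clearly larger than the slope below it, mirroring in spirit the Δβ comparisons reported for the FFNI and NPI/HSNS.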

4.1 Personality and clinical perspectives on narcissism—paradox lost?

Given the near-orthogonality of grandiose and vulnerable narcissism measures in the general population (Jauk & Kaufman, 2018; Jauk et al., 2017; Krizan & Herlache, 2018; Miller et al., 2011; Wink, 1991), personality models tend to view these two expressions of narcissism as mostly distinct traits. Conversely, clinical perspectives are more inclined to see a common ground for both (cf. Wright & Edershile, 2018), and emphasize that individuals with pathological narcissism can fluctuate between grandiose and vulnerable states (Pincus & Lukowitsky, 2010; Ronningstam, 2009). Higher state variability has also been confirmed in systematic research using different methods (Edershile & Wright, 2020; Gore & Widiger, 2016; Kanske et al., 2017; Oltmanns & Widiger, 2018). Our findings show that personality and clinical perspectives hold true for different subpopulations. While grandiose and vulnerable narcissism reflect largely orthogonal traits at low-to-moderate levels of grandiosity, they become more intertwined at higher levels (+1 SD, or top 15.9%), and substantially related at very high levels (+2 SD, or top 2.6%). This latter criterion lies within the prevalence estimates of NPD (American Psychiatric Association, 2013; Ronningstam, 2009), a personality disorder characterized by extreme grandiosity (Miller et al., 2014).
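As a quick sanity check on the tail percentages mentioned above (a sketch, not from the paper): under a standard normal distribution, about 15.9% of scores lie above +1 SD and about 2.3% above +2 SD; the paper's 2.6% figure presumably reflects the empirical score distribution.

```python
from scipy.stats import norm

# Proportion of a standard normal distribution above the +1 SD and +2 SD cutoffs.
print(f"above +1 SD: top {norm.sf(1) * 100:.1f}%")  # ~15.9%
print(f"above +2 SD: top {norm.sf(2) * 100:.1f}%")  # ~2.3%
```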

What mechanisms might drive the increasing correlation of trait measures of grandiosity and vulnerability at high levels of grandiose narcissism? Based on accumulating evidence for variation in grandiose and vulnerable states, particularly at high levels of grandiose narcissism (Edershile & Wright, 2020; Gore & Widiger, 2016; Oltmanns & Widiger, 2018), we assume that increases in trait questionnaires of vulnerability likely reflect increases of such vulnerable states or episodes in those with high levels of grandiosity. That is, to some extent, the experience of vulnerable states likely echoes in trait measures. We base this interpretation on WTT, which assumes that traits can be understood as density distributions of states (Fleeson & Jayawickreme, 2015; Jayawickreme et al., 2019), and trait scales, therefore, indicate the central tendency of intraindividual variation in experience and behavior. The highly grandiose individual might thus experience more frequent and/or more pronounced vulnerable states, which, to some extent, manifests in global self-ratings.

The nonlinear effect is specific to grandiosity and cannot be inverted (see FFNI segmented regression models). Highly vulnerable persons do not show increased grandiosity, which is in line with our previous study (Jauk & Kaufman, 2018) and research demonstrating with other methods that highly grandiose individuals show episodes of vulnerability, but not the other way around (Edershile & Wright, 2020; Gore & Widiger, 2016). However, unexpectedly, the pattern of results for the NPI/HSNS deviated, in this regard, from that of the FFNI, as a positive change in slope was also observed along the HSNS distribution. While we have no clear interpretation for this result at this point, tentatively speaking, it might be that the HSNS, which has formerly also been considered a measure of “covert” narcissism (Wink, 1991), draws to some extent on hidden grandiose aspects (“I am secretly ‘put out’ or annoyed when other people come to me with their troubles, asking me for my time and sympathy”; Hendin & Cheek, 1997, p. 592). Higher scale scores might thus be accompanied by higher breakthroughs of grandiosity, so to speak. However, this speculation must remain subject to future studies, and as a whole, the results observed for the FFNI are in greater accordance with studies using different methods (Edershile & Wright, 2020; Gore & Widiger, 2016).

4.2 The nonlinear relationship through the lens of the three-factor model

Factor- and facet-level analyses for the NPI and FFNI showed that with increasing grandiose narcissism, grandiosity becomes less saturated with agentic aspects, and vulnerability becomes more saturated with antagonistic aspects. This is largely in accordance with our previous results (Jauk & Kaufman, 2018) and shows that, on the one hand, adaptive aspects of grandiosity, which could potentially counteract negative consequences (e.g., Kaufman et al., 2020), become less relevant as grandiosity increases. On the other, it shows that vulnerability is tied more strongly to antagonistic aspects, making the common core of grandiose and vulnerable aspects stronger at high levels of grandiosity (though a higher saturation of grandiosity with antagonism, as in our previous study [Jauk & Kaufman, 2018], was not evident).

To further study the interplay of different narcissism aspects directly at the three-factor level, we conducted exploratory response surface analyses, which allow us to investigate nonlinear and interactive effects of agentic and antagonistic aspects. For both the NPI and the FFNI, these showed that it is neither agentic nor antagonistic aspects alone that increase vulnerable/neurotic aspects, but a combination of the two. Specifically, agentic aspects—at least up to a certain point—seem to buffer antagonistic aspects when it comes to vulnerable/neurotic narcissism. This pattern was more clearly evident in the NPI/HSNS, where, at low levels of agentic narcissism, even mild increases in antagonistic narcissism are accompanied by increases in neurotic narcissism, whereas at high levels of agentic narcissism, it takes longer for antagonistic narcissism to increase neurotic narcissism. Agentic narcissism, however, continues to have this “protective” effect only up to an above-average level, where the relationship levels off. The FFNI results pointed in a similar direction, in that a combination of low agentic and elevated antagonistic narcissism is accompanied by higher neurotic narcissism. Here, however, we observed stronger quadratic effects, which indicate that high scores on either dimension decrease neurotic narcissism again.

Considering the evidence from factor correlation and response surface analyses together, we conclude that antagonistic narcissism does play a key role in explaining vulnerable/neurotic narcissism, but the absence of agentic aspects might be at least as important. Particularly those individuals who have an antagonistic interpersonal style, yet little “positive” and potentially stabilizing (even if self-aggrandizing) experiences linked to agentic narcissism, might display vulnerable/neurotic aspects of narcissism such as shame (which displayed the strongest increase in correlation with overall grandiose narcissism). Similar findings were obtained, for instance, for the absence of positive affect in the development of depression (Wood & Joseph, 2010). More generally, recent research suggested that personality disorders can be understood as emergent interpersonal syndromes (i.e., unlikely and socially problematic trait configurations; Lilienfeld et al., 2019), and the results observed here might be seen as supporting such an account to narcissism.

4.3 Normal and pathological narcissism

The results could further be seen as supporting to some degree the distinction between adaptive and maladaptive, or normal and pathological, expressions of narcissism. Research has long strived to delineate self-report scales of narcissism with respect to the extent to which they assess adaptive or maladaptive aspects (e.g., Ackerman et al., 2011; Pincus et al., 2009). These efforts commonly center around the identification of nomological networks as evident in validity measures, assuming linear effects of the respective scales. While these linear effects do certainly capture the most relevant general trends, it might well be the case that increasing narcissism levels are accompanied by qualitative shifts in the nomological networks. For instance, a person who behaves arrogantly in some situations, but not in others, might be quite successful in the social realm, not display signs of psychological maladjustment, and might be considered an example of adaptive/normal narcissism. In contrast, a person who behaves arrogantly in almost every situation—including those where others will certainly not tolerate it—will almost inevitably face social problems, which might unveil narcissistic vulnerability. Crucially, both of these persons can be placed on the same narcissism dimension (here: antagonistic narcissism), but in different segments of it. It is thus not necessary to assume qualitative shifts in the narcissism dimension (antagonism) itself; rather, different (potentially socially mediated) effects of it might manifest in differential relations with other variables, particularly narcissistic vulnerability. These might further be amplified by simultaneous changes in other aspects, most notably the absence of agentic aspects.

It is interesting to note that our findings align well with those from a large-scale study of nonlinear effects of narcissism in the workplace: Grijalva and colleagues (2015) investigated leadership qualities related to narcissism and found narcissism to be positively associated with (supervisor-rated) leadership effectiveness at moderate levels, but negatively related at high levels. As the authors stated, “increasing narcissism in the low range of the trait will lead to more adaptive manifestations of narcissism” whereas “increasing narcissism in the high range of the trait will produce maladaptive manifestations” (p. 26). The effects were not attributable to agentic aspects, but presumably more related to antagonistic aspects (though these were not directly studied), which is in line with the effects observed here.

We thus argue that the adaptiveness or maladaptiveness of inventories such as the NPI or FFNI might not only depend upon their coverage of different construct aspects, but also on the investigated range within the respective dimensions, and potentially interactions with other dimensions. Which form of narcissism might be considered normal or pathological might, from an empirical point of view, well depend upon the level of narcissism, and changes in the nomological network associated with it. We note that the correlation between grandiosity and vulnerability observed here for high levels of FFNI grandiose narcissism is well in line with the intrinsic correlation of grandiose and vulnerable subscales in the PNI—a scale designed to assess maladaptive forms of narcissism, in which the co-occurrence of grandiose and vulnerable aspects is considered vital (Pincus et al., 2009; Wright et al., 2010).

While the idea of qualitative shifts within the same dimension might conflict to some extent with our understanding of desirable psychometric characteristics and necessitate more complex analysis techniques, we believe considering this complexity may better depict the reality of individual differences. Though not very popular in personality psychology, dose–response relationships are common phenomena in science (for instance, pharmacology; Tallarida & Jacob, 1979) and also everyday life (consider just the many instances where we say that we “overdid” something). They can be understood as systemic changes within self-organizing systems (e.g., Hayes et al., 2007), which seems a fruitful perspective for the study of personality (Richardson et al., 2013), and specifically personality pathology (Hopwood et al., 2015). Though we used discrete breakpoints here, we do not understand these as isomorphic representations of the empirical relations, but as probabilistic guesses of distribution points around which qualitative shifts are most likely to occur. The results are thus not meant to reflect cutoffs for maladaptive/pathological narcissism, yet they may provide best guesses for distribution ranges where systemic changes are likely to take place.

4.4 Implications for research and practice

We wish to address three aspects that might be of relevance to narcissism research: first, the difference in slope for the FFNI depended on age to a sizeable degree, as the interaction was stronger for older individuals (though vulnerability was, on average, lower in older individuals). This might be the case because narcissistic vulnerability—even if seeded early in life (Huxley et al., 2021; Kernberg, 1975)—takes time to unfold, or to be unveiled. Someone in their early twenties—at the peak of intellectual and physical capacities, yet in many aspects still protected from the pitfalls of adult life—might, on average, not have experienced a significant amount or intensity of adverse events such as job loss or divorce, or ego-threatening developmental changes such as declines in physical performance or attractiveness. Research has confirmed that such factors do shape our personality (e.g., Specht, 2017), and they might serve as triggers of narcissistic vulnerability particularly after midlife (e.g., Goldstein, 1995). This seems even more important given that grandiose narcissism itself has been found to show longitudinal selection effects, such that those high in grandiosity are more likely to experience adversity (Orth & Luciano, 2015). However, cohort effects might also be at play, and future longitudinal studies will be needed to unveil the complex associations. In any case, this result underlines the necessity of studying samples that vary substantially in demographic characteristics such as age, as vulnerable aspects accompanying high grandiosity might otherwise be underestimated.

Second, the results show that considering the absolute level of grandiosity might be important when designing and interpreting studies, particularly those using select populations or extreme groups. Qualitative shifts between lower- and higher-grandiosity samples could at least partially explain experimentally unveiled signs of vulnerability in highly grandiose individuals, as evident for instance in neuroscience research (Jauk & Kanske, 2021). This can be effectively addressed by, on the one hand, considering the level of narcissistic grandiosity, and, on the other, complementing designs with measures of narcissistic vulnerability (ibid.). For research that aims to test threshold effects, we recommend using the empirically obtained breakpoint estimates as a priori parameters in large and diverse samples.
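Using breakpoint estimates as a priori parameters presupposes a way of estimating them in the first place. As a generic illustration (not the authors' analysis pipeline), a single breakpoint in a two-segment "broken-stick" linear model can be estimated by grid search over candidate locations; all data and threshold values below are simulated and invented.

```python
import numpy as np

def fit_breakpoint(x, y, candidates):
    """Grid-search a single breakpoint for a two-segment linear model.

    For each candidate c, fit y ~ 1 + x + max(x - c, 0) by least squares
    and keep the c with the smallest residual sum of squares (RSS).
    """
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[2]:
            best = (c, beta, rss)
    return best  # (breakpoint, coefficients, rss)

# Simulated data: the slope increases by 0.6 at x = 1.5 (invented values,
# not the grandiosity/vulnerability estimates from the paper).
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 4.0, 500)
y = 0.1 * x + 0.6 * np.maximum(x - 1.5, 0.0) + rng.normal(0.0, 0.1, 500)
c, beta, rss = fit_breakpoint(x, y, np.linspace(-1.0, 3.0, 81))
```

With this setup the recovered breakpoint lands close to the simulated value of 1.5; dedicated segmented-regression software would additionally provide standard errors for the breakpoint, which is what makes the "probabilistic guesses" framing above natural.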

Third, future studies could assess mediating variables which might explain increases in vulnerability at higher levels of grandiose narcissism. From a clinical perspective, personality functioning, in terms of general self- and other-related emotional competencies, might be a prime candidate, as personality disorders in general (American Psychiatric Association, 2013), and narcissistic pathology specifically (Kernberg, 1975), are conceptualized as constellations where extreme trait expressions meet reduced functioning. Of note, self-regulatory functions (including stabilization of self-esteem) are regarded as central elements of personality functioning (American Psychiatric Association, 2013; OPD Task Force, 2008), and these might be directly relevant for explaining transitions between grandiose and vulnerable states. While personality functioning is not frequently assessed in nonclinical personality research, emotional intelligence might be used as a proxy for it (Jauk & Ehrenthal, 2021). Also, the general factor of psychopathology—closely linked to personality pathology (Oltmanns et al., 2018)—might be studied as a moderator.

For psychological practice, the findings reported here imply that clinicians working with patients who present as highly grandiose should be particularly attentive to signs of narcissistic vulnerability. While the DSM acknowledges that vulnerability can accompany grandiosity (American Psychiatric Association, 2013), the present meta-analysis of large samples from the general population provides quantitative evidence that vulnerable aspects are indeed more likely to accompany high grandiosity. Correctly identifying narcissistic vulnerability as such is important, as it is associated with a wide range of negative consequences, including suicidal ideation and behavior (e.g., Jaksic et al., 2017). However, doing so can be challenging, since highly grandiose individuals tend to hide or deny vulnerable aspects (cf. Pincus et al., 2014) and, beyond that, evoke negative reactions in their therapists (Tanzilli et al., 2015). Seeing vulnerability in those who present as highly grandiose might be even more difficult for those without professional training, as laypeople attribute grandiose behavior to similarly grandiose motives (Koepernik et al., 2021). For an integrated understanding of narcissism, it thus seems important to raise awareness of the interplay of grandiose and vulnerable aspects in highly grandiose individuals, which we hope this study can contribute to.

Wednesday, February 9, 2022

Organic food labels: Participants showed an organic halo effect leading them to consider the organic cookie as healthy as a conventional one despite containing 14% more of the daily reference intake for sugar and 30% more for fat

Organic food labels bias food healthiness perceptions: Estimating healthiness equivalence using a Discrete Choice Experiment. Juliette Richetin et al. Appetite, February 9 2022, 105970. https://doi.org/10.1016/j.appet.2022.105970

Abstract: Individuals perceive organic food as being healthier and containing fewer calories than conventional foods. We provide an alternative way to investigate this organic halo effect using a mirrored method to Choice Experiments applied to healthiness judgments. In an experimental study (N = 415), we examined whether healthiness judgments toward a 200g cookie box are impacted by the organic label, nutrition information (fat and sugar levels), and price and determined the relative importance of these attributes. In particular, we assessed whether food with an organic label could contain more fat or sugar and yet be judged to be of equivalent healthiness to food without this label. We hoped to estimate the magnitude of any such effect. Moreover, we explored whether these effects were obtained when including a widely used system for labeling food healthiness, the Traffic Light System. Although participants' healthiness choices were mainly driven by the reported fat and sugar content, the organic label also influenced healthiness judgments. Participants showed an organic halo effect leading them to consider the organic cookie as healthy as a conventional one despite containing more fat and sugar. Specifically, they considered the organic cookie as equivalent in healthiness to a conventional one, although containing 14% more of the daily reference intake for sugar and 30% more for fat. These effects did not change when including the Traffic Light System. This effect of the organic label could have implications for fat and sugar intake and consequent impacts on health outcomes.

Keywords: Organic food label, Perceived healthiness, Fat intake, Sugar intake
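The paper's headline figure (an organic label offsetting roughly 14% more of the daily sugar reference intake) is, in a discrete choice framework, a ratio of attribute coefficients, analogous to a willingness-to-pay calculation. The sketch below fits a conditional logit to simulated pairwise healthiness choices; the coefficients are invented so that the true equivalence point is 14 percentage points, and nothing here reproduces the authors' design or estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical preference weights: [per % of sugar reference intake, organic label].
true_b = np.array([-0.10, 1.4])   # invented; implies 1.4 / 0.10 = 14% equivalence
n = 4000
dx = np.column_stack([
    rng.uniform(-20.0, 20.0, n),          # sugar difference between the two options
    rng.integers(-1, 2, n).astype(float)  # organic-label difference (-1, 0, or 1)
])
p = 1.0 / (1.0 + np.exp(-(dx @ true_b)))
choice = (rng.random(n) < p).astype(float)   # 1 = chose option A as healthier

# For binary choices, logistic regression on attribute differences is the
# conditional logit; fit it by Newton-Raphson (IRLS).
b = np.zeros(2)
for _ in range(25):
    p_hat = 1.0 / (1.0 + np.exp(-(dx @ b)))
    w = p_hat * (1.0 - p_hat)
    grad = dx.T @ (choice - p_hat)
    hess = (dx * w[:, None]).T @ dx
    b += np.linalg.solve(hess, grad)

# Equivalence point: extra sugar (in percentage points of the daily
# reference intake) that the organic label compensates for.
equiv_sugar = b[1] / -b[0]
```

The estimate `equiv_sugar` should land near the simulated value of 14: the sugar margin at which a labelled and an unlabelled cookie are judged equally healthy.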



In contrast to substance use or gambling, excessive behaviors (compulsive shopping, sex) are transient for most, and their comparatively lower levels of chronicity question their designation as ‘addictions’

Addiction chronicity: are all addictions the same? Nolan B. Gooding, Jennifer N. Williams, Robert J. Williams. Addiction Research & Theory , Feb 8 2022. https://doi.org/10.1080/16066359.2022.2035370

Abstract

Background: All addictions have a recurring nature, but their comparative chronicity has never been directly investigated. The purpose of this study is to undertake this investigation.

Method: A secondary analysis was conducted on two large-scale 5-year Canadian adult cohort studies. A subset of 1,088 individuals were assessed as having either substance use disorder, gambling disorder, excessive behaviors (e.g. shopping, sex/pornography), or two or more of these designations (‘multiple addictions’) during the course of these studies. Within each dataset, comparisons were made between these four groups concerning the number of waves in which they had their condition; the likelihood of having their condition in two or more consecutive waves; and the likelihood of relapse following remission.

Results: Multiple addictions had significantly greater chronicity on all measures compared to single addictions. People with an excessive behavior designation had significantly lower chronicity compared to people with gambling disorder and a tendency toward lower chronicity compared to substance use disorder. Gambling disorder had equivalent chronicity to substance use disorder in one dataset but greater chronicity in the other. However, this latter difference is likely an artifact of the different time frames utilized.

Conclusions: Having multiple addictions represents a more pervasive condition that is persistent for most individuals. Substance use disorder and gambling disorder have intermediate and roughly equivalent levels of chronicity, but considerable individual variability: transient for some, but more chronic for others. In contrast, excessive behaviors such as compulsive shopping are transient for most, and their comparatively lower levels of chronicity question their designation as ‘addictions’.

Keywords: Addiction, chronicity, longitudinal, cohort, gambling, substance


By contrast to other tastes, sour taste does not appear to have been lost in any major vertebrate taxa; but for most species, sour taste is aversive: Animals, including humans, that enjoy the sour taste triggered by acidic foods are exceptional

The evolution of sour taste. Hannah E. R. Frank, Katie Amato, Michelle Trautwein, Paula Maia, Emily R. Liman, Lauren M. Nichols, Kurt Schwenk, Paul A. S. Breslin and Robert R. Dunn. Proceedings of the Royal Society B: Biological Sciences, February 9 2022. https://doi.org/10.1098/rspb.2021.1918

Abstract: The evolutionary history of sour taste has been little studied. Through a combination of literature review and trait mapping on the vertebrate phylogenetic tree, we consider the origin of sour taste, potential cases of the loss of sour taste, and those factors that might have favoured changes in the valence of sour taste—from aversive to appealing. We reconstruct sour taste as having evolved in ancient fish. By contrast to other tastes, sour taste does not appear to have been lost in any major vertebrate taxa. For most species, sour taste is aversive. Animals, including humans, that enjoy the sour taste triggered by acidic foods are exceptional. We conclude by considering why sour taste evolved, why it might have persisted as vertebrates made the transition to land and what factors might have favoured the preference for sour-tasting, acidic foods, particularly in hominins, such as humans.

(e) Consequences of sour taste preferences for hominins

Regardless of whether rotting fruits played a role in the shift of the acid preference curve in hominins, we hypothesize that the existence of acid taste preference may have strongly influenced the later relationship between hominins and rotten fruits and other rotten foods. Based on studies in the laboratory, three groups of microorganisms compete during the rot of fruits [78]: single-celled budding yeasts (most of which are from the Saccharomycetales clade of fungi), filamentous fungi (such as Penicillium), and lactic acid bacteria. While all of these organisms produce short-chain fatty acids when they ferment fruit, yeasts also tend to produce alcohol, and lactic acid bacteria produce lactic acid. Rotten fruits that become dominated by filamentous fungi can be dangerous [79]. However, rotten fruits that become dominated by yeasts and lactic acid bacteria are often ‘improved’ from the perspective of consumers. Rot due to lactic acid bacteria and yeasts often increases the caloric, free amino acid, and vitamin content of food and hence improves digestibility by breaking down fibre and plant toxins [80–84]. Therefore, in challenging nutritional environments, fruits rotted by yeasts or lactic acid bacteria likely represented a valuable food source that could increase chances of survival [4]. If the acid preference of the MRCA (whenever acquired) allowed it to more readily consume heavily fermented fruit, or at least the subset of that fruit rotted by lactic acid bacteria, they might have been able to take advantage of a novel source of safe calories.

There exists molecular evidence that the last common ancestor of gorillas, chimpanzees and humans consumed fermented fruits. For example, a single amino acid replacement in the ADH4 gene in the lineage shared by humans and African apes resulted in a 40-fold improvement in ethanol oxidation [85]. This change would have allowed the MRCA to consume yeast-fermented fruits on the ground with higher concentrations of both ethanol and acids [85] without concomitant neurological toxicity (or drunkenness; [53]). This ability may have allowed the MRCA to survive and reproduce more effectively in nutritionally challenging, seasonal environments, particularly as climate change resulted in more fragmented and open habitats. At about the same time, the MRCA acquired a third copy of the HCA3 gene encoding G protein-coupled receptors for hydroxycarboxylic acids, such as lactic acid, produced by the fermentation of dietary carbohydrates by lactic acid bacteria [86]. While this gene is found in all great apes, it is most strongly activated in chimpanzees, gorillas and humans, with humans exhibiting the strongest effects, suggesting that, in some form, acid-producing bacteria (and the detection of their products) played a larger role in apes than in other primates, and in humans than in non-human apes. As has been considered elsewhere, a fondness for acidic foods, particularly when combined with preferences for umami tastes, may have predisposed ancestral humans to eventual intentional control of rotting to yield more favourable outcomes, which is to say, fermentation [4,87].

Positive effect of swearing on strength: Humorous disinhibition as a potential mediator

Effect of swearing on strength: Disinhibition as a potential mediator. Richard Stephens et al. Quarterly Journal of Experimental Psychology, February 8, 2022. https://doi.org/10.1177/17470218221082657

Abstract

Introduction: Swearing fulfils positive functions including benefitting pain relief and physical strength. Here we present two experiments assessing a possible psychological mechanism, increased state disinhibition, for the effect of swearing on physical strength.

Method: Two repeated measures experiments were carried out with sample sizes N=56, and N=118. Both included measures of physical performance assessing, respectively, grip and arm strength, and both included the Balloon Analogue Risk Task (BART) to measure risky behaviour. Experiment 2, which was pre-registered, additionally assessed flow, emotion including humour, distraction including novelty, self-confidence and anxiety.

Results: Experiments 1 and 2 found that repeating a swear word benefitted physical strength and increased risky behaviour, but risky behaviour did not mediate the strength effect. Experiment 2 found that repeating a swear word increased flow, positive emotion, humour and distraction and self-confidence. Humour mediated the effect of swearing on physical strength.

Discussion: Consistent effects of swearing on physical strength indicate that this is a reliable effect. Swearing influenced several constructs related to state disinhibition including increased self-confidence. Humour appeared to mediate the effect of swearing on physical strength, consistent with a hot cognitions explanation of swearing-induced state disinhibition. However, as this mediation effect was part of an exploratory analysis, further pre-registered experimental research including validated measures of humour is required.

Keywords: swearing, disinhibition, risk-taking, humour, confidence, mediation


Tuesday, February 8, 2022

Some suggest that humans are ‘domesticated’ apes; the wolf–dog comparison has been used to support the idea of the human self-domestication hypothesis, but more recent results are not in line with this claim

Comparing wolves and dogs: current status and implications for human ‘self-domestication’. Friederike Range, Sarah Marshall-Pescini. Trends in Cognitive Sciences, Feb 7 2022. https://doi.org/10.1016/j.tics.2022.01.003

Highlights

Domestication is thought to alter the temperament of a species, making it less fearful and aggressive and more social, thereby promoting their sociocognitive abilities. Some authors suggest that humans are ‘domesticated’ apes.

The wolf–dog comparison has been used to support the idea of the human self-domestication hypothesis, but more recent results are not in line with this claim.

Genetic and behavioral studies of free-ranging, pet, and captive pack-living dogs, as well as different subspecies of wolves, can further our understanding of the dog domestication process.

Current dog domestication hypotheses focus on explaining specific dog–human interactions rather than trying to understand dogs as a social species.

Dog domestication is best understood as an adaptation to a new, human-dominated niche, which included selective pressures by humans.

Abstract: Based on claims that dogs are less aggressive and show more sophisticated socio-cognitive skills compared with wolves, dog domestication has been invoked to support the idea that humans underwent a similar ‘self-domestication’ process. Here, we review studies on wolf–dog differences and conclude that results do not support such claims: dogs do not show increased socio-cognitive skills and they are not less aggressive than wolves. Rather, compared with wolves, dogs seek to avoid conflicts, specifically with higher ranking conspecifics and humans, and might have an increased inclination to follow rules, making them amenable social partners. These conclusions challenge the suitability of dog domestication as a model for human social evolution and suggest that dogs need to be acknowledged as animals adapted to a specific socio-ecological niche as well as being shaped by human selection for specific traits.

Concluding remarks

Although the idea that one selection process might explain the emergence of several traits in humans and domesticated species is exciting, the reality is more complex because species are exposed to different selective pressures in their natural environments. Concerning dogs, their behavior and cognition likely reflect changes in their socio-ecology, going hand in hand with human-enacted selective pressures favoring animals that can be easily inhibited and controlled, allowing humans to exploit the skills of dogs for their own use. However, these data do not support the claims of the HSD hypothesis of a general tamer temperament and higher socio-cognitive skills in dogs compared with wolves. Nevertheless, dog domestication might be a good model to further our understanding of the factors affecting human out-group dynamics and increased propensity for rule-following/adherence to social norms. Finally, we would like to caution researchers to consider the genetic make-up and social experience of their study populations when comparing wolves and dogs. Instead of formulating new domestication hypotheses to explain tiny differences in behavior or cognition, we would like to encourage researchers to also see dogs as a species adapted to their unique ecological niche and not only as a human-made product, and to test hypotheses using different paradigms.

Outstanding questions

Is the difference in fear reactions to humans and human artifacts between wolves and dogs (at least partly) a result of ‘selection for shyness’ in wolves? To investigate this question, different wolf subspecies need to be studied.

Does selection against aggression have a snowball effect on other aspects, such as the social structure of a species? Conversely, does adaptation to a new, more stable foraging niche with small, distributed food items bring about similar changes to those observed during dog domestication? To answer the first question, cognitive and behavioral studies of Belyaev’s ‘tame foxes’, including how these animals interact with conspecifics, would be valuable. To answer the second, a better understanding of how social cognition and behavior change in canids adapting to an urban environment would be needed.

Which (combination of) hypotheses under which conditions might lead to the observed traits in dogs? Computer models might shed light on this question.

How varied is the social system of both wolves and dogs in the ‘wild’? Can such variability be exclusively explained by the respective ecological conditions? For example, would feral dogs living off hunting (rather than scavenging) show a social structure comparable to that of wolves (e.g., dingoes cooperatively hunt large prey, such as kangaroos)? Or are there consistent differences between the social organization of wolves and dogs that cannot be explained by the ecological conditions they live in?

Do wolves and dogs perceive humans differently? Do they see humans as social partners at eye level, or rather as someone to whom they look up? How much does that depend on specific experiences?

Is the lower inclination of dogs to challenge hierarchies compared with wolves a good model for human evolution? To answer this question, we need to explore whether, compared with chimps (or bonobos), humans are more prone to accept the decisions of their leaders and, thus, avoid potentially costly conflicts with them.

Men & women tend to exhibit meaningful differences in personality & psychopathology, as well as in omnibus morphometry & regional morphometric brain differences, but those differences appear unrelated to the psychological differences

Structural brain differences do not mediate the relations between sex and personality or psychopathology. Courtland S. Hyatt et al. In press at the Journal of Personality, Feb 2022. https://osf.io/dsk53

Abstract

Introduction: Males and females tend to exhibit small but reliable differences in personality traits and indices of psychopathology that are relatively stable over time and across cultures. Previous work suggests that sex differences in brain structure account for differences in domains of cognition.

Methods: We used data from the Human Connectome Project (N = 1098) to test whether sex differences in brain morphometry account for observed differences in the personality traits neuroticism and agreeableness, as well as symptoms of internalizing and externalizing psychopathology. We operationalized brain morphometry in three ways: omnibus measures (e.g., total gray matter volume), Glasser regions defined through a multi-modal parcellation approach, and Desikan regions defined by structural features of the brain.

Results: Most expected sex differences in personality, psychopathology, and brain morphometry were observed, but the statistical mediation analyses were null: sex differences in brain morphometry did not account for sex differences in personality or psychopathology.

Conclusions: Men and women tend to exhibit meaningful differences in personality and psychopathology, as well as in omnibus morphometry and regional morphometric differences as defined by the Glasser and Desikan atlases, but these morphometric differences appear unrelated to the psychological differences.


Keywords: psychopathology, morphometry, sex, five factor model, human connectome project
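The paper's null result comes from statistical mediation analysis, which can be made concrete with a generic product-of-coefficients sketch plus a percentile bootstrap. The data below are simulated so that sex differences exist in a "brain" and a "trait" variable but the trait difference does not run through the brain variable; the variable names are invented for illustration, and this is neither the HCP data nor the authors' code.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect: a (slope of m ~ x) times
    b (coefficient of m in y ~ x + m)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a * coef[2]

def bootstrap_ci(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        est.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(est, [2.5, 97.5])

# Simulated pattern mimicking the paper's result: sex predicts both
# morphometry and the trait, but the trait difference is not carried by
# morphometry (all numbers invented).
rng = np.random.default_rng(1)
sex = rng.integers(0, 2, 400).astype(float)
brain = 0.5 * sex + rng.normal(0.0, 1.0, 400)
trait = 0.4 * sex + rng.normal(0.0, 1.0, 400)
lo, hi = bootstrap_ci(sex, brain, trait)   # interval for the a*b path
```

In a setup like this the bootstrap interval for the indirect (a*b) path should sit near zero even though both direct sex differences are real, which is the shape of the null mediation finding reported above.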


Monday, February 7, 2022

Examined whether math ability was significant mediator of relation between gender and math anxiety; math ability accounted for a significant amount of the variance between gender and math anxiety, mostly due to manipulation ability

Delage, V., Trudel, G., Retanal, F., & Maloney, E. A. (2021). Spatial anxiety and spatial ability: Mediators of gender differences in math anxiety. Journal of Experimental Psychology: General. Feb 2022. https://doi.org/10.1037/xge0000884

Abstract: Females tend to be more anxious than males while engaging in mathematics, which has been linked to lower math performance and higher math avoidance. A possible repercussion of this gender difference is the underrepresentation of females in STEM fields (science, technology, engineering, and math), as math competencies are an essential part of succeeding in such fields. A related, but distinct, area of research suggests that males tend to outperform females in tasks that require spatial processing (i.e., the ability to mentally visualize, rotate, and transform spatial and visual information). Interestingly, factors from the spatial processing domain (spatial ability and spatial anxiety) are important in explaining gender differences in math anxiety. Here, we examined three types of spatial anxiety and ability (imagery, navigation, and manipulation), as well as math ability, as mediators of gender differences in math anxiety. Undergraduate students (125 male; 286 female) completed assessments of their general level of anxiety, their math anxiety, and their spatial anxiety. They also completed a series of tasks measuring their mathematical skill, their spatial skills, and basic demographics. Results suggest that manipulation anxiety and ability, navigation anxiety, and math ability explained the gender difference in math anxiety, but manipulation anxiety was the strongest mediator of this relation. Conversely, all other measures did not explain the gender difference in math anxiety. These findings help us better understand the gender difference in mathematics, and this is important in reducing the gender gap in STEM fields.


Watching Videos on a Smartphone: Do Small Screens Lead To A Shallower Narrative Experience?

Watching Videos on a Smartphone: Do Small Screens Impair Narrative Transportation? Markus Appel & Christoph Mengelkamp. Media Psychology, Feb 6 2022. https://doi.org/10.1080/15213269.2021.2025109

Abstract: Smartphones are a preferred platform to access audiovisual stories. Prior theory and research suggest that using smaller screens could lead to a shallower narrative experience. In three experiments we examined the influence of screen size (smartphone vs. computer screen) on the experience of being transported into the world of the story (narrative transportation). We further examined interaction effects with manipulations meant to change transportation by means of reviews (Experiment 1, N = 120), consistency of main character information (Experiment 2, N = 139), and prior information meant to facilitate comprehension (Experiment 3, N = 129). Because our series of studies involved theoretically and practically relevant null hypotheses (i.e., screen size does not influence transportation), we added Bayes factor analyses to standard frequentist statistics. A mini meta-analysis was conducted to summarize the results. Taken together, the three experiments indicate that smaller screen size does not impair narrative transportation. Implications and future research are discussed.
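One common way to quantify evidence for a theoretically relevant null, as the authors do with Bayes factors, is the BIC approximation BF01 ~ exp((BIC_alt - BIC_null)/2) (Wagenmakers, 2007). The sketch below applies it to a simulated two-group mean comparison; it is a generic illustration with invented numbers, not the Bayesian analysis reported in the paper.

```python
import numpy as np

def bf01_from_bic(y, group):
    """Approximate Bayes factor for the null (no group difference) via
    BF01 ~= exp((BIC_alt - BIC_null) / 2), using Gaussian models with
    ML variance estimates (shared constants cancel in the difference)."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)                 # null: one common mean
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    rss1 = sum(np.sum((y[group == g] - y[group == g].mean()) ** 2)
               for g in np.unique(group))              # alt: one mean per group
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return float(np.exp((bic1 - bic0) / 2))

# Simulated transportation ratings (invented, not the study's data).
rng = np.random.default_rng(2)
group = np.repeat([0, 1], 60)                # e.g., smartphone vs. monitor
null_scores = rng.normal(5.0, 1.0, 120)      # no true group difference
effect_scores = null_scores + 2.0 * group    # large true difference
bf_null = bf01_from_bic(null_scores, group)      # larger = more support for null
bf_effect = bf01_from_bic(effect_scores, group)  # should be far below 1
```

Unlike a nonsignificant p value, BF01 expresses how much more likely the data are under the null model than under the alternative, which is what licenses conclusions such as "screen size does not impair transportation".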


After the Yellow Vests movement, the French people would largely reject a tax and dividend policy (a carbon tax whose revenues are redistributed uniformly to each adult): overestimate their net monetary losses, wrongly think that the policy is regressive

Yellow Vests, Pessimistic Beliefs, and Carbon Tax Aversion. Thomas Douenne and Adrien Fabre. American Economic Journal: Economic Policy. Feb 2022, Vol. 14, No. 1: Pages 81-110. https://pubs.aeaweb.org/doi/pdfplus/10.1257/pol.20200092

Abstract: Using a representative survey, we find that after the Yellow Vests movement, French people would largely reject a tax and dividend policy, i.e., a carbon tax whose revenues are redistributed uniformly to each adult. They overestimate their net monetary losses, wrongly think that the policy is regressive, and do not perceive it as environmentally effective. We show that changing people’s beliefs can substantially increase support. Although significant, the effects of our informational treatments on beliefs are small. Indeed, the respondents that oppose the tax tend to discard positive information about it, which is consistent with distrust, uncertainty, or motivated reasoning. (JEL D83, H23, H31, Q54, Q58)



Sunday, February 6, 2022

Semesters or Quarters? The Effect of the Academic Calendar on Postsecondary Student Outcomes

Semesters or Quarters? The Effect of the Academic Calendar on Postsecondary Student Outcomes. Valerie Bostwick, Stefanie Fischer, and Matthew Lang. American Economic Journal: Economic Policy. Feb 2022, Vol. 14, No. 1: Pages 40-80. https://pubs.aeaweb.org/doi/pdfplus/10.1257/pol.20190589

Abstract: There exists a long-standing debate in higher education on which academic calendar is optimal. Using panel data on the near universe of four-year nonprofit institutions and leveraging quasi-experimental variation in calendars across institutions and years, we show that switching from quarters to semesters negatively impacts on-time graduation rates. Event study analyses show that the negative effects persist beyond the transition. Using transcript data, we replicate this analysis at the student level and investigate possible mechanisms. Shifting to a semester: (i) lowers first-year grades, (ii) decreases the probability of enrolling in a full course load, and (iii) delays the timing of major choice. (JEL I23, I28)


Women who earned more than their male partners—becoming the "primary breadwinner" and thereby threatening their partner's manhood—were twice as likely to fake orgasms as those who did not make more money than their partners

Do Women Withhold Honest Sexual Communication When They Believe Their Partner’s Manhood is Threatened? Jessica A. Jordan et al. Social Psychological and Personality Science, January 31, 2022. https://doi.org/10.1177/19485506211067884

Abstract: We explored whether women who perceive that their partners’ manhood is precarious (i.e., easily threatened) censor their sexual communication to avoid further threatening their partners’ masculinity. We operationalized women’s perceptions of precarious manhood in a variety of ways: In Study 1, women who made more money than their partners were twice as likely as those who did not to fake orgasms. In Study 2, women’s higher perceptions of partners’ precarious manhood indirectly predicted faking orgasms more, lower sexual satisfaction, and lower orgasm rates through greater anxiety and less honest communication. In Study 3, women who imagined a partner whose masculinity was insecure (vs. secure) were less willing to provide honest sexual communication, via anxiety. Together, the studies demonstrate a relationship between women’s perceptions of partner insecurity, anxiety, sexual communication, and sexual satisfaction.

Keywords: faking orgasms, gender threat, orgasm, precarious manhood, sexual communication



Anti-Bisexual Bias

Gendered Anti-Bisexual Bias: Heterosexual, Bisexual, and Gay/Lesbian People’s Willingness to Date Sexual Orientation Ingroup and Outgroup Members. Mackenzie Ess, Sara E. Burke, Marianne LaFrance. Journal of Homosexuality, Feb 3 2022. https://doi.org/10.1080/00918369.2022.2030618

Abstract: Bisexual people may appear to have more potential romantic partners than people only attracted to one gender (e.g., heterosexual, gay, lesbian people). However, bisexual people’s dating choices are limited by non-bisexual people’s reluctance to date bisexual people. Studies have indicated that some heterosexual, gay, and lesbian people are reluctant to date bisexual people, particularly bisexual men. We extend current understandings of gendered anti-bisexual bias through investigating heterosexual, bisexual, gay, and lesbian people’s reported willingness to date within and outside of their sexual orientation groups. Participants (n = 1823) varying in sexual orientation completed measures regarding their willingness to engage in a romantic relationship with heterosexual, bisexual, gay, and lesbian individuals. Heterosexual and gay/lesbian people were less willing to date bisexual people than bisexual people were to date them, consistent with anti-bisexual bias rather than mere in-group preference. Preferences against dating bisexual men appeared particularly strong, even among bisexual women.

Keywords: Bisexuality, binegativity, dating, gender, LGBTQ+, sexual prejudice, stereotyping


Spenders were more likely to have psychopathic tendencies, but less likely to be Machiavellian; savers rated themselves as more attractive, healthy, & intelligent than spenders; spenders had more liberal political views & report higher emotional intelligence

Furnham, A., Robinson, C., & Grover, S. (2022). Spenders and savers, tightwads and spendthrifts: Individual correlates of personal ratings of being a spender or a saver. Journal of Neuroscience, Psychology, and Economics, Feb 2022. https://doi.org/10.1037/npe0000155

Abstract: There is limited literature on the causes, correlates, and consequences of being a saver (tightwad) or a spender (spendthrift). This paper reports on five studies which look at demographic, bright- and dark-side personality, money belief, and self-evaluation correlates of the extent to which a person considers themselves a spender or saver. In each study, adult participants indicated their spender–saver habits on a single scale and completed a number of tests. The first study looked at trait correlates and showed savers were close-minded, conscientious, stable extraverts. It also showed, as predicted, that savers were more likely to associate money with security, and not love or freedom, and to claim better financial knowledge. The results from the second study on dark-side personality correlates indicated that spenders were more likely to have psychopathic tendencies, but less likely to be Machiavellian. The third study on personality disorder correlates of spender–saver tendencies suggested that spenders were likely to have elevated Cluster B personality disorders. The fourth study examined self-beliefs and showed savers rated themselves as more attractive, healthy, and intelligent than spenders. The fifth study, also using various self-ratings, showed spenders had more liberal political views, reported higher emotional intelligence, and were less likely to own their own home, while savers rated their physical health higher and saw themselves as more entrepreneurial. Overall, the results suggest the simple saver–spender question is logically correlated with a number of individual difference variables, with savers having a more positive profile. Implications and limitations are considered.




Consumers who avoid fast food do so not because they think it is unhealthy, but because eating it causes them guilt, and resisting the "sin" gives them a sense of accomplishment

Why Do and Why Don’t People Consume Fast Food?: An Application of the Consumption Value Model. Kiwon Lee, Jonghan Hyun, Youngmi Lee. Food Quality and Preference, Feb 5 2022. https://doi.org/10.1016/j.foodqual.2022.104550

Highlights

• Regular consumers approach fast food mainly due to convenience and taste.

• Non-regular consumers feel a sense of accomplishment when not consuming fast food.

• Regular consumers may avoid fast food when they encounter food safety issues.

• Non-regular consumers may approach fast food when experiencing time pressure.

Abstract: This study explores the nature of the consumption values that differentiate regular consumers of fast food from non-regular consumers using the consumption value model. Data were collected from a total of 307 respondents via a self-administered online survey. The collected data were then classified into two groups, regular consumers (RCs, n=140) and non-regular consumers (non-RCs, n=167), based on the respondents’ self-identification as either a regular fast food consumer or a non-consumer and their fast food consumption frequency (≥ 2-3 times a week for RCs and ≤ 1 time a fortnight for non-RCs). Using factor analysis, 15 factors were extracted for the six consumption values (functional, social, emotional, conditional, epistemic, and process values). Discriminant analysis showed that 5 of those 15 factors are influential in discriminating RCs from non-RCs. Specifically, RCs were found to consume fast food due to convenience and taste, whereas non-RCs were found to avoid fast food due to feelings of guilt when consuming fast food and a sense of accomplishment when not consuming it. Also, RCs and non-RCs were found to deviate from their normal behavior when certain conditions are present (e.g., food safety issues, time pressure, stress). In all, the results of this study provide marketers with a clearer understanding of the consumption values that regular consumers and non-regular consumers perceive in fast foods, further enabling the development of marketing strategies that appeal better to current and potential customers.

Keywords: fast food; consumption value; convenience; taste; guilt