Sunday, November 24, 2019

Spearman’s g Found in 31 Non-Western Nations, 52,340 People: Strong Evidence That g Is a Universal Phenomenon

Spearman’s g Found in 31 Non-Western Nations: Strong Evidence That g Is a Universal Phenomenon. Russell T. Warne. Psychological Bulletin, 145(3), 237-272. 2019. http://dx.doi.org/10.1037/bul0000184

Abstract: Spearman’s g is the name for the shared variance across a set of intercorrelating cognitive tasks. For some—but not all—theorists, g is defined as general intelligence. While g is robustly observed in Western populations, it is questionable whether g is manifested in cognitive data from other cultural groups. To test whether g is a cross-cultural phenomenon, we searched for correlation matrices or data files containing cognitive variables collected from individuals in non-Western, nonindustrialized nations. We subjected these data to exploratory factor analysis (EFA) using promax rotation and 2 modern methods of selecting the number of factors. Samples that produced more than 1 factor were then subjected to a second-order EFA using the same procedures and a Schmid-Leiman solution. Across 97 samples from 31 countries totaling 52,340 individuals, we found that a single factor emerged unambiguously from 71 samples (73.2%) and that 23 of the remaining 26 samples (88.5%) produced a single second-order factor. The first factor in the initial EFA explained an average of 45.9% of observed variable variance (SD = 12.9%), which is similar to what is seen in Western samples. One sample that produced multiple second-order factors only did so with 1 method of selecting the number of factors in the initial EFA; the alternate method of selecting the number of factors produced a single higher-order factor. Factor extraction in a higher-order EFA was not possible in 2 samples. These results show that g appears in many cultures and is likely a universal phenomenon in humans.

Public Significance Statement: This study shows that one conceptualization of intelligence—called Spearman’s g—is present in over 90 samples from 31 non-Western, nonindustrialized nations. This means that intelligence is likely a universal trait in humans. Therefore, it is theoretically possible to conduct cross-cultural research on intelligence, though culturally appropriate tests are necessary for any such research.

KEYWORDS: Spearman’s g, cross-cultural psychology, general cognitive ability, human intelligence, factor analysis

General Discussion of Results

We conducted this study to create a strong test of the theory that general cognitive ability is a cross-cultural trait by searching for g in human populations where g would be the least likely to be present or would be weakest. The results of this study are remarkably similar to results of EFA studies of Western samples, which show that g accounts for approximately half of the variance among a set of cognitive variables (e.g., Canivez & Watkins, 2010). In our study, the first extracted factor in the 97 EFAs of data sets from non-Western, nonindustrialized countries explained an average of 45.9% of observed variable variance.
Moreover, 73.2% of the data sets unambiguously produced a single factor, regardless of the method used to select the number of factors in the EFA. Of the remaining data sets, almost every one in which a second-order EFA was possible produced a single general factor. The only exceptions were from Grigorenko, Ngorosho, Jukes, and Bundy (2006) and Gurven et al. (2017). The Grigorenko et al. (2006) dataset produced two general factors only if one sees the modified Guttman method of selecting the number of first-order factors as a more realistic solution than MAP. Given the modified Guttman rule’s penchant for overfactoring and the generally accurate results from MAP in simulation studies (Warne & Larsen, 2014), it seems more likely that even the Grigorenko et al. (2006) dataset has two first-order factors and one general factor that accounts for 49.9% of extracted variance. The Gurven et al. (2017) samples both produced two factors in an initial EFA, but the factor extraction process failed for the second-order EFAs for both samples. The inability to test whether the two initial factors could form a general factor makes the Gurven et al. (2017) data ambiguous with regard to the evidence of the presence of g in its Bolivian samples.
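The two-stage procedure the authors describe — an oblique first-order EFA followed by a second-order EFA and a Schmid-Leiman transformation — can be sketched numerically. The following is a minimal illustration of the Schmid-Leiman step only, with invented loading values, not the authors' code or data:

```python
import numpy as np

def schmid_leiman(F, g):
    """Schmid-Leiman transformation of a higher-order factor solution.

    F : (p, k) first-order pattern loadings from an oblique EFA.
    g : (k,)   loadings of the k first-order factors on one second-order factor.
    Returns each variable's direct loading on the general factor and the
    residualized group-factor loadings.
    """
    g = np.asarray(g, dtype=float)
    general = F @ g                        # loading of each variable on g
    residual = F * np.sqrt(1.0 - g ** 2)   # group loadings with g partialed out
    return general, residual

# Hypothetical loadings: 6 variables on 2 correlated first-order factors,
# both loading .80 on a single second-order g.
F = np.array([[0.7, 0.0], [0.6, 0.0], [0.8, 0.0],
              [0.0, 0.7], [0.0, 0.6], [0.0, 0.8]])
g = np.array([0.8, 0.8])
general, residual = schmid_leiman(F, g)

# Sanity check: the transformation preserves the model-implied common variance.
# With Phi = g g' + diag(1 - g^2), the orthogonalized solution reproduces F Phi F'.
Phi = np.outer(g, g) + np.diag(1.0 - g ** 2)
assert np.allclose(np.outer(general, general) + residual @ residual.T, F @ Phi @ F.T)
```

In this sketch each variable's general-factor loading is the product of its first-order loading and that factor's second-order loading (e.g., .70 × .80 = .56), which is why a strong second-order g implies substantial g loadings on all variables.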
Although we did not preregister any exact predictions for our study, we are astonished at the uniformity of these results. Before this study began, we expected that many samples would produce g, but also that enough samples would fail to do so for us to conduct a post hoc exploratory analysis investigating why some samples were more likely to produce g than others. With only three samples that did not unambiguously produce g, we were unable to undertake these planned exploratory analyses because g appeared too consistently in the data.
Thus, Spearman’s g appeared in at least 94 of the 97 data sets (96.9%) from 31 countries that we investigated, and the remaining three samples produced ambiguous results. Because these data sets originated in cultures and countries where g would be least likely to appear if it were a cultural artifact, we conclude that general cognitive ability is likely a universal human trait. The characteristics of the original studies that reported these data support this conclusion. For example, some of these data sets were collected by individuals who are skeptical of the existence or primacy of g in general or in non-Western cultures (e.g., Hashmi et al., 2010; Hashmi, Tirmizi, Shah, & Khan, 2011; O’Donnell et al., 2012; Pitchford & Outhwaite, 2016; Stemler et al., 2009; Sternberg et al., 2001, 2002). One would think that these investigators would be most likely to include variables in their data sets that would form an additional factor. Yet, with only three ambiguous exceptions (Grigorenko et al., 2006; Gurven et al., 2017), these researchers’ data still produced g. Additionally, many of these data sets were collected with no intention of searching for g (e.g., Bangirana et al., 2015; Berry et al., 1986; Engle, Klein, Kagan, & Yarbrough, 1977; Kagan et al., 1979; McCoy, Zuilkowski, Yoshikawa, & Fink, 2017; Mourgues et al., 2016; Ord, 1970; Rehna & Hanif, 2017; Reyes et al., 2010; Tan, Reich, Hart, Thuma, & Grigorenko, 2014). And yet a general factor developed anyway. It is important to recognize, though, that the g factor explained more observed variable variance in some samples than in others.
For those who wish to equate g with a Western view of “intelligence,” this study presents several problems for the argument that Western views of intelligence are too narrow. First, in our search, we discovered many examples of non-Western psychologists using Western intelligence tests with little adaptation and without expressing concern about the tests’ overly narrow measurement techniques. Theorists who argue that the Western perspective of intelligence is too culturally narrow must explain why these authors use Western (or Western-style) intelligence tests and why these tests have found widespread acceptance in the countries we investigated (Oakland, Douglas, & Kane, 2016). Another difficulty for the argument that Western views of intelligence are too narrow is the fact that tests developed in these nonindustrialized, non-Western cultures positively correlate with Western intelligence tests (Mahmood, 2013; van den Briel et al., 2000). This implies that these indigenous instruments are also g-loaded to some extent, which would support Spearman’s (1927) belief in the indifference of the indicator.
One final issue bears mention. Two peer reviewers raised the possibility that developmental differences across age groups could be a confounding variable because a g factor may be weaker in children than adults. To investigate this possibility, we conducted two post hoc nonpreregistered analyses. First, we found that the correlation between the age of the sample (either its mean or the midpoint of the sample’s age range) and the variance explained by the first factor in the dataset was r = .127 (r2 = .016, n = 84, p = .256). Because a more discrete developmental change in the presence or strength of a g factor was plausible, we also divided the data sets into five age groups: <7 years (10 samples), 7–12.99 years (34 samples), 13–17.99 years (12 samples), 18–40.99 years (21 samples), and ≥41 years (five samples). All of these age groups had a mean first factor of similar strength (between 41.79% and 49.63%), and the null hypothesis that all age groups had statistically equal means could not be rejected (p = .654, η2 = .031). These analyses indicate that there was no statistical relationship between sample age and the strength of the g factor in a dataset.
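The η² effect size reported for the age-group comparison is simply SS_between / SS_total. A minimal sketch of that computation follows; the group values are invented to mimic the pattern described (nearly equal group means) and are not the study's data:

```python
# Eta-squared (SS_between / SS_total), the ANOVA effect size reported for the
# age-group comparison above.
def eta_squared(groups):
    all_vals = [x for grp in groups for x in grp]
    grand = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(grp) * (sum(grp) / len(grp) - grand) ** 2
                     for grp in groups)
    # Total sum of squares around the grand mean.
    ss_total = sum((x - grand) ** 2 for x in all_vals)
    return ss_between / ss_total

# Five hypothetical "age groups" whose mean first-factor strength barely differs,
# mimicking the reported range of group means (41.79% to 49.63%).
groups = [[44.0, 47.0, 42.0], [46.0, 45.0, 48.0], [43.0, 49.0, 44.0],
          [47.0, 44.0, 46.0], [45.0, 43.0, 48.0]]
print(eta_squared(groups))  # modest value: the group means are nearly equal
```

When group means are identical the statistic is 0, and when every group is internally constant but the groups differ it is 1, so small observed values like the reported η² = .031 indicate that age group explains little of the variation in factor strength.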

Methodological Discussion

A skeptic of g could postulate that our results are a statistical artifact of the decisions we used to conduct a factor analysis. Some data sets in our study had been subjected to EFA in the past, and the results often differed from ours (Attallah et al., 2014; Bulatao & Reyes-Juan, 1968; Church, Katigbak, & Almario-Velazco, 1985; Conant et al., 1999; Dasen, 1984; Dawson, 1967b; Elwan, 1996; Guthrie, 1963; Humble & Dixon, 2017; Irvine, 1964; Kearney, 1966; Lean & Clements, 1981; McFie, 1961; Miezah, 2015; Orbell, 1981; Rasheed et al., 2017; Ruffieux et al., 2010; Sen, Jensen, Sen, & Arora, 1983; Sukhatunga et al., 2002; van den Briel et al., 2000; Warburton, 1951). In response, we wish to emphasize that we chose procedures a priori that are modern methods accepted among experts in factor analysis (e.g., Fabrigar et al., 1999; Larsen & Warne, 2010; Thompson, 2004; Warne & Larsen, 2014). The use of promax rotation, for example, might be seen as an attempt to favor correlated first-order factors—which are mathematically much more likely to produce a second-order g than orthogonal factors. However, promax rotation does not force factors to be correlated, and indeed uncorrelated factors are possible after a promax rotation. Therefore, the use of promax rotation permitted a variety of potential factor solutions—including uncorrelated factors—and permitted the strong test of g theory that we desired.
Another potential source of criticism would be our methods of retaining the number of factors in a dataset. The original Guttman (1954) rule of retaining all factors with an eigenvalue of 1.0 or greater is the most common method used in the social sciences, probably because it is the default method on many popular statistical analysis packages (Fabrigar et al., 1999). However, the method can greatly overfactor, especially when a dataset has a large number of variables, the sample size is large, and when factor loadings are weak (Warne & Larsen, 2014). These circumstances are commonly found in cognitive data sets, which are frequently plagued by overfactoring (Frazier & Youngstrom, 2007). This is why we chose to use more conservative and accurate methods of retaining the number of factors (Warne & Larsen, 2014). The use of MAP is especially justified by its strong performance in simulation studies and its tendency to rarely overfactor. MAP is insensitive to sample size, the correlation among observed variables, factor loading strength, and the number of observed variables (Warne & Larsen, 2014), all of which varied greatly among the 97 analyzable data sets.
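The two retention rules contrasted here can be sketched compactly. The code below follows the standard formulations of the original Guttman rule (retain eigenvalues ≥ 1) and Velicer's MAP test (retain the number of components that minimizes the average squared partial correlation); it is an illustration on a hypothetical one-factor matrix, not the authors' analysis scripts:

```python
import numpy as np

def guttman(R):
    """Original Guttman (1954) rule: retain all factors with eigenvalue >= 1."""
    return int((np.linalg.eigvalsh(R) >= 1.0).sum())

def velicer_map(R):
    """Velicer's Minimum Average Partial (MAP) test, standard formulation."""
    p = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]                    # sort descending
    loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
    off = ~np.eye(p, dtype=bool)
    avg_sq = [(R[off] ** 2).mean()]                      # m = 0: nothing removed
    for m in range(1, p):
        C = R - loadings[:, :m] @ loadings[:, :m].T      # partial covariance
        diag = np.diag(C)
        if np.any(diag < 1e-10):                         # residual variance exhausted
            break
        P = C / np.sqrt(np.outer(diag, diag))            # partial correlations
        avg_sq.append((P[off] ** 2).mean())
    return int(np.argmin(avg_sq))                        # number of factors to retain

# A clean one-factor correlation matrix: 6 variables all loading .70 on one factor.
R = np.full((6, 6), 0.49)
np.fill_diagonal(R, 1.0)
assert guttman(R) == 1
assert velicer_map(R) == 1
```

On this idealized matrix the two rules agree, but MAP's minimum-seeking criterion is what makes it conservative in practice: removing components beyond the true number inflates the remaining partial correlations rather than shrinking them.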
Indeed, it is because of our use of modern methods of factor selection and rotation that we believe that prior researchers never noticed g as a ubiquitous property of cognitive data in non-Western groups. Many prior researchers used varimax rotation and the original Guttman rule, likely because these methods were mathematically and computationally easier in the days before inexpensive personal computers or because both are the defaults in popular statistics packages today. (Additionally, the older data sets predate the invention of promax rotation and/or MAP.) But both of these methods obscure the presence of g. As an extreme example, Guthrie’s (1963) data consist of 50 observed variables (the most of any dataset in our study) that produced 22 factors when he subjected them to these procedures. Some of Guthrie’s (1963) factors were weak, uninterpretable, or defined by just one or two variables. In our analyses we found five (using MAP) or 10 (using the modified Guttman rule) first-order factors; when subjected to the second-order EFA, the data clearly produced a single factor with an obvious interpretation: g.
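Guthrie's 22 factors are consistent with the eigenvalue rule's known behavior on large variable sets. A quick simulation (invented noise data, not Guthrie's) shows the eigenvalue-≥-1 rule retaining many spurious "factors" even when all 50 variables are uncorrelated in the population:

```python
import numpy as np

# With 50 variables and a modest sample, the eigenvalue >= 1 rule retains
# many spurious factors even from pure, uncorrelated noise. Simulated
# illustration only; this is not Guthrie's (1963) actual data.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 50))        # 200 cases, 50 unrelated variables
R = np.corrcoef(X, rowvar=False)          # 50 x 50 sample correlation matrix
n_retained = int((np.linalg.eigvalsh(R) >= 1.0).sum())
print(n_retained)                         # far more than the true count of 0
```

Sampling error alone pushes roughly half the eigenvalues of such a matrix above 1, which is why the rule overfactors most severely exactly where cognitive data sets live: many variables, moderate samples.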
The results of this study are highly unlikely to be a measurement artifact because the original researchers used a wide variety of instruments to measure cognitive skills in examinees. While some of these instruments were adaptations of Western intelligence tests (e.g., Abdelhamid, Gómez-Benito, Abdeltawwab, Bakr, & Kazem, 2017), some samples included variables that were based on Piagetian tasks (e.g., Dasen, 1984; Kagan et al., 1979; Orbell, 1981). Other samples included variables that were created specifically for the examinees’ culture (e.g., Mahmood, 2013; Stemler et al., 2009; Sternberg et al., 2001; van den Briel et al., 2000) or tasks that did not resemble Western intelligence test subtests (Bangirana et al., 2015; Berry et al., 1986; Gauvain & Munroe, 2009). There were also several samples that included measures of academic achievement in their data sets (e.g., Bulatao & Reyes-Juan, 1968; Guthrie, 1963; Irvine, 1964). The fact that g emerged from such a diverse array of measurements supports Spearman’s (1927) belief in the “indifference of the indicator” and shows that any cognitive task will correlate with g to some degree.
Other readers may object to our use of EFA at all, arguing that a truly strong test of g theory would be to create a confirmatory factor analysis (CFA) model in which all scores load onto a general factor. However, we considered and rejected this approach because CFA only tests the model(s) at hand and cannot generate new models from a dataset (Thompson, 2004). In this study, EFA procedures did not “know” that we were adherents to g theory when producing the results. Rather, “EFA methods . . . are designed to ‘let the data speak for themselves,’ that is, to let the structure of the data suggest the most probable factor-analytic model” (Carroll, 1993, p. 82). Thus, if a multifactor model of cognitive abilities were more probable in a dataset than a single g factor, then EFA would be more likely to identify it than a CFA would. The fact that these EFAs so consistently produced g in their data is actually a stronger test of g than a set of CFAs would have been because EFA was more likely to produce a model that disproved g than a CFA would. CFA is also problematic in requiring the analyst to generate a plausible statistical model—a fact that Carroll (1993, p. 82) recognized when he wrote:
It might be argued that I should have used CFA. . . . But in view of wide variability in the quality of the analyses applied in published studies, I could not be certain about what kind of hypotheses ought to be tested on this basis. (Carroll, 1993, p. 82)
We agree with Carroll on this point. CFA also requires exactly specifying the appropriate model(s) to be tested. While this is a positive aspect of CFA in most situations, it was a distinct disadvantage when we were merely trying to establish whether g was present in a dataset that may not have been collected for that purpose. This is because most authors did not report a plausible theoretical model for the structure of their observed variables, and there was often insufficient information for us to create our own plausible non-g models that could be compared with a theory of the existence of Spearman’s g in the data. Indeed, some researchers did not collect their data with any model of intelligence in mind at all (e.g., McCoy et al., 2017). By having EFA generate a model for us, we allowed plausible competing models to emerge from each dataset and examined them afterward to see if they supported our theory of the existence of Spearman’s g in non-Western cultures. Another problem with CFA’s requirement of prespecified models is that some theories of cognitive abilities include g as part of a larger theoretical structure of human cognition (e.g., Canivez, 2016; Carroll, 1993). How the non-g parts of a model might relate to g and to the observed variables is rarely clear.
Another advantage of EFA over CFA is that the former uses the data to generate a new model atheoretically. Moreover, the subjective decisions in an EFA (e.g., factor rotation method, second-order procedures, standards used to judge the number of factors) are easily preregistered, whereas the subjective decisions in a CFA (e.g., when to use modification indices, how to arrange variables into factors, the number of non-g factors to include in a model) often cannot realistically be preregistered—or even anticipated before knowing which variables were collected—in a secondary data analysis if the data were not collected in a theoretically coherent fashion (as was often the case for our data sets). By preregistering the subjective decisions in an EFA, we ensured that those decisions could not bias our results toward supporting our preferred view of cognitive abilities.
Finally, we want to remind readers that our dataset search and analysis procedures were preregistered and time stamped at the very beginning of the study before we engaged in any search procedures or analyses. This greatly reduces the chance for us to reverse engineer our methods to ensure that they would produce the results we wanted to obtain. Still, deviations from our preregistration occurred. When we deviated from the preregistration protocol, we stated so explicitly in this article, along with our justification for the deviation. Additionally, some unforeseen circumstances presented themselves as we conducted this study. When these circumstances required subjective decisions after we had found the data, we erred on the side of decisions that would maximize the chances that the study would be a strong test of g theory. Again, we have been transparent about all of these unforeseen circumstances and the decisions we made in response to them.

Full paper, maps, references, etc., at the link at the beginning.

There is a generalized bias towards negativity, but with individual-level differences, which appear to be partly pre-dispositional (durable, with correlations with demographic, partisan & personality measures)

Individual-level differences in negativity biases in news selection. Sarah Bachleda et al. Personality and Individual Differences, November 23 2019, 109675. https://doi.org/10.1016/j.paid.2019.109675

Abstract: Literatures across the social sciences highlight the tendency for humans to be more attentive to negative information than to positive information. We focus here on negativity biases in news selection (NBNS) and suggest that this bias varies across individuals and contexts. We introduce a survey-based measure of NBNS which is used to explore the correlates of negative news bias in surveys in the U.S., Canada, and Sweden. We find that some respondents are more prone to NBNS than others. There is evidence of contextual effects, but panel data suggests that some of the individual-level differences persist over time. NBNS likely reflects some combination of long-term personality differences and short-term situational factors, and is systematically related to a number of economic and political attitudes.

Keywords: Political communication; Personality differences; News consumption; Negativity bias


1. Durable versus context-driven individual-level variation in negativity biases

"including Lilienfeld and Latzman's (2014) finding that although conservatives are more responsive to negative information on average, both conservatives and liberals respond to negative information when it poses a threat to their partisan identity; or Federico, Johnston and Lavine's (2014) finding that evidence of negativity biases will be conditional on political engagement."


6. Discussion

There is reason to expect that individual-level variation in negativity biases has an important and durable impact on individuals’ news media use, as well as on a range of economic and political attitudes. This paper has taken a first step toward measuring a negativity bias in news selection. We find that while on balance there is a bias towards negativity, there are individual-level differences. These differences appear to be partly pre-dispositional; that is, they appear to be durable, demonstrated both by correlations with demographic, partisan and personality measures, and by within-respondent correlations across time. We also find that these individual-level differences are correlated with a variety of economic and political attitudes. We take these results as evidence of the potential importance of negativity biases in news selection (NBNS) in understanding attitudes about governments, the economy, and other politically and economically-relevant attitudes. We also suspect that NBNS moderates the impact of news content – those who are high in NBNS may select into a rather different information stream than those who are low in NBNS, which could subsequently shape their political perspectives. Although this application of the measure is not tested here, we thus see disentangling the relationship between political news selection and political preferences as an important avenue for future research. There is also potential for work that explores the degree to which more nuanced variation in tone – i.e., not just positive or negative, but gradations across that range – matters for story selection and measures of negativity biases. Our headlines do not vary in tone much within the negative and positive categories (see Appendix Fig. 2); this was done by design. 
But past work has suggested nonlinearities in negativity biases (e.g., Ito and Cacioppo, 2005), and these could be more fully explored using headlines that vary systematically in degrees of positivity or negativity. Finally, an exploration of the relationship between NBNS and other measures of negativity biases will be critical for future work. Given that other more standard measures of negativity biases are primarily lab-based, we have not examined them in the survey data used here. However, understanding the extent to which NBNS is a domain-specific negativity bias, versus the consequence of a more domain-general bias, requires further research. Our results provide only a first step in this direction. In doing so, however, we regard the preceding analyses as a first signal that individual-level differences in news preferences may be one way in which personality differences are relevant to political attitudes and behavior.



Samples

U.S. Sample
Data for the U.S. study were collected as part of an online panel survey from a sample provided by Qualtrics, which recruited subjects using ClearVoice research. ClearVoice maintains a standing panel of survey respondents who were recruited to the platform through a combination of targeted emails, advertisements, and website intercepts. These individuals then opt-in to taking surveys and are recruited to participate in individual studies either by email or by clicking on a dashboard link. ClearVoice sent emails to 61,865 panelists with the goal of recruiting a broad national sample of at least 3,667 Americans to participate in the study.

Swedish Sample
Data for the first Swedish sample come from the Citizen Panel (original Swedish name: Medborgarpanelen – MP), which is a panel survey fielded online by the Laboratory of Opinion Research (LORE). Specifically, the data come from Citizen Panel 16 (MP16), which was fielded between June 9 and June 30, 2015. The panel used a mixed sampling design whereby 84 percent of the gross sample were opt-in and the remaining 16 percent were probability based. The panel wave included five separate modules and our data come from module 3 (Negativity Biases). This module yielded 12,867 complete responses for an AAPOR participation rate (RR5) of 92%.

Data for the second Swedish sample also come from the Citizen Panel. Specifically, the data come from Citizen Panel 29 (MP29), which was fielded between March 22 and April 16, 2018. The panel used a mixed sampling design whereby 76 percent of the gross sample were opt-in and the remaining 24 percent were probability based. The panel wave included five separate modules and our data come from module 2 (Negativity Biases in News Selection). Additional information about the Citizen Panels can be found at http://lore.gu.se/surveys/citizen.

Canadian Sample
The Canadian data come from the 2015 Canadian Election Study. Full documentation for the study can be found at: http://ces-eec.arts.ubc.ca/english-section/surveys/. The study was funded by the Social Sciences and Humanities Research Council of Canada.


Saturday, November 23, 2019

From 2018... Seven deadly sins of potential romantic partners: dealbreakers in mating preferences

From 2018... Seven deadly sins of potential romantic partners: dealbreakers in mating preferences. Mihály Berkics, Zsófia Csajbók. Conference: 13th Conference of the European Human Behaviour and Evolution Association At: Pécs, Hungary, April 2018. https://www.researchgate.net/publication/325961275

Abstract
Objective. Most of the research on mate preferences has focused on what people desire in a partner, and not on dealbreakers, i.e. what traits make people reject a potential mate. Recently, Jonason et al. (2015) published a multi-study paper presenting extensive research on dealbreakers, emphasizing their importance. However, their items loaded on a single factor, so they turned to sorting the items by independent coders into face-valid categories to establish more distinguished facets of dealbreakers. The goal of the present research is to identify dealbreakers in a large sample with factor-analytic methods.

Methods. In Study 1, potential dealbreakers were collected with open-ended questions from a sample of 173 participants. Based on their responses, in Study 2 a closed-ended questionnaire was compiled and administered to a large sample (N = 2,445) of heterosexual respondents (48% female), who had to rate each dealbreaker trait according to how likely it would make the participant reject a potential partner. First exploratory, then confirmatory factor analyses were performed on these ratings. Participants also rated themselves on 23 desirable traits representing 7 factors of mate preferences (from Csajbók & Berkics, 2017).

Results. Seven factors of dealbreakers were confirmed: loserness, hostility, bad hygiene, arrogance, ugliness, overattachment, and abusiveness. Women in general were more selective, i.e. they were more likely to reject prospective partners with undesirable traits, except for ugliness, where males scored higher. Individual differences were also found, as participants' ratings of themselves predicted which dealbreakers they found to be more or less repulsive.

Conclusions. Dealbreakers can be measured as factors just as desirable traits in a potential mate. This offers a more nuanced method to study sex and individual differences with regards to what traits make people reject a candidate when looking for a partner.

Human belief formation is sensitive to social rewards and punishments, such that beliefs are sometimes formed based on unconscious expectations of their likely effects on other agents

Socially Adaptive Belief. Daniel Williams. Nov 2019. Forthcoming in Mind and Language. https://www.academia.edu/40935572/Socially_Adaptive_Belief

Abstract: I outline and defend the hypothesis that human belief formation is sensitive to social rewards and punishments, such that beliefs are sometimes formed based on unconscious expectations of their likely effects on other agents - agents who frequently reward us when we hold ungrounded beliefs and punish us when we hold reasonable ones. After clarifying this phenomenon and distinguishing it from other sources of bias in the psychological literature, I argue that the hypothesis is plausible on theoretical grounds: in a species with substantial social scrutiny of beliefs, forming beliefs in a way that is sensitive to their likely effects on other agents leads to practical success. I then show how the hypothesis accommodates and unifies a range of psychological phenomena, including confabulation and rationalisation, positive illusions, and identity protective cognition.

My comments: I would add to all this that those who are not so easily influenced by other agents' punishments and rewards may simply give "excessive" weight to internal reputation (e.g., being uncomfortable with oneself after bending to the consensus view when one believes the data do not clearly support, or even contradict, the others' position).

---
5. Conclusion
The core claim of this paper has been simple: the way in which we form beliefs is sensitive to their effects on other agents. I have argued that this hypothesis is plausible on theoretical grounds in light of distinctive characteristics of human social life, and I have identified several putative examples of this phenomenon in a range of different areas.  These three examples are not supposed to be exhaustive. Collectively, however, they illustrate important features of human cognition that theorists from a range of different fields have sought to illuminate by appeal to the influence of social incentives on belief formation. As I have noted, some of these examples are more controversial than others.  My aim in this paper has not been to conclusively vindicate SAB but to render it plausible in the hope that this might spur future research on this phenomenon. To that end, I will conclude by noting three important areas for future research.

First, it would be beneficial in future work to have a more formal taxonomy of the various ways in which social motives influence belief formation. These motives are heterogeneous: to be socially and sexually desirable, to build, maintain, and strengthen relationships and alliances, to attain social dominance and prestige, and so on. It would be useful to have a more systematic understanding of how this diverse array of complex social goals guides the way in which we seek out and process information.  Second, I have not addressed in any detail the psychological mechanisms and processes that underlie socially adaptive belief formation. A more rigorous treatment in the future should rectify this. As I noted in Section 3.1, motivated cognition in general is facilitated by a variety of different strategies and there is no reason to think that socially adaptive belief formation would be different. Nevertheless, the treatment of this topic here has been shallow. Remedying this defect is a crucial task for future work in both philosophy and psychology.

Finally, and most importantly, future research should focus on a more rigorous examination of the evidence for and against SAB. Importantly, there are really two issues here. First, although I have tried to explain why SAB offers a plausible explanation of the phenomena outlined in Section 4, I have also noted that in most cases there are competing explanations of such phenomena that make no reference to social incentives: for example, the second-order ignorance widely thought to drive confabulation, the purely personal hedonic and motivational benefits of positive illusions, and the combination of in-group trust and unfortunate epistemic circumstances alleged to underpin the relationship between group identity and ungrounded beliefs. Future work should search for more effective ways of adjudicating such controversies. To take only one example, SAB makes a straightforward prediction: manipulating people’s expectations about the social consequences of candidate beliefs should influence the way in which they seek out and process information. Future experimental work should look for ways to test this prediction.

Just as important, however, is a more theoretical question that I have largely ignored throughout this paper: even granting that social incentives influence the way in which we seek out and process information, why treat the cognitive attitudes that result from such incentives as beliefs?9 Many philosophers and psychologists have sought to draw a distinction between different kinds of cognitive attitudes that are often subsumed under the general term “belief.”10 Although none of the distinctions such theorists have drawn, so far as I am aware, aligns straightforwardly with the difference between socially adaptive beliefs and ordinary world-modelling beliefs that I have outlined here, one might nevertheless worry that the functional properties of the former are sufficiently different from those of the latter to warrant status as a different kind of cognitive attitude. To take only the most obvious example, socially adaptive beliefs are typically much less responsive to evidence than ordinary beliefs. If one individuates cognitive attitudes by their functional properties, doesn’t this threaten the idea that they constitute the same kind of attitude?

From my perspective, this line of argument is better thought of as a potential clarification of SAB than a critique. After all, implicit in the theoretical argument of Section 2 is that one should expect socially adaptive beliefs to function differently from ordinary world-modelling beliefs. That is, insofar as their function is to elicit desirable responses from other agents, one would expect their functional properties to be adapted to this function. For example, one would expect agents to shield socially adaptive beliefs from counter-evidence, to be emotionally invested in such beliefs, to advertise them to others, to be reluctant to draw implications from such beliefs that are not themselves socially adaptive, and to be reluctant to act on such beliefs outside of social contexts.  Indeed, I noted in Section 3 that beliefs that we are less likely to act on are the prime candidates for the influence of motivational influences such as social goals. If one concludes from such functional differences that socially adaptive beliefs are not really beliefs at all but rather a different kind of cognitive attitude merely masquerading as beliefs, that would be an important theoretical clarification of SAB.

Nevertheless, it is a notoriously difficult philosophical question how to functionally individuate beliefs (and psychological kinds more generally), and there are equally persuasive considerations for treating socially adaptive cognitive attitudes as a kind of belief. For example, they guide sincere verbal assertions, which is plausibly the most important cue used by ordinary people for belief ascription (Rose et al. 2014), and the functional differences just outlined are themselves differences of degree, not kind.  Agents do in fact act on the kinds of socially adaptive attitudes outlined in Section 4, they do use them in reasoning, and they are not literally impervious to counterevidence.  Further, the appeal to motivational influences is intended to explain the functional differences between motivated beliefs and non-motivated beliefs without appealing to a difference of kind in the relevant cognitive attitudes. A drug addict motivated to deny her drug problem, for example, also harbours beliefs with fundamentally different functional properties to ordinary beliefs (Pickard 2016). Rather than explaining such functional differences by introducing a distinct cognitive attitude, it is plausibly more illuminating to explain them in terms of the way in which a single kind of cognitive attitude adapts to the influence of the agent’s motivations. It may be that something similar should be said about socially adaptive beliefs.

These brief remarks barely scratch the surface of this complex issue. For a fully satisfying understanding of the way in which the contents of our minds are shaped by the structure of our social worlds, however, this is an issue that must be addressed in future work.

These studies showed that being alone with others was worse for people’s affective outcomes and sense of belonging than being completely alone, contrary to hypotheses

Being alone with others: A unique form of social contact and its impact on momentary positive affect. Karin Sobocko. PhD Thesis, 2019. https://curve.carleton.ca/system/files/etd/bfbbd316-51d1-4c7d-9f75-77001610c855/etd_pdf/04c82a9d7323d33a0e71e705c6358d9d/sobocko-beingalonewithothersauniqueformofsocialcontact.pdf

Abstract: Social relationships are essential to human well-being. Although people receive the most benefit from interactions with others who are close to them (Reis, Sheldon, Gable, Roscoe, & Ryan, 2000), the need for human contact can also be satisfied through minimal interactions with others (Sandstrom & Dunn, 2014a, 2014b). This dissertation extended the research regarding the benefits of contact with acquaintances by proposing that being alone with others, i.e., being around others without verbally interacting with them, could be an alternative way of satisfying the need for social contact and improving positive affect. In an experience sampling study (N = 453), being alone with others was associated with positive (PA) and negative affect (NA) similar to, and a lower sense of belonging than, being completely alone. Additional results supported existing research associating the best affective outcomes with interactions with close others, and higher positive affect after talking to acquaintances than after not talking to them (Sandstrom & Dunn, 2014a). A second study was designed to test: whether merely sharing a space with others produces a higher sense of belonging; whether this belongingness could explain better outcomes of being alone with others compared to being alone; and whether effects depend on performing the same task as others. Participants (N = 265) were randomly assigned to watch a pleasant video: alone, together with a confederate, or alone while a confederate was doing something else. I found no differences in the amplification of PA and sense of belonging, or in the reduction of NA, between the social conditions; however, these outcomes were also not different in the alone condition. Sharing a space with others, regardless of simultaneously performing a task together, did not lead to better outcomes than being alone. Trait introversion-extraversion was also explored, and two main trends were found in both studies: extraverts reported higher PA and sense of belonging than introverts in all situations, and introverts and extraverts reported similar amplifications of affective states across the social and experimental conditions. Overall, both studies revealed that being alone with others was worse for people’s affective outcomes and sense of belonging than being completely alone, contrary to hypotheses.



Not surprisingly, humans receive the most benefit from interactions with others who are familiar to them, such as family members or close friends (Mehl et al., 2010; Reis et al., 2000; Vittengl & Holt, 1998; Wheeler et al., 1983). Recent studies by Sandstrom and Dunn (2014a, 2014b) indicated that engaging in weak-tie interactions, i.e., interactions with people with whom we do not share a close or intimate connection, can lead to positive outcomes. In one of their studies, people at a coffee shop were asked to engage in small talk, smiling, and eye contact with the barista, while others were asked to make their visit as efficient as possible by talking only if necessary. As predicted, the more interactive group showed significantly larger improvements in momentary positive affect and sense of belonging than the efficient group (Sandstrom & Dunn, 2014a). This finding is especially significant for contemporary ultra-individualistic societies, since it shows that people can satisfy their need for human contact and increase their momentary positive affect through even minimal interactions with others who are weakly connected to them, i.e., with people they do not know well.

Irrespective of the above findings, and whether due to personality traits, psychological disorders, or the worry of breaking unwritten social rules, some people choose to be around others less frequently. For example, introverts report, on average, spending less time in social situations (Asendorpf & Wilpers, 1998; Lucas, Le, & Dyrenforth, 2008), speaking less (Mehl, Gosling, & Pennebaker, 2006), and overall enjoying solitude more than extraverted people (Burger, 1995; Long, Seburn, Averill, & More, 2003). However, recent research has shown that when introverts were asked to act extraverted, i.e., to act bold, assertive, or talkative, they experienced an increase in their momentary positive affect without any short-term negative effects of this counterdispositional behaviour (Fleeson, Malanos, & Achille, 2002; McNiel & Fleeson, 2006; McNiel, Lowman, & Fleeson, 2010; Sandstrom & Dunn, 2014a; Smillie, 2013; Wilt, Noftle, Fleeson, & Spain, 2012; Zelenski, Santoro, & Whelan, 2012; Zelenski et al., 2013). Overall, acting in more extraverted ways seems to be enjoyable to all people, regardless of their level of trait extraversion-introversion, but introverts tend to underpredict how good they would feel acting extraverted, which leads them to avoid social situations more often (Zelenski et al., 2013).

As seen so far, although in general people benefit from social contact, such contact can be hindered for various reasons (e.g., fatigue, personality), which could prevent people from experiencing the boost in positive affect associated with being around others. Hence, the purpose of this dissertation is to test a minimal form of social contact, which may be less bothersome to some people yet could still improve their positive affect and sense of belonging. Specifically, being alone with others, i.e., being around people weakly tied to us, whom we do not know well, or around total strangers, without verbally interacting with them, could provide enough social contact to increase our momentary positive affect. Studying the alone-with-others social situation is unique because the scarce existing research regarding minimal social contact and the resulting affective outcomes is predominantly characterized by the inclusion of an element of verbal interaction (e.g., Sandstrom & Dunn, 2014a, 2014b).
It is important to acknowledge that the amplification of momentary positive affect resulting from being alone with others was not expected to surpass the positive affect stemming from verbally interacting with others, especially others we love, trust, and who offer us their support. However, I wanted to test whether people who did not verbally interact with each other would still be able to experience belongingness simply by sharing a physical space and being close to others, and whether this alternative way of satisfying the need for social contact would also improve their positive affect. Said another way, does being physically near others feel better than being alone?

Contrary to the hypotheses derived from extant literature, atheists, non-religious, and religious participants did not significantly differ on measures of cardiovascular reactivity or recovery

Comparing Atheist, Non-Religious, And Religious Peoples' Cardiovascular Reactivity: A Laboratory Stressor. Rolf A. Ritchie. PhD Thesis, Dec 2019. https://etd.ohiolink.edu/!etd.send_file?accession=bgsu15730518157556&disposition=inline

Abstract: Atheists and the non-religious have historically been excluded from cardiovascular research assessing the relation between religion and reactivity. Researchers have suggested that atheists and the non-religious ought to have increased cardiovascular reactivity and decreased recovery following a stressor. The primary theoretical justifications for this hypothesized difference are that atheists and the non-religious lack religious coping resources or that they are exposed to minority stress. However, few previous studies have incorporated atheists, had adequate methodology to explore this relation, or used measures designed to appropriately categorize atheist/non-religious participants. In order to explore this relation, 61 participants were recruited and, using the Non-Religious Non-Spiritual Scale, separated into three groups: atheist, non-religious, or religious. Participants were then exposed to a social stressor to elicit cardiovascular reactivity. Heart rate, high-frequency heart rate variability, and blood pressure were recorded during the experimental procedure. Results indicated that contrary to the hypotheses derived from extant literature, atheists, non-religious, and religious participants did not significantly differ on measures of cardiovascular reactivity or recovery.

21% of the pedestrians in an urban setting in Belgium violate traffic lights; presence of push buttons and worn off zebra markings increases the frequency of violations

Non-compliance with pedestrian traffic lights in Belgian cities. Kevin Diependaele. Transportation Research Part F: Traffic Psychology and Behaviour, Volume 67, November 2019, Pages 230-241. https://doi.org/10.1016/j.trf.2016.11.017

Highlights
• 21% of the pedestrians in an urban setting in Belgium violate traffic lights.
• There is large variability; percentages below 15% and above 30% are no exception.
• Higher traffic volume and complexity reduce the frequency of red-light running.
• Gap acceptance theory can account for the effect of traffic volume and complexity.
• Push buttons and worn off zebra markings increase the frequency of violations.
• Auxiliary signals, either visual or auditory, have a lowering effect on violations.


Abstract: The frequency of red light running was investigated across the nine most populated cities in Belgium. The results show that approximately 21% of pedestrians violate the lights. There is, however, large variability in the frequency of violations depending on the specific context. Traffic volumes, motorized as well as pedestrian, and situational characteristics that are generally associated with higher traffic complexity (rush hours, number of driving directions, number of lanes per driving direction, and the presence of a tram or bus lane) have a lowering effect. A number of technical characteristics of the pedestrian crossing were also found to exert a significant influence: push buttons and worn off zebra markings increase the frequency of violations. On the other hand, auxiliary signals, either visual or auditory, have a lowering effect on violations.

Keywords: Pedestrians; Red light running; Belgium


5.5. Push buttons
Fig. 4 illustrates the effects of situational characteristics which are not clearly associated with motorized traffic volumes and apply to the technical design of the pedestrian crossing. The first effect of this kind concerns push buttons: when push buttons are present, we see a significantly higher degree of red light violations by pedestrians. One could argue that this is due to the fact that these locations are associated with a lower overall pedestrian volume (see top left panel in Fig. 4). With fewer pedestrians passing, the chance of arriving during a red phase will on average be higher because a green phase only occurs when pedestrians make a request. Red light violations may thus be observed more frequently than in the absence of push buttons without any inherent effect of push buttons on the willingness to commit red light violations among pedestrians. The above explanation nevertheless also predicts a clear effect on the phase frequency, i.e., a reduced number of phases per time unit for crossings equipped with push buttons. Such an effect is not evident in the data. In the light of this, it is important to consider the alternative explanation that in many cases, the presence and functional characteristics of push buttons are not transparent enough for pedestrians. In Belgium, several different designs of push buttons exist with heterogeneous functional characteristics. Waiting times after requests are generally not transparent. The lack of transparent information about waiting times has indeed been shown to exert a strong negative influence on safe crossing behaviour by pedestrians (e.g., Eccles et al., 2007; Markowitz et al., 2006; Schlabbach, 2010).
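The probabilistic core of this argument can be sketched with a toy simulation (all rates and timings below are hypothetical illustrations, not values from the study): if the signal rests on red and shows green only after a request, then at a sparsely used crossing almost every pedestrian arrives during red, whereas at a busy crossing many pedestrians arrive during a green that someone else has already requested.

```python
import random

def red_arrival_fraction(arrival_rate, green_s=15.0, sim_s=10_000.0, seed=1):
    """Toy push-button crossing: the light rests on red; a pedestrian who
    arrives on red presses the button and (simplifying away the waiting
    time) triggers a green phase lasting `green_s` seconds. Returns the
    fraction of arriving pedestrians who find the light red."""
    rng = random.Random(seed)
    t, green_until = 0.0, -1.0
    arrivals = red_arrivals = 0
    while t < sim_s:
        t += rng.expovariate(arrival_rate)  # Poisson pedestrian arrivals
        arrivals += 1
        if t >= green_until:                # arrived during a red phase
            red_arrivals += 1
            green_until = t + green_s       # button press starts a green
    return red_arrivals / arrivals

# Sparse crossing (one pedestrian per ~100 s): most arrivals meet a red.
print(red_arrival_fraction(0.01))
# Busy crossing (one pedestrian per ~2 s): many pedestrians ride along
# on a green that an earlier pedestrian requested.
print(red_arrival_fraction(0.5))
```

Note that in this toy model low pedestrian volume also produces far fewer green phases per hour, which is exactly the phase-frequency signature the text says should follow from this explanation but was not evident in the data; that absence is what motivates the alternative, transparency-based explanation.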


5.7. Visibility of zebra markings

The last effect concerns the visibility of the zebra marking. It appears that pedestrians are more inclined to commit red light violations when zebra markings are in bad condition (i.e., the paint is worn off; see Fig. 5). This effect is intriguing, as it cannot be linked to marked differences in pedestrian and/or vehicle traffic volumes which could explain the degree of wear and tear. An interesting hypothesis is that the effect illustrates the interaction of physical spaces and social norms. Keizer, Lindenberg, and Steg (2008), for instance, provided several demonstrations of so-called “spreading of disorder” phenomena.
The idea is related to the so-called “broken windows theory” in criminology (Kelling & Wilson, 1982) and entails that public spaces that are unorganized and show traces of decay and criminal activity facilitate illegal and anti-social behaviour. The classical example is that people are more inclined to litter in a poorly maintained public space. Keizer et al. argue that spreading of disorder can also translate itself into very subtle phenomena such as traffic rule violations. In the present context, it was certainly not the case that poorly visible zebra markings were always associated with a generally ill-maintained area. More specific underlying dynamics could be that pedestrians associate poor investment in traffic infrastructure with low levels of police enforcement or even low risk levels.

Relative to themselves, people believe that an identically paying other will get more enjoyment from the same experience, but an identically enjoying other will pay more for the same experience

Overestimating the valuations and preferences of others. Jung, Minah H.; Moon, Alice; Nelson, Leif D. Journal of Experimental Psychology: General. Nov 2019. https://psycnet.apa.org/record/2019-69146-001

Abstract: People often make judgments about their own and others’ valuations and preferences. Across 12 studies (N = 17,594), we find a robust bias in these judgments such that people overestimate the valuations and preferences of others. This overestimation arises because, when making predictions about others, people rely on their intuitive core representation of the experience (e.g., is the experience generally positive?) in lieu of a more complex representation that might also include countervailing aspects (e.g., is any of the experience negative?). We first demonstrate that the overestimation bias is pervasive for a wide range of positive (Studies 1–5) and negative experiences (Study 6). Furthermore, the bias is not merely an artifact of how preferences are measured (Study 7). Consistent with judgments based on core representations, the bias significantly reduces when the core representation is uniformly positive (Studies 8A–8B). Such judgments lead to a paradox in how people see others trade off between valuation and utility (Studies 9A–9B). Specifically, relative to themselves, people believe that an identically paying other will get more enjoyment from the same experience, but paradoxically, that an identically enjoying other will pay more for the same experience. Finally, consistent with a core representation explanation, explicitly prompting people to consider the entire distribution of others’ preferences significantly reduced or eliminated the bias (Study 10). These findings suggest that social judgments of others’ preferences are not only largely biased, but they also ignore how others make trade-offs between evaluative metrics.

General Discussion

People are sometimes called upon to assess the preferences of others, assessments which
we find to be prone to persistent biases. Across several studies, we find that across various
measures of valuation and utility (i.e., willingness to pay [WTP], enjoyment, and willingness to wait), people believe
that others have more intense experiences than they themselves do (Studies 1-8). We propose
that this overestimation of others stems from a narrow focus on the primary dimension of the
option being evaluated (e.g., a trip to Rio De Janeiro is generally thought to be positive, shaving
your head is generally thought to be negative). But this only involves estimations of others. Self-assessments are further informed by the subtle vagaries of personal preferences, reducing the
total preference intensity (e.g., Rio De Janeiro is encumbered by its hard-to-pronounce local
language, and a shaved head is buoyed by the opportunity it affords for a novel scalp tattoo).
Thus, personal evaluations are more moderate than are the estimates of the evaluations of others.
This intuition is strong enough that it is applied even when the target of comparison is
explicitly similar to the self (Studies 9A-9B). When asked to evaluate someone with an identical
WTP, people think that person will anticipate more enjoyment; and when evaluating someone
with identical anticipated enjoyment, people think that person will have a higher WTP. In
combination, people can demonstrate the paradoxical belief that others are willing to pay more
for the same level of enjoyment (when asked about someone identical in enjoyment) or that
others are willing to pay less for the same level of enjoyment (when asked about someone
identical in WTP). Finally, explicitly prompting people to think about the full distribution of
others’ possible valuations significantly interrupted the intuitive process of overestimation based
on the core representation of objects being considered (Study 10).

Relation to Previous Research

Why do people have such persistent judgmental errors when estimating the evaluations of
others? After all, people are not blind to the evaluations of others. People frequently observe the
choices of others, and at least occasionally, are told something about the preferences which led to
those choices. Research indicates that judgments about values and preferences are often
inherently automatic (Chaiken & Trope, 1999; Kahneman & Frederick, 2002; Kahneman, 2003;
Simmons & Nelson, 2006, 2018; Sloman, 1996). Understanding the trade-offs between
evaluative metrics (e.g., a longer wait versus a higher price), however, is more complicated (e.g.,
Tversky, Sattath, & Slovic, 1988). When reporting their own evaluations, people have the
benefits of each metric being accessible and generally reliable, and consequently, evaluative
trade-offs are more likely to be consistent. When predicting the evaluations of others, on the
other hand, people do not have the same basis of knowledge. Without knowledge of how other
people trade off between evaluative metrics, people appear to ignore them altogether.
Consequently, they use intuitive but incomplete heuristics that people experience things more
intensely, which can be misapplied in the case of similar others (i.e., those who would like a
good as much as they would or would pay as much for a good as they would).
Previous research in judgment and decision-making documents abundant evidence that
people do not always hold stable preferences but construct them on the spot when they are
making decisions (Bettman, Luce, & Payne, 1998; Fischhoff, 2013; Payne, Bettman, & Johnson,
1992; Slovic, 1995). If preferences are partially constructed for the self, they might be entirely
constructed when judging others. Studies 9A-9B demonstrate that while people’s own valuation
of a good remains stable, their beliefs about others’ valuation of the same good reverse
depending on how they are asked about others’ valuation. More specifically, people believed that
others derived simultaneously more and less utility from the same goods than they did.
The overestimation bias we document also offers a new approach to understanding the
endowment effect and why selling prices tend to exceed buying prices. Previous research has
largely focused on a “pain-of-losing” account for this phenomenon, which proposes that people
feel significantly more pain when selling their good than others feel when acquiring the same
good (Kahneman & Tversky, 1979; Thaler, 1980; Tversky & Kahneman, 1991). Another
explanation more recently put forth by Weaver and Frederick (2012) hypothesizes that instead
sellers and buyers use different reference prices. Sellers typically focus on market prices in
determining their selling price, whereas buyers typically focus on their own valuation. Because
market prices tend to be higher than people’s valuations (Kahneman, Knetsch, & Thaler, 1991)
and both parties are averse to bad deals, selling prices tend to exceed buying prices. Our
overestimation bias account suggests that in addition to these explanations, people’s expectation
that others derive more value from goods might also contribute to a discrepancy in buying and
selling prices. In particular, sellers may believe that buyers would value the good more than they
themselves would, leading them to set higher selling prices.

Alternative accounts for the overestimation bias

This paper reports 12 experiments showing the existence, robustness, and consequence of
the overestimation bias. We also conducted a handful of additional investigations to try to
understand the forces that may moderate the expression of our effects. Though these studies do
not authoritatively answer why people overestimate others’ valuation, in combination they may
provide some hints. We review two of those investigations, and report them in further detail in
the Supplemental Materials.

Others with Extreme Preferences.

Study 9A introduced the matching paradigm as a
strong tactic for controlling how people generate an exemplar when estimating the evaluation of
others. An alternative approach, we thought, might be to simply heighten the salience of some
comparison others who are more or less positive about the same stimulus. If people are
spontaneously thinking of an enthusiastic consumer, then forcing them to consider the behavior
of an unenthusiastic consumer might change their estimate. We examined this exceptional other
account in two additional studies described in detail in the Supplemental Materials.
First, in Study S8 (N = 807), we recruited people who self-identified as having extreme
preferences to investigate whether the overestimation bias would persist. Specifically, we
recruited self-identified fans of Star Wars movies and asked them to estimate either: (a) the
average Star Wars fan’s or (b) the average US person’s evaluations of a Star Wars product.
Though these Star Wars fans rationally understood that the average US person’s evaluation of a
Star Wars product would be less extreme than their own, their overestimation emerged when
considering the average Star Wars fan, assuming that the average Star Wars fan would evaluate
the product more positively than they themselves would.
Second, in Study S9 (N = 1214), we used the match paradigm in Study 9A with an
additional factor (Other). In addition to examining how people view identical others (i.e., those
matched on either enjoyment or WTP), we explored people’s estimations for either: (a) a person
who had greater preference for the product (i.e., would pay $5 more than they would for the
product [Higher WTP Other] or would enjoy the product 5 units more than they would [Higher
Enjoyment Other]), or (b) a person who had lesser preference for the product (i.e., would pay $5
less than they would for the product [Lower WTP Other] or would enjoy the product 5 units less
than they would [Lower Enjoyment Other]). By our reasoning, it is possible that explicitly
considering a less enthusiastic consumer would disrupt people’s intuitions for their preferences,
thereby eliminating overestimation. We first replicated the paradoxical results of Study 9A when
people considered identical others: People assumed both that those matched on enjoyment would
pay more for the product than they would, but also that those matched on WTP would enjoy the
product more than they would. But importantly, people asked to consider lower enjoyment others
(i.e., those who would enjoy the product 5 units less than they would) rationally assumed that
those others would pay less for the product than they would, and people asked to consider lower
WTP others (i.e., those who would pay $5 less than they would for the product) rationally
assumed that those others would enjoy the product less than they would.
Together, the results from these two supplemental studies bolster our finding in Studies
9A-9B that the bias cannot be fully explained by the salience of others with extreme preferences
or the extremity of one’s own preferences. When people explicitly consider others who are less
positive towards a product, people display rational responses. However, when considering
average others or those who should have similar preferences, the overestimation bias persists.
Although people often inaccurately predict others’ preferences, they are more likely to be
accurate about the relative difference between their own and others’ preferences of certain
experiences. For instance, if a parent were asked how much they and others would like their
child’s drawing, they would no doubt recognize that their liking of the drawing would be greater
than that of others. Or if people are explicitly told that someone likes a product less than they
themselves do, this also appears to disrupt the reliance on intuitive core representations. The
results in Study 10 are consistent with this logic: People can more accurately predict others’
preferences when they are explicitly prompted to consider others whose preferences are not
consistent with the core representation of a stimulus. Therefore, people are capable of
understanding others’ preferences, but they do not spontaneously consider and integrate the
entire distribution of possible preferences unless they are explicitly compelled to do so.
The combination of the above points does highlight an interesting parallel account; one
that we can articulate with some clarity, but one that our present findings can neither perfectly
rule out, nor perfectly rule in.10 In our theorizing, the prospective visitor to Rio De Janeiro
forecasts a positive experience encumbered by a small number of idiosyncratic negative
experiences. That person, when judging others, starts with the core representation of the
experience (i.e., that a visit to Rio De Janeiro is enjoyable), and that confidence in that initial
intuition means that they do not adjust from there. Accordingly, whereas personal assessments of
Rio De Janeiro are somewhat middling, others are perceived to be more positive. The alternative
account focuses not on the mixture of experience within an individual that moves a high rating to
a lower rating, but rather the mixture of experiences across people that produces many positive
evaluations, but also some idiosyncratically individually low ratings for generally positive
stimuli. Consider again the person evaluating the trip to Rio De Janeiro. On average, that person
is probably positive (say, an 85 on a 101-point scale), but some people might be quite negative
(perhaps they are actively avoiding irritating in-laws back in Brazil), and give an extremely low
evaluation of the potential visit. The average person and the negative person both have equal
weight in the overall true average, but they might not have equal weight in how people form their
perceptions of the average other. In essence, it may be the case that people are fully capable of
integrating both the core representation of a prospect with the more unusual negative features;
they accurately recognize that most people think that Ipanema beach is beautiful, and they
accurately recognize that most people are nevertheless bothered by the risk for potential theft,
but they fail to capture that for some people the latter factor is so significant that it overwhelms
the former. That is, they accurately perceive the experiences of others, but they do not consider
all of those experiences when estimating the average experience.
There is merit to this account. First, even for very positive stimuli, there are always a
number of participants whose valuations are quite low. Consider, just as an example, the density
plot for WTP for a movie ticket from Study 1. The mean is not low, but the distribution is hardly
normal, and a sizable fraction of participants (26.4%) say that they are willing to pay $0. Perhaps
it is exactly that segment of the population that people are failing to identify when constructing
their averages. The distributions generated by participants in Study 10 partially challenge the
extreme version of that possibility. Participants generated very accurate representations of other
people’s WTP (albeit less accurate for other people’s enjoyment), but still showed the overall
bias. Still, it may be the case that people are capable of bringing the full and accurate
representation of the distribution to mind, but they do not do so unless prompted.
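The arithmetic behind this screening account is easy to see with a toy distribution (all numbers below are hypothetical; only the roughly 26% zero-WTP share echoes the Study 1 figure quoted above): dropping the zero-valuation respondents from the "mental sample" inflates the estimated average well above the true one.

```python
import statistics

# 100 hypothetical respondents' WTP for a movie ticket: a generally
# positive stimulus, but ~26% of respondents value it at $0.
wtp = [0] * 26 + [12] * 50 + [15] * 24

true_mean = statistics.mean(wtp)           # the actual average other

# A "mental sample" that screens out the uncharacteristically negative
# ($0) respondents, as the alternative account suggests people do:
screened_mean = statistics.mean([x for x in wtp if x > 0])

print(f"true mean WTP:     ${true_mean:.2f}")      # $9.60
print(f"screened mean WTP: ${screened_mean:.2f}")  # $12.97
```

On this account, the bias is a sampling failure rather than a representational one: each individual valuation is perceived accurately, but the low tail never makes it into the estimated average.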
For now, we remain agnostic. It could be the case that the core representation of a
positive product or experience creates an intuition that biases predictions about everyone
upward, or it may be the case that it creates a mental sample biased by screening out those
people who are uncharacteristically negative. Some of our data seems more consistent with one account,
but none of it is so consistent as to eliminate the possibility of the other. We think that future
research can hopefully untangle those (and we hope that we are the researchers who do so).