Monday, January 13, 2020

The better people are educated, the less positive their other-perceptions are; a potential explanation could be that a good education goes along with a sense of self-importance & haughtiness

Seeing the Best or Worst in Others: A Measure of Generalized Other-Perceptions. Richard Rau, Wiebke Nestler, Michael Dufner, Steffen Nestler. Assessment (to be published), January 2020. DOI: 10.13140/RG.2.2.16925.87521

Abstract: How positively or negatively people generally view others is key for understanding personality, social behavior, and psychopathology. Previous research has measured generalized other-perceptions by relying on either explicit self-reports or judgments made in group settings. With the current research, we overcome the limitations of these past approaches by introducing a novel measurement instrument for generalized other-perceptions: the Online-Tool for Assessing Perceiver Effects (O-TAPE). By assessing perceivers’ first impressions of a standardized set of target people displayed in social network profiles or short video sequences, the O-TAPE captures individual differences in the positivity of other-perceptions. In Study 1 (n = 219), the instrument demonstrated good psychometric properties and correlations with related constructs. Study 2 (n = 142) replicated these findings and also showed that the O-TAPE predicted other-perceptions in a naturalistic group setting. Study 3 (n = 200) refined the nomological network of the construct and demonstrated that the O-TAPE is invulnerable to effects of social desirability.

Keywords: generalized other-perception, perceiver effect, interpersonal perception, person judgment, positivity bias


General Discussion

In the current research, we introduced the O-TAPE, a measurement tool that
objectively and reliably captures individual differences in the positivity of generalized other-perceptions. We developed two versions of the tool that use different types of stimuli:
screenshots of social network profiles (SoN-TAPE) and short video sequences (ViS-TAPE).
In both versions, perceivers differed considerably in how they judged a standardized set of
individuals, and these perceiver differences could be aggregated into a score with excellent
internal consistency, reflecting the positivity in judgments made across different targets and
traits. Further, in Studies 1 and 2 the two instruments demonstrated good convergent validity
and showed remarkable retest reliability when we administered parallel forms in a time
interval of one to three weeks. Moreover, the O-TAPE was able to predict generalized
other-perceptions in a real-life context in Study 2.

The nomological network analyses established robust convergent and divergent
relationships with a number of individual difference constructs across two heterogeneous
online samples (Studies 1 and 3) and a student sample (Study 2). Most notably, more positive
generalized other-perceptions were associated with several interpersonally relevant
personality characteristics such as high communion, high agreeableness, high honesty-humility, low dispositional contempt, and (albeit less consistently) low narcissistic rivalry. This
suggests that generally positive vs. negative views of others underlie many personality traits
tapping differences on the continuum from communal/prosocial to antagonistic/antisocial.
Further, some demographic variables (i.e., gender and education) were associated with
generalized other-perceptions. The gender effect converges with previous research reporting
more positive generalized other-perceptions among women (Srivastava et al., 2010; Winquist
et al., 1998; Wood et al., 2010) and indicates that women might be more mellow in their
social judgments. The education effect indicates that the better people are educated, the less
positive their other-perceptions are. A potential explanation could be that a good education
goes along with a sense of self-importance and haughtiness, but this explanation is speculative
and might be addressed in future research. Other characteristics such as openness to
experience, conscientiousness, height, explicit anthropological beliefs, and psychological
adjustment were not or were not consistently linked to generalized other-perceptions. Finally,
the O-TAPE predicted how positively or negatively students viewed their future classmates
when they met them for the first time on a welcoming day at their university in Study 2. This
suggests that the O-TAPE captures generalized other-perceptions in an ecologically valid
way. Importantly, we ruled out the possibility that the correlations were driven by general
scale-use bias. Thus, the results are specifically informative about the positivity of
generalized other-perceptions rather than about a global tendency to give positive or
negative evaluations on rating scales in general. Finally, we also demonstrated that O-TAPE
scores are unaffected by differences in socially desirable responding.

Applications and Adaptations of the O-TAPE

Which version of the O-TAPE should be applied? The results of Studies 1 and 2 suggest
that neither version should be preferred on the basis of psychometric properties. However, it
might be wise to use the ViS-TAPE rather than the SoN-TAPE when studying populations
that are not familiar with online social networks (e.g., elderly people). Yet, in most other
contexts, it might be advisable to use the SoN-TAPE rather than the ViS-TAPE for pragmatic
reasons. Specifically, the material of the SoN-TAPE can be adjusted for other languages and
the technical implementation of images into online survey platforms is usually easier than the
implementation of videos. For these reasons, we only administered the SoN-TAPE in Study 3.
There, completing the measure took most participants between five and eight minutes
(interquartile range), suggesting that researchers can draw on it even when there are time
constraints.

Moreover, the clear unidimensional factor structure of trait perceiver effects and the
high internal consistency of positivity scores suggest that it would not be problematic to
reduce the number of traits rated per target in future studies in order to obtain an even shorter
instrument that still allows for a highly reliable and valid measurement of generalized other-perceptions. For this purpose, assessing impressions of at least five (sufficiently evaluative)
traits should be adequate. In order to establish unidimensionality, sample sizes should be 200
or larger. At the same time, we advise against reducing the number of rated targets given that
large target heterogeneity is crucial to warrant the generality of the measured construct.
Importantly also, Study 3 emphasized that solely assessing people’s perceptions of “a typical
person” without providing actual target stimuli does not do the job.

Researchers who are interested in applying and adapting the O-TAPE are referred to
the Open Science Framework (https://osf.io/6wuf8/). There, we provide all materials
necessary to apply the O-TAPE as well as templates and instructions for adjusting the social
network stimuli for use in non-German-speaking countries.

Finally, results of Studies 1 and 2 showed that generalized other-perceptions as
measured with the O-TAPE exhibit moderate yet significant overlap with scale-use bias as
measured with the ORT. However, this did not substantially affect the validity correlations in
the present work because most validation measures were themselves relatively insusceptible
to individual differences in scale-use. We thus refrained from including the ORT in Study 3.
Nevertheless, researchers might consider complementing the O-TAPE with the ORT when
they have a specific reason to suspect that scale-use bias might impair the validity of their
results.

How should O-TAPE raw data be aggregated to obtain a scale score for each
participant? In most cases of substantive research, it will not be necessary to run random
effects models and report ICCs. As long as the only goal is to capture generalized other-perceptions, it is justified to treat the ten target stimuli as if they were items in a questionnaire
without examining how much of the overall variance is due to differences in participants vs.
differences in “item difficulties” (i.e., targets). It is also warranted to treat the rating
dimensions as if they were subscales of a questionnaire in which subscale scores can be
averaged to index an overall construct (i.e., positivity). Reporting Cronbach’s coefficient
alpha across these “subscales” serves as a straightforward (and conservative) estimate of the
positivity score’s reliability.
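The aggregation scheme described above can be sketched in a few lines. This is an illustrative sketch only: the data shapes, variable names, and simulated ratings below are assumptions for demonstration (ten targets and five traits, matching the numbers discussed in the text), not part of the published O-TAPE materials.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across participants
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical raw data: ratings[perceiver, target, trait]
rng = np.random.default_rng(0)
n_perceivers, n_targets, n_traits = 150, 10, 5
ratings = (rng.normal(size=(n_perceivers, n_targets, n_traits))
           + rng.normal(size=(n_perceivers, 1, 1)))   # shared perceiver effect

# Treat each trait as a "subscale": average across the ten targets
subscale_scores = ratings.mean(axis=1)      # shape: (perceivers, traits)

# Positivity score: average of the trait subscale scores
positivity = subscale_scores.mean(axis=1)   # one score per perceiver

# Reliability estimate across the trait "subscales"
alpha = cronbach_alpha(subscale_scores)
print(f"alpha across trait subscales: {alpha:.2f}")
```

Treating targets as questionnaire items and traits as subscales, as the text suggests, reduces the analysis to standard classical-test-theory tooling without fitting random effects models.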
