Saturday, December 7, 2019

High-achieving boys, to avoid bullying, use strategies to maintain an image of masculinity, for example becoming bullies themselves, disrupting the lessons, or devaluing girls’ achievements

Being bullied at school: the case of high-achieving boys. Sebastian Bergold et al. Social Psychology of Education, December 7 2019. https://link.springer.com/article/10.1007/s11218-019-09539-w

Abstract: Bullying victimization has been shown to negatively impact academic achievement. However, under certain circumstances, levels of academic achievement might also be a cause of bullying victimization. Previous research has shown that at least in Western countries, high school engagement is connoted by students as un-masculine. Therefore, high school engagement and achievement in school violate boys’, but not girls’, peer-group norm. This might put high-achieving boys at higher risk of bullying victimization as compared to high-achieving girls. The present study investigated boys’ and girls’ risk of bullying victimization, depending on different achievement levels. To this end, representative data of N = 3928 German fourth grade students were analyzed. Results showed that boys among the top-performers and also boys among the worst performers had a markedly higher risk of being bullied than girls showing the same achievement, whereas there were no such risk differences between genders in the average achievement groups. The relation between academic achievement and bullying victimization, features with regard to gender, and directions for future research are discussed.

Keywords: Bullying; Peer victimization; Academic achievement; Gender differences; Gender roles; Elementary school

4 Discussion

Showing high engagement in school as a boy violates the peer-group norm,
whereas doing so as a girl does not. Violating the peer-group norm, in turn, is
sanctioned by classmates. As school engagement is an important determinant of
academic achievement, we investigated whether boys showing exceptionally high
academic achievement would be at higher risk of bullying victimization than girls
with exceptionally high academic achievement. We drew on representative data of
fourth graders from the combined TIMSS and PIRLS 2011 assessments conducted
in Germany.

4.1 General relation between academic achievement and bullying victimization
In accordance with previous research on the relation between academic achievement
and bullying victimization (e.g., Nakamoto and Schwartz 2010), we found that there
was an overall negative relation between the two variables. The higher the level of
academic achievement, the lower was self-reported bullying victimization. Bullying
victimization was lowest in the profile with students exhibiting the highest achievement
level, which is in line with previous studies showing that high-performing or
gifted students in general are somewhat less often bullied than average students
(Estell et al. 2009; Peters and Bain 2011). This pattern is also in accordance with
studies investigating the social integration of gifted students (which, on average,
show markedly higher achievement than students with average ability; e.g., Rost
and Hanses 1997; Wirthwein et al. 2019). For example, gifted students of elementary
school age as well as adolescents were found to be well-integrated into their
classes: They seemed to be even somewhat more popular among their classmates
and somewhat less rejected than students with average ability (e.g., Czeschlik and
Rost 1995; Rost 2009).
Whereas this is an encouraging result for high-performing and gifted students as
a whole, it is a worrying finding for low-performing students. It became apparent
that the frequency of victimization was alarming for students in the profiles with
low achievement: A quarter of the Profile 1 students reported being bullied weekly,
and another 45% of these students reported being bullied once or twice a month.
Altogether, well over two-thirds of these students were victimized to a
non-trivial extent. Of course, due to the cross-sectional nature of our data, we cannot draw any conclusions about the causal direction of this relation. Drawing on
previous research, it can be regarded as certain that bullying victimization impedes
academic achievement, probably through many different pathways (e.g., Buhs et al.
2006; Juvonen et al. 2000, 2011; Ladd et al. 1997, 2017; Schwartz et al. 2005).
However, there might additionally be an effect in the other direction. Very poor
achievement might also predispose students to being victimized by classmates. This
would accord with Olweus’ (1978) assumption that not only students with extremely
high, but also students with extremely low achievement might be at higher risk of
being bullied. This would also be in line with effects found in vocational contexts,
according to which not only high, but also low performers are victimized more
often than average performers (Jensen et al. 2014). The study by Ladd et al.
(2017) (see Sect. 1) might also point in this direction, because most of the different profiles of victimization trajectories in this study differed in academic achievement
from the outset. If this causal direction should indeed prove true, anti-bullying programs should pay greater attention to low performance as a risk factor for bullying
victimization.

4.2 Bullying victimization by academic achievement and gender
Although there was a clear negative relation between bullying victimization and academic achievement when considering the entire sample regardless of gender, taking
gender into consideration provided more nuanced results. Consistent with our main
hypothesis, we found that in the profile of students with extremely high achievement, boys had a markedly higher risk of being bullied than girls: Boys’ risk of
being bullied weekly was more than twice as large as girls’; and boys’ risk of being
bullied once or twice a month was increased by over 40% as compared with girls’.
Importantly, this was not the case in the profiles in the middle of the achievement
spectrum, showing that this finding was specific to the groups of extremely high (and
low, see below) achievers. Although we cannot draw causal conclusions from this
finding either, it is at least consistent with the hypothesis that highly engaged and
therefore high-achieving boys (but not girls) violate the peer-group norm by showing high academic achievement and are therefore more prone to victimization than
girls engaging in, and excelling at, school. Importantly, as the students excelling
in one domain (e.g., reading) and the students excelling in the other domains (e.g.,
mathematics) were the same individuals, this finding did not differ across domains
stereotypically denoted as “male” or “female”.
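
To unpack the relative-risk phrasing above with purely hypothetical numbers (not figures reported in the paper): if, say, 11% of boys and 5% of girls in the top-achievement profile reported weekly bullying, the corresponding risk ratio would be

\[
RR = \frac{0.11}{0.05} = 2.2,
\]

i.e. a risk "more than twice as large"; a ratio of about 1.4 would correspond to a risk "increased by over 40%".
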
Of course, one might argue that it is not achievement (or engagement)
itself that increases high-achieving boys’ risk of victimization. Instead, high-achieving boys could show other, specific behaviors or attitudes that increase their
risk. This would be consistent with the finding that gifted boys, but not so much
gifted girls, are seen by their teachers as being more maladjusted (Preckel et al.
2015), and maladjustment might easily make them victims of bullying (Eriksen
et al. 2014; Reijntjes et al. 2010, 2011; Schwartz et al. 1993). However, studies have
shown that those stereotypes do not match reality: Gifted students, whether
boys or girls, do not show worse adjustment in any regard (e.g., Bergold
et al. 2015; Francis et al. 2016; Rost 2009). Therefore, this alternative explanation
appears unlikely.
Our finding has important practical implications: After being bullied
because of their high engagement and achievement, boys might reduce their school
engagement and their academic achievement
in order to get themselves out of the firing line. Renold (2001) has also documented
further strategies of high-achieving boys to maintain masculinity, for example
becoming bullies themselves, disrupting the lessons, or devaluing girls’ achievements. 
All these avoidance strategies come at a price too high for both the individual
student(s) and society in the long run. To avoid these undesirable consequences,
several interventions could be implemented. One problem surely is that victimized
students—and especially boys—often do not seek help from others, for example
from their teacher or from their parents (e.g., Hunter et al. 2004). The psychological
costs of help-seeking are often perceived as too high, comprising the fear of
(further) disapproval by classmates (a fear especially present in boys), feelings
of weakness, and feelings of a lack of autonomy (not being able to solve
the problem on one’s own) (Boulton et al. 2017). One possibility to help victimized
students (especially boys) would be to encourage them to confide in their teachers or
their parents. This can be helpful, yet the effect heavily relies on the adult’s reaction
and on the specific situation (Bauman et al. 2016). Especially for high-achieving
students, telling the teacher about victimization could sometimes be problematic
because some high-achieving students might already be perceived by their 
classmates as the “teacher’s pet” (Babad 1995; Tal and Babad 1990; Trusz 2017). Telling 
the teacher about bullying and disclosing the perpetrator(s) might then possibly
even worsen the situation. Therefore, intervention strategies could additionally start
at other points. One option would be to change the peer-group norm for boys. Interventions 
could aim at a masculinization of academic achievement and engagement
in school. For example, the learning strategy of memorizing new material is more
often used by girls than by boys (e.g., Artelt et al. 2010; Heyder and Kessels 2016).
However, Heyder and Kessels (2016) showed that labeling memorizing with a 
stereotypically masculine designation (“training consequently” vs. “memorizing diligently”)
increased boys’ choice of the memorizing strategy (whereas there was no
effect on girls’ choices). This could be a promising approach to make school
engagement seem more acceptable to boys and, thereby, to destigmatize boys who show
high levels of school engagement, which could in turn decrease their victimization.
Likewise, high academic achievement might be made more acceptable to boys by
labeling it as a result of competition, which is perceived as a stereotypically male
domain (e.g., Niederle and Vesterlund 2011). However, it would be important here
to define competition in an intra-individual sense rather than in an inter-individual
sense, since competition between classmates would likely trigger average students’
upward comparisons, making negative reactions to the high-achieving students possibly 
even more likely (Di Stasio et al. 2016; Festinger 1954). Rather, instruction 
should stimulate intra-individual comparisons, inspiring boys to compete 
with themselves to achieve better and better, with high(er) academic 
achievement as a kind of trophy finally gained.
Another interesting finding, which we had not predicted, however, was that not
only high-, but also low-achieving boys showed a greater risk of bullying
victimization than their female counterparts. Whereas the risk difference
is readily explained for the high-achieving
students (violation of peer-group norm by showing high
engagement and achievement), it appears harder to explain it for the low-achieving
students, because displaying poor achievement (and engagement) is not inconsistent
with the male gender role. Perhaps boys’ academic achievement suffers more from
victimization than girls’. Another explanation would be that low-achieving
boys in particular, rather than low-achieving girls or average- and high-performing boys, react
more aggressively to victimization (aggression and cognitive ability are negatively
related; e.g., Duran-Bonavila et al. 2017), which might in turn evoke negative reactions 
from the classmates, reinforce the perpetrator(s), and thus increase 
victimization further (Salmivalli et al. 1996; 
Sokol et al. 2015). However, as we cannot test
this hypothesis on the basis of our data, this could be a subject of future studies.

Data from reproductive suppression in humans support the argument that populations subjected to environments dangerous for children yield birth cohorts that exhibit great longevity

Reproductive suppression and longevity in human birth cohorts. Katherine B. Saxton, Alison Gemmill, Joan A. Casey, Holly Elser, Deborah Karasek, Ralph Catalano. American Journal of Human Biology, December 6 2019. https://doi.org/10.1002/ajhb.23353

Abstract
Objectives: Reproductive suppression refers to, among other phenomena, the termination of pregnancies in populations exposed to signals of death among young conspecifics. Extending the logic of reproductive suppression to humans has implications for health, including that populations exposed to it should exhibit relatively great longevity. No research, however, has tested this prediction.

Methods: We apply time-series methods to vital statistics from Sweden for the years 1751 through 1800 to test if birth cohorts exposed in utero to reproductive suppression exhibited lifespans different from expected. We use the odds of death among Swedes aged 1 to 9 years to gauge exposure. As the dependent variable, we use cohort life expectancy. Our methods ensure autocorrelation cannot spuriously induce associations nor reduce the efficiency of our estimates.

Results: Our findings imply that reproductive suppression increased the lifespan of 24 annual birth cohorts by at least 1.3 years over the 50‐year test period, and that 12 of those cohorts exhibited increases of at least 1.7 years above expected.

Conclusions: The best available data in which to search for evidence of reproductive suppression in humans support the argument that populations subjected to environments dangerous for children yield birth cohorts that exhibit unexpectedly great longevity.
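
The Methods paragraph above describes the general logic: strip shared temporal structure (trend, autocorrelation) from the exposure and outcome series, then test whether surprises in one predict surprises in the other. A minimal, hypothetical sketch of that logic in Python; the data, the ARIMA order, and the regression below are illustrative assumptions, not the authors' code or analysis:

```python
# Hypothetical sketch of a Box-Jenkins-style association test; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
years = pd.Index(range(1751, 1801))
child_death_odds = pd.Series(rng.normal(0.05, 0.01, len(years)), index=years)  # placeholder exposure series
cohort_e0 = pd.Series(rng.normal(35.0, 2.0, len(years)), index=years)           # placeholder cohort life expectancy

# 1. Remove autocorrelation/trend from each series so that shared temporal structure
#    cannot spuriously induce an association (ARIMA order chosen only for illustration).
exposure_resid = ARIMA(child_death_odds, order=(1, 0, 0)).fit().resid
outcome_resid = ARIMA(cohort_e0, order=(1, 0, 0)).fit().resid

# 2. Regress the "surprise" in cohort life expectancy on the "surprise" in child mortality.
fit = sm.OLS(outcome_resid, sm.add_constant(exposure_resid)).fit()
print(fit.params, fit.pvalues)
```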


We found that 5‐year‐olds, but not 3‐year‐olds, cheated significantly more often if they overheard the classmate praised for being smart

Young Children are More Likely to Cheat After Overhearing that a Classmate is Smart. Li Zhao, Lulu Chen, Wenjin Sun, Brian J. Compton, Kang Lee, Gail D. Heyman. Developmental Science, December 6 2019. https://doi.org/10.1111/desc.12930

Abstract: Research on moral socialization has largely focused on the role of direct communication and has almost completely ignored a potentially rich source of social influence: evaluative comments that children overhear. We examined for the first time whether overheard comments can shape children's moral behavior. Three‐ and 5‐year‐old children (N = 200) participated in a guessing game in which they were instructed not to cheat by peeking. We randomly assigned children to a condition in which they overheard an experimenter tell another adult that a classmate who was no longer present is smart, or to a control condition in which the overheard conversation consisted of non‐social information. We found that 5‐year‐olds, but not 3‐year‐olds, cheated significantly more often if they overheard the classmate praised for being smart. These findings show that the effects of ability praise can spread far beyond the intended recipient to influence the behavior of children who are mere observers, and they suggest that overheard evaluative comments can be an important force in shaping moral development.

Discussion

We investigated the effects of overheard evaluative comments on young children’s moral behavior. After asking participants to promise not to cheat in a guessing game, we assessed the extent to which they would break this promise across two conditions: an overheard praise condition in which  children overheard that a classmate who was no longer present is smart, or a control condition in which they overheard comments that involved non-social information. We found that the effects of overhearing ability praise differed by age: 5-year-olds cheated significantly more frequently in response to overheard ability praise than to overheard non-social information, but the 3-year-olds’ cheating rate was not sensitive to this manipulation. These results extend prior findings (Zhao et al., 2017) by showing that, at least for 5-year-olds, ability praise can promote cheating without it being conveyed to children directly. It is noteworthy that Zhao et al. (2017) found direct ability praise promoted cheating even among 3-year-olds, with 62% of 3-year-olds and 58% of 5-year-olds engaging in cheating in response to direct ability praise, as compared to 40% and 68%, respectively, in the overheard praise condition of the present study.

Why might these contexts have a differential effect for 3-year-olds but not 5-year-olds? We believe this difference may be due to the information processing demands of overhearing a multi-party communication. In the present research, the overheard communication involved three other individuals (the two adults who were speaking, and the classmate who was being praised), as compared to one other individual (the experimenter) in the prior work on direct praise. One might expect this cognitive complexity to affect 5-year-olds as well, but this does not appear to be the case. This may be because by age 5, children have the cognitive capacity to understand complex multi-party interactions, and because they have the relevant social experience to know that they can learn a great deal from overheard conversations about other people.

An alternative explanation is that 3-year-olds are only sensitive to information about their own abilities, and thus the developmental transition concerns gaining the ability to see the behavior of others as relevant to the self. This is plausible because the direct praise study differed from the overheard praise condition in the present study not only in the form the communication took (direct versus overheard), but also in the target of the praise (the participant versus another child). However, the preliminary results of an ongoing study we are conducting suggest that this target effect cannot account for this difference: we are finding that after overhearing that they themselves are smart, 3-year-olds are cheating at a level that is close to the 40% rate that was seen in the present study.

However, this does not rule out the possibility that processing information about others is inherently more complex than processing information about the self, and that it may add to the complexity of processing overheard information in third-party contexts. This possibility would be generally consistent with theories suggesting that children use the self as a starting point for social cognition (Meltzoff, 2007). Consequently, future studies will be needed to disentangle the effects of the type of communication, versus the target of the evaluative comments. Further research will also be needed to more fully understand the effect of overheard ability praise that was observed among 5-year-olds in the present study. As noted previously, overheard ability praise may elicit concerns with social comparison. It may also lead to the inference that the experimenter places a high value on being smart, or that being smart is highly valued more generally.

These possibilities could be explored by examining whether there are similar effects on cheating when concerns with social comparison are elicited in other ways, or when the social value of being smart is communicated in other ways. An additional finding from the present study was that among 5-year-olds, boys cheated more than girls, which is consistent with gender differences in dishonesty among adults (e.g., Alm et al., 2009; Bucciol et al., 2013; Tibbetts, 1999). However, it is somewhat surprising that no gender by condition interaction was found within either age group, given the three-way interaction observed for participants overall. This might be due to the fact that our sample size for this age group was not large enough to reveal a significant two-way interaction.

This possibility is supported by a power analysis based on the results of our three-way interaction for participants overall, which revealed that a sample size of 107 would be required to detect a significant interaction, just 7 participants more than the current sample of 100 (however, similar power analyses based on the condition and gender effects for 5-year-olds both yielded a required sample size of 220, more than twice the current sample size). Given that our sample size was predetermined on the basis of existing findings of condition differences, future research with larger sample sizes will be needed to look more closely at this issue.
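
For readers unfamiliar with this kind of post-hoc calculation, a hedged sketch of how a required sample size follows from an effect size, using a simple two-proportion comparison as a stand-in. The proportions, the test, and the statsmodels-based approach below are illustrative assumptions, not the authors' analysis or reported values:

```python
# Illustrative power calculation; the proportions are placeholders, not reported values.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_overheard = 0.40, 0.65                      # hypothetical cheating rates
effect = proportion_effectsize(p_overheard, p_control)   # Cohen's h for two proportions

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(round(n_per_group))  # participants needed per condition at 80% power
```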

The present research significantly extends previous work on the effects of overheard conversations. This prior work has primarily focused on how overheard interactions might promote children’s learning about language, objects, and emotions (e.g., Akhtar et al., 2001; Akhtar, 2005; Floor & Akhtar, 2010; Phillips et al., 2012; Repacholi & Meltzoff, 2007). Our work shows that overheard conversations can have unintended consequences for children’s moral behavior. Our findings also extend previous work on gossip (e.g., Eder & Enke, 1991; Gottman & Mettetal, 1986; Hill, 2007; Ingram & Bering, 2010), given that overheard ability praise can be considered a form of gossip, which is commonly defined as “the sharing of evaluative information about an absent third party” (e.g., Dunbar, 1996; for a review, see Foster, 2004). Previous work has suggested that it is not until about 8 years of age that children begin to use gossip to help them navigate social situations such as inferring social norms (e.g., Aikins, 2015; see also, Hill, 2007). The current findings suggest that even 5-year-olds have some capacity to use gossip in a similar way, and they raise questions about other ways in which young children might use gossip to make sense of the social world.

Future research will be needed to examine the effects of overhearing other forms of praise, such as praise for being honest. Another important topic to address will be the effects of overheard criticism, although addressing this question raises challenging ethical issues. The results of this research will help us to better understand the effects of overheard evaluative comments on children’s moral socialization. Our findings have broad practical implications for parents, teachers, and other caregivers. Given that evaluative comments such as ability praise are often made in public contexts, more attention should be paid to minimizing the potential negative effects on children who may be listening.

In summary, the present research is the first to demonstrate that children as young as age 5 are more likely to engage in cheating after overhearing praise of another child for being smart. Our findings suggest that the negative implications of ability praise can spread outward, beyond the intended recipient, to affect the behavior of children who are mere observers. More broadly, our findings identify overheard evaluative information, a ubiquitous aspect of children’s social environment, as an important force in shaping moral development.

Tinder users prefer a potential partner whom they perceive to be similar in the personality traits agreeableness & openness to experience; no evidence for preferences for assortative mating based on attractiveness

Never mind I'll find someone like me – Assortative mating preferences on Tinder. Brecht Neyt, Stijn Baert, Sarah Vandenbulcke. Personality and Individual Differences, Volume 155, March 1 2020, 109739. https://doi.org/10.1016/j.paid.2019.109739

Abstract: Previous literature has identified assortative mating as the most frequent deviation from random mating both in offline dating and on classic online dating websites. However, several recent studies have suggested that assortative mating is fading due to the advent of mobile dating apps. Therefore, in this study we examine whether preferences for assortative mating are still present on the most popular mobile dating app of the moment, Tinder. To this end, we analyze experimental and survey data on 7846 Tinder profile evaluations. We unambiguously find that Tinder users prefer a potential partner whom they perceive to be similar in the personality traits agreeableness and openness to experience. With respect to similarity in perceived age, we find either no assortment or positive assortment, depending on whether we condition on other participant characteristics. Finally, we do not find any evidence for preferences for assortative mating based on attractiveness. We examine heterogeneous preferences by the gender and age of the experiment participants.

Keywords: Assortative mating; Personality traits; Big Five; Dating apps; Tinder


4. Discussion

In this study we examined whether assortative mating preferences
that are often identified in offline dating and on classic online dating
websites are still present on the recently popular MDAs such as Tinder.
More specifically, we investigated whether Tinder users had a preference
for potential partners whom they perceived to be similar in age,
attractiveness, and Big Five personality traits. We examined this by
using experimental and survey data collected by Neyt et al. (2018).
In line with previous literature examining both offline dating and
dating on classic online dating websites, we found evidence for assortative
mating preferences based on age when controlling for other
participant characteristics. Given that in their literature review
Watson et al. (2004) point to age as one of the factors with the strongest
positive assortment, it is unsurprising that in these analyses, too, it was
the factor upon which assortative mating was the strongest. However,
correlation analyses showed no evidence for this sorting behavior. As a
consequence, results of analyses unconditional on other participant
characteristics are in line with recent studies on Tinder which found
that assortative mating preferences are fading on this dating platform
(Neyt et al., 2019; Ortega & Hergovich, 2017). Additionally, we found
that individuals prefer potential partners whom they perceive to be
similar on the Big Five personality traits agreeableness and openness to
experience (both in analyses controlling and not controlling for other
participant characteristics). This is in line with the studies of
Botwin et al. (1997) and Rammstedt and Schupp (2008), although these
studies also found assortative mating based on conscientiousness. Apparently,
even in a setting with no search frictions in which people
show interest in a potential partner prior to meeting them, they prefer
a potential partner whom they perceive to be similar in the personality
traits agreeableness and openness to experience. The finding that assortative
mating based on perceived personality traits is lower than
assortative mating based on age is in line with the review of the literature
in Watson et al. (2004) as well as with these authors’ own
findings.
With respect to similarity in age (when controlling for other participant
characteristics) and similarity in openness to experience, we also
found that these results are driven by the female participants and the
older participants. A suggestive explanation for this finding is that these
groups of participants have higher standards with respect to whom they
show interest in. For the female participants this would be in line with
the finding by Botwin et al. (1997) who showed that females express
more discriminating preferences for personality characteristics in their
ideal mate compared to males. This in turn is in line with parental investment
theory (Trivers, 1972) which argues that the sex that invests
more in offspring – for humans, the females – is more discriminating
in its mate preferences. For the older participants this
higher discrimination in mate preferences could be because they are
looking for a more serious relationship.
Further, we did not find any evidence for assortative mating preferences
based on attractiveness. We argue that this is the case because
attractiveness is not a horizontal attribute upon which individuals mate
assortatively but rather a vertical attribute where there exists a predefined
consensus on which potential partners are the most desirable, in
this case highly attractive individuals. This behavior is likely reinforced by
the fact that showing interest in a person on Tinder is low in psychological
costs in case of rejection.
Finally, the finding that women have a preference for a potential
partner whom they perceive to be older whereas men do not exhibit age
preferences is in line with the findings of Kenrick and Keefe (1992).
Indeed, they report that while in early mating years – which most of our
participants are in; see Table 1 for descriptive statistics on participants’
age – men do not yet exhibit preferences for a younger potential
partner, women have a preference for an older potential partner already
in their early mating years.
We end this study by pointing out the main limitations of our research
design. First, we only examined mating behavior in the first
stage of the dating process. Nonetheless, we believe findings with respect
to this first stage are interesting, as it is a necessary stage each
individual trying to find a partner on an MDA needs to get through to
advance to the next stages of a relationship.
Second, although the experimental design was a very close reflection of
reality and although the data could not suffer from social
desirability bias, it would still be interesting to verify whether partner
preferences identified in this study also hold in reality. We suggest
that future studies – if possible – use data directly provided by Tinder.
Third, with the data we used in this study we are only able to examine
whether certain assortative mating preferences exist and whether
they differ between certain groups of participants. However, we are
unable to deduce why these exist. We suggest that future studies
examine – potentially using qualitative data – why exactly, for example,
female and older participants have higher preferences for potential
partners who are similar in openness to experience.
Fourth, in this study we made use of the TIPI to measure the Big Five
personality traits. Given that this scale measures each personality trait
with two questions, other – more extensive – scales are able to capture
personality more rigorously. However, given that we asked participants
to rate 16 profiles, using a more elaborate scale was not appropriate in
this study as results would suffer too much from bias due to boredom.
Still, we encourage future studies to use a more extensive scale of the
Big Five personality traits to examine assortative mating based on
personality in dating.
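
As context for this limitation, a minimal sketch of how TIPI scores are typically derived: each Big Five trait is the mean of one item and one reverse-coded item on a 1-7 scale. The item-to-trait key below follows the standard published TIPI scoring (Gosling et al., 2003) and is included here as an assumption; it is not taken from the paper:

```python
# Sketch of standard TIPI scoring (assumed key; 1-7 ratings, one reverse-coded item per trait).
def reverse(x: int) -> int:
    return 8 - x  # reverse-code a 1-7 rating

def score_tipi(items: dict) -> dict:
    """items maps TIPI item number (1-10) to a 1-7 rating."""
    return {
        "extraversion":        (items[1] + reverse(items[6])) / 2,
        "agreeableness":       (reverse(items[2]) + items[7]) / 2,
        "conscientiousness":   (items[3] + reverse(items[8])) / 2,
        "emotional_stability": (reverse(items[4]) + items[9]) / 2,
        "openness":            (items[5] + reverse(items[10])) / 2,
    }

print(score_tipi({1: 5, 2: 3, 3: 6, 4: 2, 5: 7, 6: 2, 7: 6, 8: 3, 9: 5, 10: 1}))
```
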
Next, in this study we only examined individuals from Western
countries (see Section 2 above). However, it would be interesting to
examine whether preferences for assortative mating also differ between
cultures, e.g. between Western and non-Western individuals. While
Buss (1989) conducted an analysis on absolute mating preferences over
37 cultures, to the extent of our knowledge no such analysis has been
done on assortative mating preferences.
Finally, in this study we only examined assortative mating preferences
based on age, attractiveness, and personality. Naturally, individuals
could have preferences for similarity on many more characteristics such as
ethnicity, socioeconomic status, and education level
to name a few. As assortative mating based on these characteristics
would have substantial societal consequences, we encourage future
research to examine sorting behavior with respect to these
characteristics on the recently popular MDAs such as Tinder.


Sex with another person, with an orgasm, was perceived to have a relatively stronger effect on men compared to women in terms of sleep quality; activity without orgasm was, to men, sleep-impairing

A national survey on how sexual activity is perceived to be associated with sleep. Ståle Pallesen et al. Sleep and Biological Rhythms, December 3 2019. https://link.springer.com/article/10.1007/s41105-019-00246-9

Abstract: There is a paucity of studies investigating how sexual activity is perceived to influence sleep, despite conceptions about significant gender differences regarding this issue. In all, 4000 persons, aged between 18 and 55 years, were randomly drawn from the Norwegian Population Registry and invited to participate in a postal survey. The respondents were asked how sexual activity with another person, with or without orgasm, and how masturbation, with and without orgasm, influenced sleep latency and sleep quality. A total of 1080 persons participated (response rate 28.2%), of whom 56.1% were women. The mean age of the sample was 38.7 years (SD = 10.8). Sexual activity with an orgasm was perceived to have a soporific effect by both men and women. Sexual activity with another person, with an orgasm, was perceived to have a relatively stronger effect on men compared to women in terms of sleep quality. Sexual activity without an orgasm was reported by men to have a sleep-impairing effect, whereas the perceived effect reported by women was equivocal. Sexual activity with orgasms was perceived as having a soporific effect in both men and women. Sexual activity without an orgasm had an equivocal perceived effect on sleep.

Keywords: Gender differences; Orgasm; Sexual activity; Sleep onset latency; Sleep quality; Soporific effect


Discussion

The mean habitual sleep onset latency
reported by the sample was somewhat longer than normal
for young adults, albeit within the normal range for middle-aged
and older adults, for both men and women [15]. Generally,
sexual activity with orgasm was perceived to shorten sleep
latency as well as improve sleep quality in both men and
women. This is in line with previous notions that orgasm
has soporific effects [6, 7, 9], and as such supports our
first hypothesis stating that orgasms following sexual
activity generally will be perceived to have a soporific
effect, albeit larger for men than for women. The exact
mechanism behind the soporific effect of orgasms is not
clear, but it may be attributable to the release of neurohormones
such as oxytocin, prolactin, and endorphins that are
assumed to have relaxing properties [16–18]. The effect
seemed to be larger for men than for women, especially
concerning orgasm following sexual activity with another
person. The positive perceived effect on sleep of sex with
an orgasm was also reported by the only previous survey
on this topic, but in the previous survey no gender differences
were found [9]. The gender difference regarding the
perceived soporific effect of masturbation with orgasms
was, however, not significant, a finding in line with the
aforementioned survey [9].
The difference score (effect of masturbation with an
orgasm minus effect of sexual activity with another person with
an orgasm) was negative for men, but neutral for women in
terms of sleep latency. Thus, regarding sleep latency, men
perceived a greater soporific effect of sexual activity with
another person, with an orgasm, compared to masturbation
with an orgasm, whereas no significant difference was
reported by women. The difference score in terms of sleep
quality was negative for both men and women. Still, it was
significantly larger for men, implying that men, compared
to women, seem to experience greater soporific effect of
sexual activity with another person, with an orgasm, compared
to masturbation with an orgasm. Hence, sex with
another person, with an orgasm, had a stronger perceived
soporific effect for men than women (both for sleep latency
and sleep quality) compared to masturbation with an orgasm.
This lends support to our second hypothesis (masturbation
followed by orgasms relative to orgasms following sex with
another person will be perceived to have relatively stronger
soporific effect for women compared to men). One possible
explanation for this finding is that men, according to some
studies, have a higher energy expenditure during intercourse
than women [19], which may promote sleep [20]. However,
not all studies have shown that men spend relatively more
energy during sexual activity than women [21], and since
sexual activity often is of relatively short duration [22],
potential gender differences in energy expenditure during sexual
activity are not likely to explain gender differences concerning
the perceived soporific effects of sexual activity on sleep.
Another explanation for these gender differences is that men
have a stronger and more biologically and genitally sexual
drive, whereas women’s sexual drive to a larger extent is
romantically driven with a higher emphasis on intimacy [10,
11]. This view seems congruent with models of sexual selection
which posit that males invest less in the offspring, have
a higher reproductive rate and benefit more from mating
multiply than women [23].
Hence, when having sex with another person women generally
may put more emphasis on the relationship, whereas
men may put more emphasis on sexual gratification [10, 11,
23]. This may contribute to men falling asleep more easily after
sexual activity with another person ending in orgasm, compared
to women, as men at this point may have obtained their
goal, whereas women still may want emotional intimacy or
confirmations about the relationship. It is also known that
most men following orgasms have a refractory period where
they cannot experience further erection or orgasms, whereas
women’s postorgasmic genital arousal is more variable [24]
which may influence the soporific effects of sexual activity
differently across genders.
According to our third hypothesis, sexual activity without
orgasm was expected to have no influence on sleep.
However, men actually reported longer sleep onset latency
and poorer sleep quality both when sexual activity with
another person and when masturbation did not end in an orgasm.
For women, this was only the case for sleep latency following
sexual activity with another person without orgasm.
Women reported no effect on sleep onset latency following
masturbation without orgasm, and no effect on sleep quality
when sexual activity with another person or masturbation
did not end in orgasm. Taken together, these findings show
that men seem to be negatively affected by sexual activity
without an orgasm, whereas women appeared to respond
less strongly and more neutrally to this. In this regard the present findings
are not in line with findings reported by the previous
survey by Lastella and colleagues, where it was suggested
that sexual activity, whether or not ending in orgasms, had a
perceived soporific effect. However, it seems that the questions
used in the previous survey were somewhat ambiguous
about the explicit absence of orgasm [9], which might
explain the discrepancy in results. The third hypothesis was
thus not supported for men, and only partly supported for
women. Overall, it seems that lack of orgasm following
sexual activity is reported to be more frustrating for men
than for women, leading to perceived poorer sleep for men
compared to women. This may again reflect a different emphasis
on the part of men (e.g. sexual gratification) compared to women (e.g.
intimacy) when it comes to sexual activity. It is also known
that sexual encounters more often end in orgasms for men
compared to women [25], hence lack of orgasm may
be more frustrating and sleep impairing for men.

Limitations and strengths

The response rate of the present study was low, despite the
fact that the questionnaire was short, up to two reminders
were sent and material reinforcement (gift card lottery)
was used. However, the low response rate can probably be
explained by the sensitive (sexual) topic being investigated
[26]. It should be noted that low response rates do not imply
that results are invalid [27]. Still, we acknowledge that the
findings should be replicated in future studies. Although
similar to those used in a recent survey [9], the questions
about sexual activity’s perceived effect on sleep were constructed
for the purpose of the present study, hence their
psychometric properties are unknown. This is a limitation
and future research efforts should be taken to establish items
for this topic, for example by the method of Delphi [28].
Questions about sexual behaviors are sensitive by nature,
hence it cannot be ruled out that some did not answer truthfully.
However, care was taken to inform about how the data
would be registered and confidentiality ensured. In addition,
self-completion questionnaires were used as this seems to
result in more valid reports than interviews [29].
It should be noted that the questions were quite general
(sexual activity with another person or masturbation), and
future studies on this topic should therefore differentiate better
between different sexual behaviors (e.g. sex with a new
vs. familiar partner) and also assess their duration to investigate
how sleep is affected by them. In some of the analyses
the number of respondents was lower than the total sample,
as those answering “not relevant” were left out of the analysis.
The effect of sexual behavior on sleep was evaluated
retrospectively, which may render the responses vulnerable
to recall bias, thus the use of diaries in future studies on
this topic is encouraged [30]. It should also be noted that
only two sleep outcomes were evaluated (sleep onset latency
and sleep quality), as these were regarded most sensitive
to potential soporific effects of sex. Still, future studies
should include a wider array of sleep variables as outcomes
[31]. The present study was based on subjective rating of
sleep only, hence the findings should be corroborated by
objective sleep measures in the future. As orgasms may be
described along several dimensions, and since there may
be some gender differences in this regard [32], this should
be taken into consideration in future studies on this topic.
The present study did not differentiate between phases of
the menstrual cycle for the female respondents, although
this may influence both sleep [33] and sexual behavior [34].
Hence, future studies should take this into account. Prospective
research should in addition aim at identifying variables
beyond gender that might explain variance in the soporific
effects of sexual activity.
In terms of strengths, it should be noted that the present
study is one of the first large surveys that has addressed the
soporific effect of sexual behavior on sleep and as such contributes
novel findings on a topic that is often debated
and heavily surrounded by myths. The sample was drawn
from the Norwegian Population Registry, which increases
the generalizability of the present findings. The sample was
weighted by the discrepancy between the general population
and sample characteristics in terms of age and gender, and
thus corrected for different response rates among subgroups.

Australia: Disabled men were at least twice as likely to be attracted to females & males, not experience sexual attraction, identify as bisexual or homosexual & have female & male sexual partners

Does sexual orientation vary between disabled and non-disabled men? Findings from a population-based study of men in Australia. Anne-Marie Bollier et al. Disability & Society, Dec 3 2019. https://doi.org/10.1080/09687599.2019.1689925

Abstract: Some research suggests that disabled people are more likely to be sexual minorities than non-disabled people, but this evidence comes mainly from younger or older populations. We used data from a large survey of Australian men aged 18–55 to examine the relationship between disability and minority sexual orientations. Results from our statistical analyses suggest that a larger proportion of disabled than non-disabled men are sexual minorities. Our estimates showed that disabled men were at least twice as likely as non-disabled men to be attracted to females and males, not experience sexual attraction, identify as bisexual, identify as homosexual and have female and male sexual partners—relative to the likelihood of female-only attraction, heterosexual identity and female-only sexual partners. Findings provide new information about sexual diversity in disabled versus non-disabled Australian men, which can help inform inclusive service provision and identify avenues for future research about sexual minority disabled people.

Keywords: disability, men, sexual orientation, sexual minority, sexual identity, sexual attraction


Distribution of Facial Resemblance in Romantic Couples Suggests Both Positive and Negative Assortative Processes Influence Human Mate Choice

Holzleitner, Iris J., Kieran J. O'Shea, Vanessa Fasolt, Anthony J. Lee, Lisa M. DeBruine, and Benedict C. Jones. 2019. “Distribution of Facial Resemblance in Romantic Couples Suggests Both Positive and Negative Assortative Processes Influence Human Mate Choice.” PsyArXiv. December 5. doi:10.31234/osf.io/pw5c

Abstract: Previous research suggests that humans show positive assortative mating, i.e. tend to pair up with partners that are similar to themselves in a range of traits, including facial appearance. Facial appearance can function as a cue to genetic similarity and plays a critical role in human mate choice. Evidence for positive assortative mating for facial appearance has largely come from studies showing people can match pictures of couples’ faces at levels greater than chance and that facial photographs of couples are rated to look more similar than those of non-couples. However, interpreting results from matching studies as evidence of positive assortative mating for facial appearance is problematic, since this measure of perceived compatibility does not necessarily reflect actual physical similarity, and may be orthogonal to, or even negatively correlated with, physical similarity. Even if participants are asked to rate facial similarity directly, it remains unclear which, if any, face shape cues contribute to an increased perception of similarity in romantic couples. Here we use a shape-based assessment of facial similarity to show that the median similarity of long-term couples’ face shapes is only slightly greater than that of an age-matched control sample. Moreover, this was driven by the most similar 40% of couples, while the most dissimilar 20% of couples actually showed disassortative mating for face shape when compared to the control sample. These data show that a simple measure of central tendency obscures variability in the extent to which couples display assortative or disassortative mating for face shape. By contrast, a more fine-grained analysis that considers the distribution of variation across couples in the extent to which they resemble each other suggests that both positive and negative assortative processes influence human mate choice.

Dissimilarity data and analysis code are available at https://osf.io/m9f54

Excerpts:

The extent to which romantic couples physically resemble each other is a long-standing question with implications for influential theories of mate choice, such as optimal outbreeding theory22. Optimal outbreeding theory acknowledges that mating with closely-related individuals can have a large negative effect on reproductive fitness (i.e., results in less viable offspring), but emphasizes that excessive outbreeding (mating with highly genetically dissimilar individuals), too, can have a negative effect on reproductive fitness23,24. Consequently, while folk psychology theories predict that romantic couples will physically resemble each other, optimal outbreeding theory predicts that both assortative and disassortative processes may influence human mate choice. Several studies have demonstrated that perceptions of facial similarity are very highly correlated with (i.e. nearly indistinguishable from) perceptions of genetic relatedness, demonstrating that facial similarity can function as a cue of genetic relatedness25,26. Moreover, facial appearance is known to play a critical role in social interaction, including romantic partner choice14,27,28. Consequently, much of the research on the extent to which romantic couples physically resemble each other has investigated facial similarity between romantic partners. While several studies have reported that the faces of romantic partners can be matched at levels greater than chance15-19, such results do not necessarily indicate that romantic couples physically resemble each other. For example, matching of romantic couples at levels greater than chance could occur simply because people similar in physical attractiveness are judged more likely to be in a romantic relationship with each other than people who differ in their physical attractiveness29 (but see30). Moreover, the physical traits associated with attractiveness in men and women are not identical and, in some cases, even opposite. For example, feminine facial features are attractive in women, while masculine facial features are attractive in men (although the extent to which this is the case is disputed31-35). This first important limitation of previous work can be avoided entirely by using nonperceptual measures of facial resemblance. One approach for objectively defining and comparing face shape is to assess the position a face occupies in ‘face space’. Face space is a multi-dimensional space representing the global face shape dimensions derived from Principal Component Analysis of shape coordinates. Within this multi-dimensional face space, similarity can be quantified as the Euclidean distance between individual faces (see36 for a recent review). A second important limitation of previous work on this topic is that it has used measures of central tendency to investigate the extent to which couples on average resemble each other. Focusing exclusively on measures of central tendency can, however, obscure important variation in the data37,38. This variation is likely to be particularly important in the context of research motivated by optimal outbreeding theory, since optimal outbreeding theory explicitly predicts that both assortative and disassortative processes will influence mate choice. In light of the above, we first used distance in face space to objectively assess the degree of similarity between romantic couples in face shape and compared these scores with controls.
We then sought to establish whether there are systematic differences among couples in the extent to which they resembled each other. First, we calculated shape-dissimilarity scores for 3D scans of 178 couples’ faces. Shape-dissimilarity scores were the Euclidean distance in a multidimensional face space derived from ten-fold cross-validated PCA of 3D face-shape coordinates. In order to create a control distribution, we identified all possible pairings between each woman and all men in the set who were within five years of her actual partner’s age. The median number of control pairings per woman was 71. We then calculated the dissimilarity score for each control pairing. The median control dissimilarity score for each woman was calculated and is referred to hereafter as the control dissimilarity score. Figure 1A shows the distributions of couple and control dissimilarity scores.
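
A simplified sketch of the dissimilarity computation described above, assuming hypothetical flattened 3D landmark data and plain PCA (the paper uses ten-fold cross-validated PCA); the indices for the partner and the age-matched control men are likewise placeholders:

```python
# Simplified face-space dissimilarity sketch; data and pairings are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
faces = rng.normal(size=(356, 300))                      # 356 faces x flattened 3D landmark coordinates
face_space = PCA(n_components=20).fit_transform(faces)   # each face = a point in face space

def dissimilarity(i, j):
    return np.linalg.norm(face_space[i] - face_space[j])  # Euclidean distance in face space

woman, partner = 0, 1
age_matched_men = [3, 5, 7, 9]                            # men within +/- 5 years of the partner's age

couple_score = dissimilarity(woman, partner)
control_score = np.median([dissimilarity(woman, m) for m in age_matched_men])
print(couple_score, control_score)
```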

[Figure 1. (A) Dissimilarity score distributions of couples (target women + actual partner, N=178) and controls (target women + median of age-matched controls, N=178). Scores were centered on median control dissimilarity (dashed line). Median couple dissimilarity was marginally smaller than
median control dissimilarity. (B) Difference strip chart showing the difference scores of similarity between each woman and her actual partner/her median control. The horizontal lines mark the deciles, with the thicker line marking the median. (C) The shift function shows the difference of couples – control for each decile (y-axis) as a function of couple deciles (x-axis). For each decile difference, the vertical line indicates the 95% bootstrap confidence interval (1000 samples).]

Couple and control dissimilarity scores were initially compared using a paired-samples bootstrapping technique. The median difference score between couple and control dissimilarity was significantly lower than 0 (estimate=−71, p=.040; Figure 1B), suggesting couples are slightly less dissimilar than chance. However, this analysis of central tendency ignores more fine-grained information about the full distribution. Therefore, we next separated couples into deciles based on their dissimilarity scores. Within each decile, we then compared the couple and control dissimilarity scores and plotted this difference at each decile (Figure 1C). If distributions of couple and control dissimilarity scores were identical, one would expect to see a flat line around 0 for all deciles. If distributions were merely shifted to the left or right, the shift function would show a flat line below or above 0. Figure 1C shows that couple dissimilarity was significantly lower than control scores in the first four deciles (i.e., the most similar 40% of couples) and significantly greater than control scores in the last two deciles (i.e., the most dissimilar 20% of couples). Thus, while the most similar 40% of couples show assortative mating for face shape, the most dissimilar 20% of couples show disassortative mating for face shape. This underlines the limitation of a simple central-tendency comparison of similarity when testing for assortative or disassortative mating. Analysis of a measure of central tendency showed the type of assortative mating predicted by folk psychology and reported in some previous research. However, the effect was weak. By contrast, analyzing resemblance between couples using deciles, which allows for a far more fine-grained analysis of the distribution of resemblance across couples, showed clear evidence of both assortative and disassortative processes in human mate choice. This finding suggests that individuals may differ in the costs and benefits of assortative vs disassortative mating. Future research could investigate predictors of such individual differences. Not only does the pattern of results found here support an explicit prediction from optimal outbreeding theory (that both assortative and disassortative processes will influence human mate choice), it also highlights the pervasive problem of relying on analyses of measures of central tendency when studying complex behaviors.
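
A rough sketch of the decile-wise comparison described above, using simulated data (the actual analysis code is available at the OSF link): couples are ordered by their dissimilarity score, split into deciles, and the couple-minus-control difference within each decile is bootstrapped to obtain a 95% confidence interval.

```python
# Simulated illustration of the decile-by-decile couple-vs-control comparison.
import numpy as np

rng = np.random.default_rng(2)
couple = rng.normal(1000, 150, 178)    # hypothetical couple dissimilarity scores
control = rng.normal(1030, 150, 178)   # hypothetical paired median-control scores

order = np.argsort(couple)                   # order couples by their dissimilarity
decile_groups = np.array_split(order, 10)    # ten groups from most to least similar couples

for d, idx in enumerate(decile_groups, start=1):
    diffs = couple[idx] - control[idx]
    boot = [np.median(rng.choice(diffs, size=len(diffs), replace=True)) for _ in range(1000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"decile {d}: median diff {np.median(diffs):7.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```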

Humans are often characterized as Bayesian reasoners; instead, people reason in a digital manner, assuming that uncertain information is either true or false when using that information to make further inferences

Johnson, S. G. B., Merchant, T., & Keil, F. C. (2019). Belief digitization: Do we treat uncertainty as probabilities or as bits? Journal of Experimental Psychology: General. Dec 2019. https://doi.org/10.1037/xge0000720

Abstract: Humans are often characterized as Bayesian reasoners. Here, we question the core Bayesian assumption that probabilities reflect degrees of belief. Across eight studies, we find that people instead reason in a digital manner, assuming that uncertain information is either true or false when using that information to make further inferences. Participants learned about 2 hypotheses, both consistent with some information but one more plausible than the other. Although people explicitly acknowledged that the less-plausible hypothesis had positive probability, they ignored this hypothesis when using the hypotheses to make predictions. This was true across several ways of manipulating plausibility (simplicity, evidence fit, explicit probabilities) and a diverse array of task variations. Taken together, the evidence suggests that digitization occurs in prediction because it circumvents processing bottlenecks surrounding people’s ability to simulate outcomes in hypothetical worlds. These findings have implications for philosophy of science and for the organization of the mind.

General Discussion

Do beliefs come in degrees? Here, we showed that they do not when we use those beliefs to make
further predictions—in such cases, probabilities are converted from an ‘analog’ to a ‘digital’ format
and are treated as either true or false. Compared to Bayesian norms, participants across our studies
consistently underweighted low-probability relative to high-probability hypotheses, often ignoring
low-probability events completely. This neglect challenges theories of cognition that posit a central
role for graded probabilistic reasoning. Here, we discuss where this tendency appears to come from
and in what ways it might be limited.

Predictions from Uncertain Beliefs
Many studies have found that when an object’s category is uncertain, people rely on the single
most-probable category when predicting its other features. Although some studies find individual
differences and variability among tasks, single-category use has held up among many different kinds
of categorization schemes (e.g., Johnson, Kim, & Keil, 2016; Lagnado & Shanks, 2003; Malt, Murphy, & Ross, 1995; Murphy & Ross, 1994, 1999).
Plausibly, these limitations on probabilistic reasoning are specific to category-based induction
tasks. The purpose of categories, after all, is to simplify the world and carve it into discrete chunks.
But another possibility is that these previous findings are due to a much broader tendency in our
reasoning about uncertain hypotheses and their implications. A categorization of an object is a
hypothesis about what kind of object it is, but similarly a causal explanation is a hypothesis about what led something to happen and a mental-state inference is a hypothesis about what someone is thinking. The current studies find that people only think in terms of one hypothesis at a time in a causal reasoning task, suggesting that such digital thinking is a broad feature of hypothetical thinking. This is consistent with the singularity hypothesis (Evans, 2007), according to which people entertain only a single possibility at a time—an idea with broad explanatory power in higher-level cognition.
Why does digitization occur when making predictions from uncertain beliefs? Such predictions
typically require three processes. First, potential hypotheses must be evaluated, given the available
evidence, resulting in estimates of the hypothesis probabilities P(A) and P(B) (abduction). Second, the prediction needs to be made conditionally on each hypothesis holding, that is, in each relevant possible world, resulting in estimates of the predictive probabilities P(Z|A) and P(Z|B) (simulation). Finally, these conditional predictions need to be weighted by the plausibility of each hypothesis (integration), leading to an estimate of P(Z). Although people are able to perform each of these processes, each is accompanied by limitations and biases. How does each of these stages contribute to digitization? Our experiments are most consistent with a model in which abduction leads to more extreme explicit hypothesis probabilities, simulation capacity limits result in digitization, and integration leads people to under-use hypothesis probabilities relative to predictive probabilities. This conclusion is necessarily provisional at this early stage, but here we lay out the best case made by the evidence.
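To make the normative benchmark concrete: when A and B are the only candidate hypotheses, full Bayesian integration computes P(Z) = P(A)P(Z|A) + P(B)P(Z|B), whereas a fully digitized reasoner commits to the more plausible hypothesis and reports roughly P(Z|A). A toy calculation (the probabilities are invented for illustration, not taken from the studies):

# Illustrative contrast between Bayesian integration and "digitized" prediction.
# All probabilities below are invented for illustration only.
p_A, p_B = 0.7, 0.3          # hypothesis probabilities (output of abduction)
p_Z_given_A = 0.9            # predictive probability under hypothesis A (simulation)
p_Z_given_B = 0.1            # predictive probability under hypothesis B (simulation)

# Normative Bayesian integration over both hypotheses.
p_Z_bayes = p_A * p_Z_given_A + p_B * p_Z_given_B         # = 0.66

# Digitized prediction: treat the more probable hypothesis as if it were true.
p_Z_digital = p_Z_given_A if p_A >= p_B else p_Z_given_B  # = 0.90

print(f"Bayesian: {p_Z_bayes:.2f}  Digitized: {p_Z_digital:.2f}")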
The abduction phase—deciding among potential hypotheses as the best explanation for the
data—relies on a variety of heuristics. Although many of these heuristics may adaptively help to
circumvent computational limits or even lead to more accurate inferences, these heuristics lead to
systematic biases relative to Bayesian norms. Most relevant, people assign a higher probability to a
hypothesis that outperforms its competitors, relative to what is implied by objective probabilities
(Douven & Schupbach, 2015; see also Lipton, 2004). This sort of process could plausibly give rise to
digitization. Moreover, explanation often leads to overgeneralization in the face of exceptions
(Williams et al., 2013), consistent with the idea that abduction tends to underweight or ignore lower-
probability hypotheses. But in our studies, abduction does not seem to be a necessary ingredient for
digitization, since digitization even occurs when hypothesis probabilities P(A) and P(B) are provided
explicitly, avoiding the need for abductive processing (Studies 5 and 8A). The most likely resolution
of this puzzle is that abduction leads us to explicitly assign more extreme probabilities to hypotheses,
relative to Bayesian norms, but not to ignore those less-likely hypotheses altogether.
The simulation phase—imagining the plausibility of the prediction in the possible worlds defined
by each hypothesis—is known to have sharp capacity limits (Hegarty, 2004). Indeed, even within a
simulation of a single causal system, people imagine each step in that system piecemeal. Thus, it seems unlikely that people can simulate multiple possible worlds and store their outputs simultaneously. Consistent with the idea that this is the key processing bottleneck that produces digitization, people do consider multiple possibilities when the predictive probabilities P(Z|A) and P(Z|B) are given explicitly, avoiding the need to simulate these outcomes (Studies 8B and 8C).
Yet, this does not seem to be the whole story. The integration phase—putting together multiple
pieces of evidence and weighing each by their diagnosticity—is also subject to biases. In particular,
people tend to over-rely on information about evidence strength (e.g., the proportion of cases consistent with a hypothesis) relative to information about evidence weight (e.g., sample size) (Griffin & Tversky, 1992; Kvam & Pleskac, 2016). Although this bias should not be extreme enough to lead people to
ignore lower-probability hypotheses, it could result in overconfidence—overly extreme probabilities—if people treat predictive probabilities as strength information (how likely the prediction is within each possible world) and hypothesis probabilities as weight information (how much to consider each
possible world). This pattern seems to be consistent with the data. Even when both the hypothesis
and predictive probabilities are given explicitly, requiring only integration to occur, participants over-rely on the high-probability hypothesis relative to the low-probability hypothesis (Study 8C).
Thus, all three processing steps appear to contribute to overly extreme probability judgments,
albeit in different ways. Abduction may result in explicit probabilities that are too extreme, relative to
Bayesian norms. Integration seems to result in under-responsiveness to hypothesis probabilities. And
simulation seems to lead people to ignore lower-probability hypotheses entirely.
If digitization can lead to systematic errors, relative to Bayesian norms, why might the mind use
this principle? Digitization is often necessary to avoid a combinatorial explosion (Bobrow, 2012;
Friedman & Lockwood, 2016). Suppose you are unsure whether the Fed will raise interest rates.
Depending on this decision, Congress may attempt fiscal stimulus; depending on Congress’s decision, the CEO of Citigroup may decrease capital reserves; and depending on the CEO’s decision, SEC regulators may tighten enforcement of certain rules. Integrating across such chains of possibilities becomes daunting even for a computer as the number of branches increases. As recently as the 1990s, chess-playing computers used brute-force methods to search through trees of possible moves, and even the famous Deep Blue, despite its massive processing power, did not consistently defeat the best human players, such as Garry Kasparov (Deep Blue conceded 2.5 of the 6 points in their final match). The computationally efficient way to approach such a problem is precisely the opposite of brute force—to construct plausible scenarios and ignore the rest. Human chess players had, and probably still have, far better heuristics for pruning this huge space of possibilities. Our participants’ error was using this strategy even when the normative calculation is straightforward. This strategy may be adaptive in other contexts. Indeed, when the most-likely hypothesis has a probability close to 100%, it may even be a reasonable approximation to the Bayesian solution.
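The scale of the problem is easy to illustrate: with k chained binary uncertainties, exhaustive integration has to track 2^k joint scenarios, while committing to a single most-plausible branch at each step keeps just one. A toy count (generic numbers, not from the paper):

# Toy illustration of the combinatorial explosion that full integration over
# chained binary uncertainties would require, versus pruning to one scenario per step.
for k in (2, 5, 10, 20, 30):
    full = 2 ** k          # joint scenarios if every binary uncertainty is tracked
    pruned = 1             # scenarios kept if only the most plausible branch survives
    print(f"{k:2d} uncertain events: {full:>13,} scenarios vs. {pruned} after pruning")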
What, then, should we make of probabilistic theories of cognition (Gershman et al., 2015;
Tenenbaum et al., 2011)? People clearly can represent analog probabilities at some level (“a 70%
chance of rain”) but our results show that they cannot use these probabilities to make downstream
predictions, instead digitizing them. Because probabilistic models typically characterize the output of
reasoning processes rather than the underlying mechanisms, they can be of great value in
characterizing the problems that our minds solve. But to the extent that such theories make
mechanistic claims involving the processing of analog probabilities within complex computations—
even at an implicit level—simpler, heuristic mechanisms may better account for human successes,
such as they are, with uncertainty. We look forward to the possibility that computational approaches
to the kinds of tasks we model in this paper can help to shed further light on the underlying cognitive processing.

Perceptual limits of eyewitness identifications: The distance threshold of reliable identification

Nyman, T. J., Lampinen, J. M., Antfolk, J., Korkman, J., & Santtila, P. (2019). The distance threshold of reliable eyewitness identification. Law and Human Behavior, 43(6), 527-541. Dec 2019. http://dx.doi.org/10.1037/lhb0000342

Abstract: Increased distance between an eyewitness and a culprit decreases the accuracy of eyewitness identifications, but the maximum distance at which reliable observations can still be made is unknown. Our aim was to identify this threshold. We hypothesized that increased distance would decrease identification accuracy, rejection accuracy, and confidence, and would increase response time. We expected an interaction effect, where increased distance would more negatively affect younger and older participants (vs. young adults), resulting in age-group-specific distance thresholds at which diagnosticity would be 1. We presented participants with 4 live targets at distances between 5 m and 110 m using an 8-person computerized line-up task. We used simultaneous and sequential target-absent or target-present line-ups and presented these to 1,588 participants (age range = 6–77; 61% female; 95% Finns), resulting in 6,233 responses. We found that at 40 m diagnosticity was 50% lower than at 5 m and that with increased distance diagnosticity tapered off until it was 1 (±0.5) at 100 m for all age groups and line-up types. However, young children (age range = 6–11) and older adults (age range = 45–77) reached a diagnosticity of 1 at shorter distances compared with older children (age range = 12–17) and young adults (age range = 18–44). We found that confidence dropped with increased distance, response time remained stable, and high confidence and shorter response times were associated with identification accuracy up to 40 m. We conclude that age and line-up type moderate the effect distance has on eyewitness accuracy and that there are perceptual distance thresholds beyond which an eyewitness can no longer reliably encode and later identify a culprit.

Public Significance Statement
The present study advances earlier findings regarding the negative impact that increased distance has on eyewitness accuracy by providing evidence for an upper distance threshold at 100 m for correct identifications. Our findings highlight the perceptual limits of eyewitness identifications and are relevant for use in courts of law by providing evidence that objective distance can be used as an estimation of eyewitness reliability.

KEYWORDS: eyewitness, identification, distance, face recognition


Discussion

Considering that distance negatively impacts facial encoding and later identification (e.g., Lampinen et al., 2014, 2015), we investigated the effect of distance on eyewitness accuracy in different age groups and line-ups. To achieve this, we conducted an ecologically valid outdoor experiment in which we presented participants with four live targets at distances between 5 m and 110 m, followed by an immediate identification task.

The Effect of Distance and Age on Eyewitness Accuracy
Distance had a significant negative effect in all age groups on identification accuracy in TP line-ups and rejection accuracy in TA line-ups. This held true for both simultaneous and sequential line-ups. There were also significant differences between age groups, with young children (ages 6 to 11) being significantly worse at identifying targets in TP sequential line-ups compared with young adults (ages 18 to 44). Further, older adults (ages 45 to 77) were significantly worse at making correct rejections in both TA simultaneous and sequential line-ups compared with young adults. In our initial analyses (post hoc analyses in parentheses), the cut-offs in the simultaneous line-ups were 61 m (76 m) for young children, 98 m (110 m) for older children, 77 m (89 m) for young adults, and 69 m (89 m) for older adults. In the sequential line-ups, the cut-offs were 47 m (60 m) for young children, 75 m (96 m) for older children, 63 m (79 m) for young adults, and 52 m (69 m) for older adults.
The initial results, which assumed unbiased line-ups, gave us diagnosticity cut-offs that were on average between 10 m and 20 m below the cut-offs found when using the most-selected TA filler as the innocent suspect. Arguably, this means that the cut-offs from our initial analyses are perhaps too conservative (i.e., low). However, we would like to emphasize that the decline in diagnosticity with increased distance was similar using either approach, and all diagnosticity levels had fallen to 1 (±0.5) at 100 m for all age groups and line-up types. Furthermore, by using the higher cut-offs that are based on the post hoc analyses, we can with adequate certainty define probable upper thresholds beyond which there was no information gained from the line-ups (Wells & Lindsay, 1980; Wells & Olson, 2002). Our findings illustrate that distance has a dramatically negative effect on eyewitness accuracy, so that even small variations in distance can play an important role in identification outcomes. Collectively, these results indicate that when assessing eyewitness identifications, an objective measure of the distance between the eyewitness and the culprit is an important gauge of the odds of identification accuracy.
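For readers unfamiliar with the measure, diagnosticity here is the ratio of the correct-identification rate in target-present line-ups to the innocent-suspect identification rate in target-absent line-ups; a ratio of 1 means the line-up carries no information about guilt. A minimal sketch of the calculation with invented counts (in the post hoc approach, the most-selected target-absent filler serves as the designated innocent suspect):

# Minimal sketch of a diagnosticity ratio at one distance bin.
# All counts are invented for illustration; they are not the study's data.
def diagnosticity(tp_suspect_ids, tp_total, ta_innocent_ids, ta_total):
    """Correct-ID rate in target-present line-ups divided by the
    innocent-suspect ID rate in target-absent line-ups."""
    hit_rate = tp_suspect_ids / tp_total
    false_id_rate = ta_innocent_ids / ta_total
    return hit_rate / false_id_rate

# e.g., near distance: 60/100 correct IDs vs. 10/100 innocent-suspect picks
print(diagnosticity(60, 100, 10, 100))              # 6.0 -> line-up is informative
# e.g., far distance: 12/100 correct IDs vs. 11/100 innocent-suspect picks
print(round(diagnosticity(12, 100, 11, 100), 2))    # ~1.09 -> essentially no information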
Interestingly, correct rejection rates for young adults decreased rather than increased with increased distance. For other age groups, correct rejection rates remained relatively stable over distance. Instead of an increase in rejection rates, we found an increase in filler selections (see Appendix B in the online supplemental materials), which is also reflected in the shift toward a more liberal response bias as distance increased. Only older children appear to have shifted to a slightly more conservative response bias at 90 m and above. Nevertheless, the overall increase in choosing suggests that participants were not good at taking into account the difficulty of the task, and this could be taken as support for the hypothesis that when memory strength is low, more of the photographs match the target equally well, so participants may tend to choose rather than reject. In real-life scenarios, where an eyewitness is asked to take part in a police line-up, the identification task inflates choosing rates due to less pristine conditions or the witness wanting to help the police (e.g., Wells et al., 2000). The results, therefore, suggest that the ability of participants to metacognitively judge the difficulty of the task was not proportional to the actual degree of difficulty, mirroring earlier findings (Smith et al., 2018).

Distance Estimation Accuracy
The main findings regarding distance estimation were that as distance increased, the level of accuracy decreased, and that young children and older adults made more erroneous distance estimations compared with young adults. Moreover, in comparison with young adults, increased distance increased error rates more for young children but less for older adults. Although it is difficult to make any clear interpretations regarding the age-related differences, it may be that experience plays an important role in estimating distance. It is possible that body height is a confounding variable, as a taller person (i.e., adults) might have an advantage when estimating larger distances. This could explain some of the age-related differences, although it does not explain why older adults had more errors and were less affected by increased distance compared with young adults. Notably, the large variation and overall low accuracy of distance estimation indicate that subjective estimations of distance are highly unreliable.

Simultaneous and Sequential Line-ups
Simultaneous line-ups provided an advantage over sequential line-ups, with higher accuracy and a less steep decline in diagnosticity and d′ for all age groups with increased distance. However, young children (ages 6 to 11) were worse at identifying targets in TP sequential line-ups compared with young adults (ages 18 to 44), and older adults (ages 45 to 77) were worse at correctly rejecting line-ups in TA simultaneous and TA sequential line-ups, compared with young adults. The differences between age groups fall partly in line with earlier results (Fitzgerald & Price, 2015), because young children and older adults fared much worse compared with young adults. Interestingly, it was apparent from the simultaneous TA rejections that the older adults appear to have been almost equally good (or bad) at rejecting line-ups across distances. This suggests that older adults are prone to choose no matter the memory strength in the TA simultaneous line-ups (but not in the TA sequential line-ups), which could be seen as a dependency on familiarity rather than recollection (Healy et al., 2005; Shing et al., 2010, 2008). Generally, response bias was more liberal in sequential line-ups compared with simultaneous line-ups (see Appendix B in the online supplemental materials), indicating that sequential line-ups increased the likelihood of choosing compared with simultaneous line-ups. Nevertheless, response bias shifted toward more choosing in both line-up types as distance increased, indicating that all age groups adopted a more liberal response criterion as the task became more difficult. We have interpreted this as reflecting a higher reliance on a familiarity-based rather than a recollection-based strategy.
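The d′ and response-bias values discussed here come from standard equal-variance signal detection formulas: d′ = z(hit rate) − z(false-alarm rate), and criterion c = −[z(hit rate) + z(false-alarm rate)]/2, where a more negative c indicates a more liberal (choosing-prone) criterion. A brief sketch with made-up rates:

# Equal-variance signal detection sketch: d' and criterion c from hit and
# false-alarm rates. The rates below are invented for illustration only.
from scipy.stats import norm

def dprime_and_c(hit_rate, fa_rate):
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    c = -(z_h + z_f) / 2       # negative c = liberal bias (more choosing)
    return d_prime, c

print(dprime_and_c(0.75, 0.20))   # near distance: good discrimination
print(dprime_and_c(0.55, 0.50))   # far distance: d' near 0, slightly more liberal criterion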
Before placing too much emphasis on the differences between the simultaneous and sequential line-ups, it is important to note that the sequential line-ups differed from common U.S. police practice in that the task was absolute and no additional rounds were permitted (e.g., Steblay, Dietrich, Ryan, Raczynski, & James, 2011). Moreover, the number of images in the sequential line-ups was mentioned in the line-up instructions, and this can decrease discriminability, especially if the target image is presented late in the line-up (Horry, Palmer, & Brewer, 2012). It is thus possible that the differences between the line-up types are partly due to different degrees of pristine conditions.
These results are relevant to the ongoing debate over simultaneous and sequential line-ups. There have been findings showing that simultaneous line-ups have an advantage, due perhaps to an increased discriminability in the relative judgment task (Clark, 2012; Clark et al., 2015; Gronlund et al., 2015; Wixted et al., 2016). Others have shown that sequential line-ups have an advantage because they decrease mistaken identifications without impacting the number of correct identifications (Steblay et al., 2003, 2011; Wells et al., 2015). Some have even proposed that sequential line-ups do not improve discriminability but have an advantage because they encourage the use of a more conservative criterion (Palmer & Brewer, 2012). The present results indicate that there are important differences in age groups depending on memory encoding and line-up type. When considering increased distance as a representation of lower memory quality, it is clear that most of the age differences disappeared at higher distances due to floor effects, representing the limits of perception and encoding. More research is needed to gain a more in-depth understanding of how different age groups make judgments based on variations in memory strength.

Confidence
A CAC analysis (Mickes, 2015) confirmed that high confidence is associated with high accuracy at distances up to 40 m. This was true for all age groups. After 40 m there were too few high-confidence observations to reliably analyze the results. The average levels of confidence fell with increased distance, meaning that participants perhaps understood, to a certain degree, the difficulty of the task and downshifted their confidence as distance increased. These results are interesting in relation to the continuing debate regarding the degree to which high confidence is a postdictive indicator of accuracy. It has previously been suggested that less optimal estimator variables will negatively impact the relationship between confidence and accuracy (Deffenbacher, 2008). However, a counterargument is that in pristine conditions and when memory is examined immediately, as in an immediate identification task, high confidence is associated with high accuracy (Brewer & Wells, 2006; Clark et al., 2015; Sporer et al., 1995). It has also been suggested that under such conditions, estimator variables such as distance will not influence the confidence-accuracy relationship and that participants will adjust their confidence downward when the memory match for photographs in the line-up is low (Semmler et al., 2018; Wixted & Wells, 2017). The current results appear to fit the latter hypothesis. Nevertheless, it is important to state that in the current sample there were very few high-confidence responses beyond 40 m, and few of these were correct. This suggests that in real-world situations, high-confidence identifications at longer distances most likely reflect either unusual encoding conditions, as for example in the case of a familiar face, or the impact of suggestive factors that inflate confidence, such as an investigator positively reinforcing the choice made (see, e.g., Wixted & Wells, 2017).
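A CAC analysis of this kind (Mickes, 2015) simply computes, within each confidence bin, the proportion of suspect identifications that are correct: correct IDs / (correct IDs + innocent-suspect IDs). A schematic sketch with placeholder counts (the bin labels and numbers are invented, not the study's data):

# Schematic CAC-style calculation: suspect-ID accuracy within confidence bins.
# Counts and bin labels are placeholders, not the study's data.
bins = {
    "low (0-60%)":     {"correct_ids": 30, "innocent_suspect_ids": 20},
    "medium (70-80%)":  {"correct_ids": 45, "innocent_suspect_ids": 10},
    "high (90-100%)":   {"correct_ids": 60, "innocent_suspect_ids": 4},
}

for label, n in bins.items():
    accuracy = n["correct_ids"] / (n["correct_ids"] + n["innocent_suspect_ids"])
    print(f"{label:16s} suspect-ID accuracy = {accuracy:.2f}")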

Response Times
The results regarding the relationship between response time and identification accuracy showed that shorter response times were robustly associated with higher identification accuracy at distances below approximately 40 m in simultaneous TP line-ups. Earlier studies have suggested that there is a cut-off between 10 and 12 s, below which there is a higher degree of accuracy (Dunning & Perretta, 2002). However, more recent work has called this cut-off into question and has shown that there is great variability in response time and that the previously suggested cut-off point does not accurately distinguish between high and low accuracy (Sauer, Brewer, & Wells, 2008; Weber, Brewer, Wells, Semmler, & Keast, 2004). The present results suggest that shorter response times do have postdictive value, at least below approximately 40 m, with decisions made in under five seconds being the most accurate. The implication is that, as with confidence, more research is needed to understand the effect that increased distance has on the relationship between response time and accuracy.

Practical Applications
The main take-home message of the current study is that both objective distance and age are crucial factors to take into consideration when assessing the benefit of conducting a line-up and that there are upper distance limits to eyewitness reliability. For practitioners in the field it is also important to emphasize that at 40 m diagnosticity was 50% lower compared with diagnosticity at 5 m. Moreover, as distance increased, diagnosticity tapered toward 1 so that by 100 m, no age group, using either line-up type, produced diagnosticity values higher than 1 (±0.5). Nevertheless, there were substantial differences between age groups, showing that older children (ages 12 to 17) and young adults (ages 18 to 44) had upper distance cut-offs that were roughly 10–20 m higher compared with young children (ages 6 to 11) and older adults (ages 45 to 77).
Importantly, the current results were obtained in pristine conditions (i.e., best-practice methods), with optimal viewing conditions (i.e., 20-s viewing time, natural and optimal lighting, no distractions), and using an immediate line-up task. Therefore, the distance thresholds reported in this article are likely to be overestimates of thresholds in real-life settings, where flawed line-up procedures, less optimal viewing conditions, and delayed identifications are much more common. For example, Felson and Poulsen (2003) estimated that approximately 50% of crimes take place after 8 p.m. (i.e., when lighting and visibility are low).
The current perceptual distance thresholds should be interpreted as the maximum thresholds possible in the best possible conditions. When an actual crime takes place, there are often other factors present, such as stress (Deffenbacher, Bornstein, Penrod, & McGorty, 2004) and weapon focus (Erickson, Lampinen, & Leding, 2014; Fawcett, Russell, Peace, & Christie, 2011), that make it more likely that correct identifications are already improbable at shorter distances. Additionally, it is known that (facial) memory is imperfect, susceptible to distortion, and decays with time (Deffenbacher, Bornstein, McGorty, & Penrod, 2008; Lacy & Stark, 2013). It can, therefore, be assumed that delayed identifications will produce less accurate results compared with the present findings. In addition to this, Lindsay and colleagues (2008) found that delayed responses gave rise to a significantly higher number of “not sure” and incorrect rejections compared with an immediate identification task.

Limitations
The current data collection is not without its limitations. One limitation is that we used very similar targets and that more variation in appearance, ethnicity, or age would have been informative. The setting was a science center, so although the results are highly generalizable, there might be some variation in choosing and rejection rates in comparison with an actual or mock police setup, where the consequences of choosing or not choosing are more critical. The design was a prospective task in which participants knew beforehand that they would be witnessing four targets and conducting four identifications. On the one hand, this increases the significance of our results because it is an additional optimal-condition factor; on the other hand, it would be very informative to investigate the effect of distance on an uninformed and a retrospective line-up task. Instructing the participants as to how many images would be shown in the sequential line-ups also slightly hampered the interpretation of the results. Despite these limitations, the present research represents a substantial improvement over past research in which less ecologically valid paradigms have been used.

Emotional support hinges on attributions of emotion control: People are more inclined to react supportively when they judge that the target individual cannot regulate their own emotions

Cusimano, Corey, "Attributions Of Mental State Control: Causes And Consequences" (2019). PhD Thesis, Publicly Accessible Penn Dissertations, 3524. https://repository.upenn.edu/edissertations/3524

Abstract: A popular thesis in psychology holds that ordinary people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. The present research challenges this thesis and documents how attributions of mental state control affect social decision making, predict policy preferences, and fuel conflict in close relationships. In Chapter 1, I show that lay people by-and-large attribute intentional control to others over their mental states. Additionally, I provide causal evidence that these attributions of control predict judgments of responsibility as well as decisions to confront and reprimand someone for having an objectionable attitude. By overturning a common misconception about how people evaluate mental states, these findings help resolve a long-standing debate about the lay concept of moral responsibility. In Chapter 2, I extend these findings to interpersonal emotion regulation in order to predict how observers react to close others who experience stress, anxiety, or distress. Across six studies, I show that people’s emotional support hinges on attributions of emotion control: People are more inclined to react supportively when they judge that the target individual cannot regulate their own emotions, but react unsupportively, sometimes evincing an intention to make others feel bad for their emotions, when they judge that those others can regulate their negative emotion away themselves. People evaluate others’ emotion control based on assessments of their own emotion regulation capacity, how readily reappraised the target’s emotion is, and how rational the target is. Finally, I show that judgments of emotion control predict self-reported supportive thoughts and behaviors in close relationships as well as preferences for university policies addressing microaggressions. Lastly, in Chapter 3, I show that people believe that others have more control over their beliefs than they themselves do. This discrepancy arises because, even though people conceptualize beliefs as controllable, they tend to experience the beliefs they hold as outside their control. When reasoning about others, people fail to generalize this experience to others and instead rely on their conceptualization of belief as controllable. In light of Chapters 1 and 2, I discuss how this discrepancy may explain why ideological disagreements are so difficult to resolve.


Limitations and Future Directions

Subjects in our studies were exclusively recruited through Amazon’s Mechanical
Turk. Although samples recruited from AMT are more representative of the U.S. than
typical university student samples, individuals on AMT tend to be less religious,
wealthier, and better educated than the average person in the United States (Paolacci &
Gabriele, 2014). Additionally, our entire sample consisted of people living in the United
States who, like other so-called WEIRD populations, are wealthier and better educated
than most people in the world, and are predominantly Christian (Henrich, Heine &
Norenzayan, 2010). Cross-cultural work has revealed striking differences in how different
groups think about individuals’ agency. Of particular note, individuals in some non-U.S.
cultures appear to attribute less agency to individuals than do individuals in the United
States (e.g., Iyengar and Lepper, 1999; Kitayama et al., 2004; Miller, Das, &
Chakravarthy, 2011; Morris, Nisbett & Peng, 1995; Savani et al., 2010; Specktor et al.,
2004). For instance, compared to children in the United States, Nepalese children are
more inclined to view some behaviors as constrained by social rules and therefore outside
of their control, with this gap widening with age (Chernyak et al., 2013). In a similar
vein, Indian adults appear to be less likely than U.S. adults to construe everyday
behaviors as choices (Savani et al., 2010). Of clearest relevance to the present studies,
some work suggests that Christians tend to attribute more control to others over deviant
mental states (e.g., consciously entertaining thoughts of having an affair) than do Jews,
thus showing evidence for cultural moderation with respect to mental states in particular
(Cohen & Rozin, 2001). In light of this sort of evidence, we should not automatically
assume that the results from our studies will replicate across different cultural or religious
contexts.
Although we are uncertain as to whether our findings will generalize to all
cultures, they do suggest an important direction for cross-cultural work.
Specifically, future work measuring attributions of belief control should distinguish
between lay theories of belief control and the introspective experience of belief control.
One virtue of measuring both is that we may expect different amounts of variation
between these two measures of control across cultures. For instance, assuming that
beliefs are indeed uncontrollable to a significant degree (see above), we should expect
that the felt-experience of low control will vary little from culture to culture. By contrast,
the lay theory of belief, which may be influenced by highly variable norms (e.g., religious
norms, Cohen & Rozin, 2001), or folk theories of agency (see paragraph above), may be
more likely to vary across cultures. For this reason, we speculate that self-other
differences in belief control are most likely to arise in cultures where the lay theory of
belief posits high control, as it is in these cultures where this lay theory will most likely
diverge from the felt-experience of belief.
Another limitation in our studies regards the limited range of beliefs that we
sampled. The beliefs in Studies 3.1-3.3 were highly abstract, complex, or value-laden
(e.g., belief in God, the correct policy for genetically modified foods, the wrongness of
not returning money to its rightful owner). We addressed this in Studies 3.4-3.5 by using
beliefs that subjects themselves provided – specifically, the first beliefs that came to
mind. This yielded a considerably wider sampling of belief contents (see Table 3.2 for a
list of examples). Yet, it still leaves open the question of how people reason about their
own control relative to that of others for very simple, concrete beliefs (e.g., “there is a
two thirds chance of pulling a marble out of the bucket,” “there is a quarter in my
pocket,” “it is raining”). We are ambivalent about whether to expect the same
discrepancy in cases such as these. It may be that the self-other difference is attenuated or
eliminated given that the relevant constraints on belief change are far more apparent for
beliefs of this sort. Continuing to delimit the bounds of the self-other discrepancy remains
a valuable goal for future research.
Finally, research should investigate whether, and when, self-other differences in
attributions of belief control extend to other mental states. Although the present paper
focuses only on the constraints on belief change, it may be that other mental states,
including desires, evaluative attitudes, and emotions, are subject to similar constraints. If
they are, then we might expect similar self-other discrepancies in perceived control –
particularly in light of past work showing that people generally attribute high control to
others over many mental states (Cusimano & Goodwin, in press). Indeed, there is
already one reason to expect the self-other discrepancy to extend to other mental states,
namely, that a person’s beliefs often play a pivotal role in determining his or her other
mental states. For instance, if someone is depressed because she believes she will not
recover from a severe illness, an observer may think she is more capable of cheering up
than she herself does, precisely because the observer judges her as more able to change
her belief about her prognosis than she does. However, whether such self-other
differences do in fact extend to other mental states awaits empirical testing.