Tuesday, December 3, 2019

For years, SAT developers & administrators have declined to say that the test measures intelligence, despite the fact that the SAT can trace its roots through the Army Alpha & Beta tests, & others

What We Know, Are Still Getting Wrong, and Have Yet to Learn about the Relationships among the SAT, Intelligence and Achievement. Meredith C. Frey. J. Intell. 2019, 7(4), 26; December 2 2019, https://doi.org/10.3390/jintelligence7040026

Abstract: Fifteen years ago, Frey and Detterman established that the SAT (and later, with Koenig, the ACT) was substantially correlated with measures of general cognitive ability and could be used as a proxy measure for intelligence (Frey and Detterman, 2004; Koenig, Frey, and Detterman, 2008). Since that finding, replicated many times and cited extensively in the literature, myths about the SAT, intelligence, and academic achievement continue to spread in popular domains, online, and among some academic administrators. This paper reviews the available evidence about the relationships among the SAT, intelligence, and academic achievement, dispels common myths about the SAT, and points to promising future directions for research in the prediction of academic achievement.

Keywords: intelligence; SAT; academic achievement

2. What We Know about the SAT


2.1. The SAT Measures Intelligence

Although the principal finding of Frey and Detterman has been established for 15 years, it bears repeating: the SAT is a good measure of intelligence [1]. Despite scientific consensus around that statement, some are remarkably resistant to accepting the evidence for such an assertion. In the wake of a recent college admissions cheating scandal, Shapiro and Goldstein reported, in a piece for the New York Times, “The SAT and ACT are not aptitude or IQ tests” [6]. While perhaps this should not be alarming, as the authors are not experts in the field, the publication reached more than one million subscribers in the digital edition (the article also appeared on page A14 in the print edition, reaching hundreds of thousands more). And the claim is false: not a matter of opinion, but directly contradicted by evidence.
For years, SAT developers and administrators have declined to call the test what it is; this despite the fact that the SAT can trace its roots through the Army Alpha and Beta tests and back to the original Binet test of intelligence [7]. This is not to say that these organizations directly refute Frey and Detterman; rather, they are silent. On the ETS website, the word intelligence does not appear on the pages containing frequently asked questions, the purpose of testing, or the ETS glossary. If one were to look at the relevant College Board materials (and this author did, rather thoroughly), there are no references to intelligence in the test specifications for the redesigned SAT, the validity study of the redesigned SAT, the technical manual, or the SAT understanding scores brochure.
Further, while writing this paper, I entered the text “does the SAT measure intelligence” into the Google search engine. Of the first 10 entries, the first (an advertisement) was a link to the College Board for scheduling the SAT, four were links to news sites offering mixed opinions, and fully half were links to test prep companies or authors, all of whom indicated the test is not a measure of intelligence. This is presumably because acknowledging the test as a measure of intelligence would decrease consumers’ belief that scores could be vastly improved with adequate coaching (even though there is substantial evidence that coaching does little to change test scores). One test prep book author’s blog was also the “featured snippet”, or the answer highlighted for searchers just below the ad. In the snippet, the author made the claim that “The SAT does not measure how intelligent you are. Experts disagree whether intelligence can be measured at all, in truth” [8]. Little wonder, then, that there is such confusion about the test.

2.2. The SAT Predicts College Achievement

Again, an established finding bears repeating: the SAT predicts college achievement, and a combination of SAT scores and high school grades offers the best prediction of student success. In the most recent validity sample of nearly a quarter million students, SAT scores and high school GPA combined offered the best prediction of first-year GPA for college students. Including SAT scores in regression analyses yielded a roughly 15% increase in predictive power over using high school grades alone. Additionally, SAT scores improved the prediction of student retention to the second year of college [9]. Yet many are resistant to using standardized test scores in admissions decisions, and, as a result, an increasing number of schools are becoming “test optional”, meaning that applicants are not required to submit SAT or ACT scores to be considered for admission. But, without these scores, admissions officers lose an objective measure of ability and the best option for predicting student success.
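To make the incremental-validity logic concrete, here is a minimal sketch (not the College Board's analysis) of how one would quantify the added predictive power of SAT scores over high school GPA with a two-step regression. The data are simulated and all coefficients are illustrative assumptions; the published 15% figure comes from the validity study's own sample and corrections.

```python
# Illustrative sketch: incremental validity of SAT scores over high school GPA.
# Simulated data only; the coefficients below are assumptions, not published values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)                                # latent ability (unobserved)
hs_gpa  = 0.6 * ability + rng.normal(scale=0.8, size=n)
sat     = 0.7 * ability + rng.normal(scale=0.7, size=n)
fy_gpa  = 0.5 * ability + 0.2 * hs_gpa + rng.normal(scale=0.8, size=n)

# Model 1: high school GPA alone
m1 = sm.OLS(fy_gpa, sm.add_constant(hs_gpa)).fit()
# Model 2: high school GPA plus SAT
m2 = sm.OLS(fy_gpa, sm.add_constant(np.column_stack([hs_gpa, sat]))).fit()

print(f"R2 (HS GPA only):    {m1.rsquared:.3f}")
print(f"R2 (HS GPA + SAT):   {m2.rsquared:.3f}")
print(f"Relative gain in R2: {(m2.rsquared - m1.rsquared) / m1.rsquared:.1%}")
```

The relative gain printed in the last line is the kind of "percent increase in predictive power" quoted above, though the exact published figure depends on the sample and the corrections (e.g., for range restriction) used in the validity study.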

2.3. The SAT Is Important to Colleges

Colleges, even nonselective ones, need to identify those individuals whose success is most likely, because that guarantees institutions a consistent revenue stream and increases retention rates, seen by some as an important measure of institutional quality. Selective and highly selective colleges further need to identify the most talented students because those students (or, rather, their average SAT scores) are important for the prestige of the university. Indeed, the correlation between average SAT/ACT scores and college ranking in U.S. News & World Report is very nearly 0.9 [10,11].

2.4. The SAT Is Important to Students

Here, it is worth recalling the reason the SAT was used in admissions decisions in the first place: to allow scholarship candidates to apply for admission to Harvard without attending an elite preparatory school [7]. Without an objective measure of ability, admissions officers are left with assessing not just the performance of the student in secondary education, but also the quality of the opportunities afforded to that student, which vary considerably across the secondary school landscape in the United States. Klugman analyzed data from a nationally representative sample and found that high school resources are an important factor in determining the selectivity of colleges that students apply for, both in terms of programmatic resources (e.g., AP classes) and social resources (e.g., socioeconomic status of other students) [12]. It is possible, then, that relying solely on high school records will exacerbate rather than reduce pre-existing inequalities.
Of further importance, performance on the SAT predicts the probability of maintaining a 2.5 GPA (a proxy for good academic standing) [9]. Universities can be rather costly, and admitting students with little chance of success, who either leave of their own accord or are removed for academic underperformance, with no degree to show and potentially large amounts of debt, is hardly the most just solution.


3. What We Get Wrong about the SAT

Nearly a decade ago, Kuncel and Hezlett provided a detailed rebuttal to four misconceptions about the use of cognitive abilities tests, including the SAT, for admissions and hiring decisions: (1) a lack of relationship to non-academic outcomes, (2) predictive bias in the measurements, (3) a problematically strong relationship to socioeconomic status, and (4) a threshold in the measures, beyond which individual differences cease to be important predictors of outcomes [13]. Yet many of these misconceptions remain, especially in opinion pieces, popular books, blogs, and more troublingly, in admissions decisions and in the hearts of academic administrators (see [14] for a review for general audiences).

3.1. The SAT Mostly Measures Ability, Not Privilege

SAT scores correlate moderately with socioeconomic status [15], as do other standardized measures of intelligence. Contrary to some opinions, the predictive power of the SAT holds even when researchers control for socioeconomic status, and this pattern is similar across gender and racial/ethnic subgroups [15,16]. Another popular misconception is that one can “buy” a better SAT score through costly test prep. Yet research has consistently demonstrated that it is remarkably difficult to increase an individual’s SAT score, and the commercial test prep industry capitalizes on, at best, modest changes [13,17]. Short of outright cheating on the test, an expensive and complex undertaking that may carry unpleasant legal consequences, high SAT scores are generally difficult to acquire by any means other than high ability.
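As a brief illustration of what "controlling for socioeconomic status" means operationally in studies like [15,16], the sketch below computes a partial correlation by residualizing both predictor and outcome on SES. The data are simulated and the variable names and effect sizes are placeholders; this shows only the residualization logic, not the published analyses.

```python
# Illustrative partial correlation: association between SAT and college GPA
# after removing the variance each shares with SES. Simulated data; all
# coefficients are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
ses     = rng.normal(size=n)
ability = rng.normal(size=n)
sat     = 0.8 * ability + 0.3 * ses + rng.normal(scale=0.6, size=n)
col_gpa = 0.6 * ability + 0.1 * ses + rng.normal(scale=0.8, size=n)

def residualize(y, x):
    """Return residuals of y after regressing it on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_raw     = np.corrcoef(sat, col_gpa)[0, 1]
r_partial = np.corrcoef(residualize(sat, ses), residualize(col_gpa, ses))[0, 1]
print(f"zero-order r(SAT, GPA):    {r_raw:.2f}")
print(f"partial r(SAT, GPA | SES): {r_partial:.2f}")
```

If the partial correlation stays substantial after SES is partialed out, the test's predictive power is not merely a proxy for family background, which is the pattern the cited studies report.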

That is not to say that the SAT is a perfect measure of intelligence, or only measures intelligence. We know that other variables, such as test anxiety and self-efficacy, seem to exert some influence on SAT scores, though not as much influence as intelligence does. Importantly, though, group differences demonstrated on the SAT may be primarily a product of these noncognitive variables. For example, Hannon demonstrated that gender differences in SAT scores were rendered trivial by the inclusion of test anxiety and performance-avoidance goals [18]. Additional evidence indicates some noncognitive variables—epistemic belief of learning, performance-avoidance goals, and parental education—explain ethnic group differences in scores [19] and variables such as test anxiety may exert greater influence on test scores for different ethnic groups (e.g., [20], in this special issue). Researchers and admissions officers should attend to these influences without discarding the test entirely.

Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design

Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design. Victoria Wai-lan Yeung, Andrew Geers, Simon Man-chun Kam. Current Psychology, February 2019, Volume 38, Issue 1, pp 194–203. https://link.springer.com/article/10.1007/s12144-017-9601-0

Abstract: An experiment was conducted to examine whether the mere possession of a placebo analgesic cream would affect perceived pain intensity in a laboratory pain-perception test. Healthy participants read a medical explanation of pain aimed at inducing a desire to seek pain relief and then were informed that a placebo cream was an effective analgesic drug. Half of the participants were randomly assigned to receive the cream as an unexpected gift, whereas the other half did not receive the cream. Subsequently, all participants performed the cold-pressor task. We found that participants who received the cream but did not use it reported lower levels of pain intensity during the cold-pressor task than those who did not receive the cream. Our findings constitute initial evidence that simply possessing a placebo analgesic can reduce pain intensity. The study represents the first attempt to investigate the role of mere possession in understanding placebo analgesia. Possible mechanisms and future directions are discussed.

Keywords: Placebo effect; Mere possession; Cold pressor; Placebo analgesia; Pain

Discussion
Past research has demonstrated that placebo analgesics can increase pain relief. The primary focus was on pain relief that occurred following the use of the placebo-analgesic treatment. We tested the novel hypothesis that merely possessing a placebo analgesic can boost pain relief. Consistent with this hypothesis, participants who received but did not use what they were told was a placebo-analgesic cream reported lower levels of pain intensity in a cold-pressor test than did participants who did not possess the cream. To our knowledge, the present data are the first to extend research on the mere-possession phenomenon (Beggan 1992) to the realm of placebo analgesia.

Traditional placebo studies have included both possessing and consuming: Participants first possess an inert object, then they consume or use it and report diminished pain as a consequence (Atlas et al. 2009; de la Fuente-Fernández et al. 2001; Price et al. 2008; Vase et al. 2003). The current study provided initial evidence that consuming or using the placebo analgesic is unnecessary for the effect. However, it remains possible that the effect would be enhanced were possession to be accompanied by consumption or use. This and related hypotheses could be tested in future studies.

In the current experiment, we measured several different variables (fear of pain, dispositional optimism, desire for control, suggestibility, and trait anxiety) that could be considered potential moderators of the observed placebo-analgesia effect. However, none of them proved significant. Although we remain unsure of the processes responsible for the mere-possession effect we observed, a previously offered account may be applicable. Specifically, participants’ pain reduction may have been induced by a positive expectation of pain relief that was mediated by an elevated perception of self-efficacy in coping with pain (see Peck and Coleman 1991; Spanos et al. 1989). To directly test this possibility in further research, it would be important to measure participants’ self-perceived analgesic efficacy in relation to the mere-possession effect.

It is possible that the mere possession of what participants were told was an analgesic cream induced positive affect through the receipt of a free gift. That affect may have influenced participants’ perceived pain intensity. In order to test this possibility, we looked more closely at an item in the State-Anxiety Subscale (Spielberger et al. 1983), specifically, “I feel happy”. Participants in the mere-possession condition did not feel happier (M = 2.47, SD = .96) than those in the no-possession condition (M = 2.80, SD = .70), t(37) = 1.22, p = .23, d = .38, CI95% = [−0.24, 1.00]. Nevertheless, since the participants completed the State-Anxiety Subscale after they received the cream and following the pain-perception test, in order to strictly delineate the effect of affect from other factors, future research should measure participants’ mood after they receive the cream and prior to the pain-perception test. In our study, participants’ pain reduction could not be attributed to the mere-exposure effect because participants in both conditions were initially exposed to the sample of the cream simultaneously. The only difference between the two conditions was that participants in the mere-possession condition were subsequently granted ownership of the sample cream, whereas participants in the no-possession condition were not.
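For readers who want to see how the reported comparison works arithmetically, the sketch below recovers t and Cohen's d from the summary statistics quoted above. The per-group sample sizes (19 and 20) are an assumption consistent with df = 37, since the excerpt does not state them.

```python
# Recompute the independent-samples t and Cohen's d for the "I feel happy" item
# from the summary statistics in the text. Group sizes are assumed (df = 37).
import math

m1, sd1, n1 = 2.47, 0.96, 19   # mere-possession condition (n assumed)
m2, sd2, n2 = 2.80, 0.70, 20   # no-possession condition (n assumed)

# Pooled standard deviation and Cohen's d
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m2 - m1) / sp

# Student's t for independent samples, equal variances assumed
t = (m2 - m1) / (sp * math.sqrt(1 / n1 + 1 / n2))

print(f"pooled SD = {sp:.2f}, d = {d:.2f}, t({n1 + n2 - 2}) = {t:.2f}")
# Close to the reported t(37) = 1.22, d = .38 (small rounding differences).
```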

A significant group difference in pain perception appeared in the analysis of the MPQ results but not those from the VAS. There are at least two possible reasons for this outcome. First, prior researchers had demonstrated that the VAS is sensitive to changes in perceived pain when participants are asked to continuously report their pain intensity (Joyce et al. 1975; Schafer et al. 2015). In our study, participants reported their pain intensity only once. Whether a significant group difference would be observed if the VAS were administered several times within the 1-min immersion duration is presently unknown. Second, it should be noted that the VAS may not be sensitive to Asians’ pain perception (Yokobe et al. 2014). No similar observation has been made about results from the use of the MPQ.

Our findings add to the placebo-analgesia literature by indicating potential directions for further research, including limitations of our study that will need to be considered. First, we induced participants to seek the reduction of pain and to anticipate the effectiveness of the placebo. Doing so may have optimized the incidence of the mere-possession effect. Second, although our data demonstrated that the effect we observed was not due to a positive feeling in response to receiving a free gift, future studies might involve a control condition in which the gift is not purported to relieve pain. Third, our participants were healthy university students of Chinese ethnicity. Prior research has shown that cultural background influences pain perception (Callister 2003; Campbell and Edwards 2012). Future researchers may extend the ethnic and cultural range of the participants in an effort to generalize the current findings. Moreover, it seems critical to conduct future research with clinical patients who are in demonstrable pain. Lastly, it is unclear whether the mere-possession effect extends to other types of pain-induction tasks, such as those involving heat (e.g., Mitchell et al. 2004; Duschek et al. 2009) or loud noise (e.g., Brown et al. 2015; Rose et al. 2014).

A message coming from behind is interpreted as more negative than a message presented in front of a listener; social information presented from behind is associated with uncertainty and lack of control

Rear Negativity: Verbal Messages Coming from Behind are Perceived as More Negative. Natalia Frankowska, Michal Parzuchowski, Bogdan Wojciszke, Michał Olszanowski, Piotr Winkielman. European Journal of Social Psychology, 29 November 2019. https://doi.org/10.1002/ejsp.2649

Abstract: Many studies have explored the evaluative effects of vertical (up/down) or horizontal (left/right) spatial locations. However, little is known about the role of information that comes from the front and back. Based on multiple theoretical considerations, we propose that spatial location of sounds is a cue for message valence, such that a message coming from behind is interpreted as more negative than a message presented in front of a listener. Here we show across a variety of manipulations and dependent measures that this effect occurs in the domain of social information. Our data are most compatible with theoretical accounts which propose that social information presented from behind is associated with uncertainty and lack of control, which is amplified in conditions of self‐relevance.

Excerpts:

General Discussion

Rear Negativity Effect in Social Domain
The present series of studies document a “rear negativity effect” – a phenomenon where perceivers evaluate social information coming from a source located behind them as more negative than identical information coming from a source located in front of them. We observed this effect repeatedly for a variety of verbal messages (communications in a language incomprehensible to the listeners, neutral communications, positive or negative words spoken in participants’ native language), for a variety of dependent variables (ratings, reaction times), and among different subject populations (Poland, US). Specifically, in Study 1, Polish subjects interpreted Chinese sentences as more negative when presented behind the listener. In Study 2, Polish subjects evaluated feedback from a bogus test as indicative of poorer results when it was presented behind, rather than in front of them. In Study 3, Polish subjects evaluated the Chinese sentences as the most negative when they were played from behind and when they supposedly described in-group (i.e., Polish) members. In Study 4, US subjects judged negative traits more quickly when the traits were supposedly describing self-relevant information and were played behind the listener.

Explanations of the effect

The current research extends previous findings that ecological, naturally occurring, sounds are detected quicker and induce stronger negative emotions when presented behind participants (Asutay & Västfjäll, 2015). Critically, the current studies document this effect in the domain of social information and show it to be stronger or limited to processing of self-relevant information, whether this relevance was induced by reference of messages to the self or to an in-group. Our characterization of the “rear negativity” effect in the social domains is compatible with several considerations and theoretical frameworks. Most generally, the effect is consistent with a notion common in many cultures that things that take place “behind one’s back” are generally negative. However, the accounts of why this is vary – ranging from metaphor theory, simple links between processing ease and evaluation, affordance and uncertainty theories, attentional as well as emotion-appraisal accounts.

Spatial metaphors. People not only talk metaphorically, but also think metaphorically, activating mental representations of space to scaffold their thinking in a variety of non-spatial domains, including time (Torralbo et al., 2006), social dominance (Schubert, 2005), emotional valence (Meier & Robinson, 2004), similarity (Casasanto, 2008), and musical pitch (Rusconi et al., 2006). Thus, it is interesting to consider how our results fit with spatial metaphor theories. Specifically, perhaps when people hear a message, they activate a metaphor and, as a result, evaluate the information as being more unpleasant, dishonest, disloyal, false, or secretive when coming from behind than from the front. Our results suggest that reasons for the rear negativity of verbal information go beyond simple metaphorical explanation. This is because this negativity occurs solely or is augmented for information that is personally relevant to the listener, and that it occurs even in paradigms that require fast automatic processing, leaving little time for activation of a conceptual metaphor. Of course, once the valence-to-location mapping is metaphorically established, it could manifest quickly and be stronger for personally-important information. In short, it would be useful to conduct further investigation of the metaphorical account, perhaps by manipulating the degree of metaphor activation, its specific form or its relevance.

Associative learning and cultural interpretations. The valence-location link could have metaphorical origins but could also result from an individual’s personal experiences that create a mental association (Casasanto, 2009). One could potentially examine an individual’s personal history and her cultural setting and see whether a rear location has been linked to negative events. Specifically, everyday experiences could lead to a location-valence association. For example, during conversations an individual may have encountered more high-status people in front of her rather than behind, thus creating an association of respect and location. Or, the individual may have experienced more sounds from behind that are associated with criticism or harassment rather than compliments. Beyond an individual’s own associative history, there is also culture. For example, European cultures used to have a strong preference for facing objects of respect (e.g., not turning your back to the monarch, always facing the church altar). As a result, sounds coming from behind may be interpreted as coming from sources of less respect. More complex interpretative processes may also be involved. As discussed in the context of Study 3, hearing from behind from an out-group about one’s own group can increase the tendency to attribute negative biases to the outgroup. It can then lead to interpreting the outgroup’s utterances as being more critical or even threatening, especially when such utterances are negative (e.g., Judd et al., 2005; Yzerbyt, Judd, & Muller, 2009). However, these speculations are clearly post-hoc and further research is needed to understand the full pattern of results.

Fluency. One simple mechanistic explanation of the current results draws on the idea that difficult (disfluent) processing lowers stimulus evaluations, while easy (fluent) processing enhances evaluations (Winkielman et al., 2003). People usually listen to sounds positioned in front. So, it is possible that sounds coming from behind are perceived as more negative because they are less fluent (or less familiar). However, fluency, besides increasing the experience of positive affect, is also manifested through the speed of processing (i.e., fluent stimuli are recognized faster). Yet, it is worth mentioning that in Study 4 we did not observe an effect of location on overall reaction times. Moreover, previous research suggests that, if anything, information presented from behind is processed faster (Asutay & Västfjäll, 2015). For these reasons, and because the effect is limited to self-relevant information, the fluency approach does not explain the presented effects. However, future research may consider the potential of fluency manipulations to reduce the rear negativity effect.

Affordances. Yet another possible explanation draws on classic affordance theory, suggesting that the world is perceived not only in terms of objects and their spatial relationships, but also in terms of one’s possible actions (Gibson, 1950, 1966). Thus, verbal information located behind the listener may restrict his or her possible actions and hence may cause negative evaluation. However, this explanation is weakened by our observation that the rear negativity effect also appears when participants are seated and blindfolded, so they cannot see what is in front of them. Further examination of this account could include a set-up that involves restricting participants’ hands or using virtual reality to manipulate perspective and embodied affordances.

Monday, December 2, 2019

Neural correlates of moral goodness & moral beauty judgments: Moral beauty judgment induced greater brain activity implicated in theory of mind, suggesting that it requires understanding others' mental states, unlike moral goodness


Neural correlates of moral goodness and moral beauty judgments. Qiuping Cheng et al. Brain Research, Volume 1726, January 1 2020, 146534. https://doi.org/10.1016/j.brainres.2019.146534

Highlights
•    Moral goodness judgment and moral beauty judgment recruited common brain activity in the left inferior OFC.
•    Moral goodness judgment mainly relied on the emotional facet of moral cognition, but moral beauty judgment relied on both the rational and emotional components of moral cognition.
•    Moral beauty judgment induced greater brain activity implicated in theory of mind (ToM), suggesting that it requires understanding others' mental states, whereas moral goodness judgment does not.
•    Moral beauty judgment also activated a cortical network that is considered mainly responsible for the processing of empathy, indicating that it involves empathic concerns for others.
•    The brain harbors neural systems for common and for domain-specific evaluations for moral goodness and moral beauty judgments.

Abstract: The objects of moral goodness and moral beauty judgments both generally refer to the positive moral acts or virtues of humans, and goodness must precede moral beauty. The main difference is that moral beauty, but not moral goodness, triggers emotional elevation. However, little is known about the neural mechanisms involved in the two judgments. In the current study, 28 healthy female participants were scanned while they rated how good and how beautiful the positive moral acts of daily life depicted in scene drawings were, in order to investigate the neural systems supporting moral goodness and moral beauty, and specifically to test whether the neural activity associated with moral beauty is the same as, or different from, that associated with moral goodness. The conjunction analysis of the contrasts between moral goodness judgment and moral beauty judgment identified the involvement of the left inferior orbitofrontal cortex (OFC), suggesting that the two judgments recruited the activity of a common brain region. Importantly, compared with the moral goodness judgment, the moral beauty judgment induced greater activity in more advanced cortical regions implicated in elevated emotions, including the anterior cingulate cortex (ACC), medial prefrontal cortex (mPFC), superior frontal gyrus (SFG) and the left temporo-parietal junction (TPJ). These regions have been strongly correlated with the cognitive aspects of moral cognition, including theory of mind (ToM). In addition, moral beauty judgment also activated brain regions implicated in empathy, including the midline structures and the anterior insula. Based on these results, the brain harbors neural systems for common and for domain-specific evaluations in moral goodness and moral beauty judgments. Our study thus provides novel and compelling neural evidence for the essence of moral beauty and advances the current knowledge of the neural mechanisms underlying the beauty-is-good stereotype.


1. Introduction

Imagine the following scenario: A young man is helping an elderly person cross the street. Now, please answer the first question: do you think this young man is morally good? Then, for the same behavior, please answer another question: do you think this young man displays inner beauty? Comparing these two questions, the first question is actually a moral goodness judgment in which an observer endows moral values to certain individual based on principles that have become a general law (Kant, (1785/1993), Haidt, 2007) and judges him a morally good person based on quick intuition – gut feelings (Haidt and Bjorklund, 2008, Hume, (1777/2006)., Greene, 2007, Greene, 2017). Meanwhile, the second question essentially reflects a moral beauty judgment in which an observer identifies a kind-natured person as displaying inner beauty based on an understanding of social rules and involves highly developed moral emotions (Diessner et al., 2006, Diessner et al., 2008, Haidt, 2003a, Haidt, 2003b, Haidt, 2007, Keltner and Haidt, 2003, Wang et al., 2015). Morality is considered as the sets of customs and values that are embraced by humans to guide social conduct (Moll et al., 2005) and is a product of evolutionary pressure that have shaped social cognitive and motivational mechanisms (Schulkin, 2000). Moral goodness and moral beauty judgments both belong to moral judgments, which are strongly influenced by emotional drives rooted in socio-emotional instincts (Darwin, 1874/1997), and they share a very important characteristic, that is, the object of both judgments is universal human virtues, for example, a kindness virtue of helping others in the scenario described above. These similarities prompt a critical question: Are the moral goodness judgment and moral beauty judgment the same? What about the neural correlates of two judgments?

Most objects of beauty judgments are pleasing to the eye and ear (Chatterjee and Vartanian, 2016), but another type of beauty, inner beauty, which is also delicately called moral beauty, has been identified (Haidt and Keltner, 2004). Moral beauty primarily addresses the natural beauty of persons and the beauty of their acts, states of character, and the like (Kosman, 2010), and emphasizes beauty as inherent to the individual’s inward appearance. It expresses several positive and culturally universal virtues that are independent of perceivable physical forms, such as wisdom, courage, humanity (love and kindness), and justice (Peterson and Seligman, 2004), similar to the object of moral goodness. Haidt and Keltner (2004) defined moral beauty as “the ability to find, recognize, and take pleasure in the existence of goodness in the social worlds” (p.537). Moral beauty and moral goodness are closely related, because the virtues of moral beauty are also generally considered signs of moral goodness (Diessner et al., 2006). According to Haidt and Keltner (2004), moral beauty is experienced when faced with displays of virtue or moral goodness in the social world (Haidt and Keltner, 2004, Güsewell and Ruch, 2012). Moral acts with virtues are “positive stimuli” and usually viewed as good and beautiful. Each is pleasant and offers potential for reward. According to Darwin, moral judgments are influenced by emotional processes, and these ‘social-emotional instincts’ are central for the feeling of pleasure when helping others and of unease when harming others (Darwin, 1874/1997). Individuals with inner beauty are not necessarily externally beautiful, but they must be morally good, because if beauty is not applied to a work of art, goodness must precede beauty and neither arises as a consequence of beauty nor is defined teleologically as a means to its accomplishment (Kosman, 2010). It seems that moral beauty and moral goodness are the same, at least, there is a partial overlap between the two judgments.

The remarks that “what is beautiful is good…” (Sappho, Fragments, No. 101) and that “physical beauty is the sign of an internal beauty, a spiritual and moral beauty…” (Schiller, 1882) indicate that the beauty-is-good stereotype is prevalent in human society (Berscheid and Walster, 1974, Dion et al., 1972, Tsukiura and Cabeza, 2011). For example, people with a better physical appearance receive more positive evaluations from others (Eagly et al., 1991, Feingold, 1992) and more preferential treatment in many aspects of life (Langlois et al., 2000), such as greater hiring opportunities (Marlowe et al., 1996) and earning a better income (Hamermesh and Biddle, 1994). Thus, beautiful individuals appear to be more positively associated with a socially desirable personality and moral traits (Dion et al., 1972, Eagly et al., 1991, Feingold, 1992, Langlois et al., 2000). The reliance on attractive features to infer moral character suggests a close relationship between beauty and moral valuation (Ferrari et al., 2017). One is capable of, at first sight, considering another human being attractive or unattractive while at the same time assigning values to that person. Tsukiura and Cabeza (2011) scanned participants with functional magnetic resonance imaging (fMRI) while they made attractiveness judgments about faces and goodness judgments about hypothetical acts to investigate the neural mechanisms underlying the relationship between beauty judgment and moral judgment. Both higher attractiveness ratings of faces and higher goodness ratings of moral acts correlated with increased activation of the orbitofrontal cortex (OFC). Takahashi et al. (2008) and Wang et al. (2015) also observed relationships between the neural correlates of beauty judgment and moral feeling with the activation of the OFC. The OFC is known as a region associated with processing positive emotions and reward, with its activity increasing as a function of beauty and moral goodness. These functional imaging studies support the idea of a similar contribution of the OFC to beauty and moral goodness judgments. However, no study has directly investigated whether the moral beauty judgment also shares the same neural correlate as the moral goodness judgment.

Although moral beauty objectively refers to the same human act or virtue as moral goodness, they differ in terms of the emotional response and motivation (Diessner et al., 2006, Diessner et al., 2008, Keltner and Haidt, 2003). Diessner et al. (2008) conceive “responsiveness to goodness and beauty” as a continuum, stretching from cognitive appreciation to deep engagement, with all imaginable intermediate degrees of emotional involvement. Cognitive appreciation without engagement is conceivable, but engagement without appreciation is not. When identifying a positive act of one person as moral goodness, an observer quickly assesses the morality of a certain behavior but remains emotionally unmoved and un-elevated (Diessner et al., 2006, Güsewell and Ruch, 2012). It seems that the emotional process influences moral goodness and moral beauty judgments in different ways. An act of moral goodness is cognitively experienced as such without emotional involvement (Diessner et al., 2008). However, when judging the same act of moral goodness as moral beauty, the observer’s emotions are presumed to have been moved and elevated after observing manifest moral virtues in a human act (Diessner et al., 2006, Haidt, 2003a, Haidt, 2003b, Haidt, 2007, Keltner and Haidt, 2003, Pohling and Diessner, 2016).

Moral elevation is the emotional response to witnessing acts of moral beauty and is particularly elicited by moral beauty. It triggers a distinctive feeling in the chest of warmth and expansion and causes a generalized desire to become a better person oneself and creates a tendency toward prosocial actions that trigger similar good behaviors in a similar scenario (Algoe and Haidt, 2009, Diessner et al., 2013, Haidt, 2003a, Haidt, 2003b, Haidt and Keltner, 2004, Schnall et al., 2010, Shiota et al., 2014, Van de Vyver and Abrams, 2015, Greene, 2017). Moral elevation directs the focus on others (Haidt, 2003) by understanding their mental states (i.e., theory of mind, ToM) and vicariously experiencing similar feelings (i.e., empathy). With regard to cognitive changes, it was assumed (Haidt, 2003) and found (Freeman et al., 2009) that elevation induces optimistic thoughts of people and humanity (Pohling and Diessner, 2016). Researchers have observed the recruitments of brain regions often implicated in mentalizing (ToM and empathy) behavior (Bzdok et al., 2012, Farrow et al., 2001, Han and Northoff, 2008, Harenski et al., 2012, Johnson et al., 2002, Maratos et al., 2001, Moll et al., 2002, Moll et al., 2005, Reniers et al., 2012, Saxe and Kanwisher, 2003) under the condition of moral elevation, including the medial prefrontal cortex (mPFC) and bilateral temporo-parietal junction (TPJ) implicated in ToM; the midline structures (anterior and posterior cingulate, precuneus) and the anterior insula implicated in empathy (Bzdok et al., 2012, Englander et al., 2012, Immordino-Yang et al., 2009, Lewis, 2014, Piper et al., 2015). According to recent neuropsychological evidence, moral beauty not only involves the OFC (Takahashi et al., 2008), which is activated in both external beauty and moral goodness judgments (Tsukiura and Cabeza, 2011), but also recruits the activity of similar brain regions as moral elevation, including the mPFC, anterior cingulate cortex (ACC), precuneus/PCC, insula and TPJ (Luo et al., 2019, Wang et al., 2015, Wen et al., 2017). However, researchers did not directly conclude a correlation between the activation of these brain regions involved in moral beauty with emotional elevation.

The beauty-is-good stereotype has been the topic of many social cognition studies; however, most focused on the dynamic relationship between the external beauty judgment and the moral goodness judgment. The similarities and differences in the neural correlates of the moral beauty judgment and the moral goodness judgment remain unknown. To the best of our knowledge, no previous study has directly compared brain activity during both judgments. Therefore, we designed an fMRI experiment to address this issue. During the scan, scene drawings depicting positive moral acts of different behavioral levels in daily life were presented on the computer screen while participants rated the extent of moral goodness (MG) and moral beauty (MB) displayed by the main character in the scene drawings. Then, we entered the rating scores of each participant as parametric modulators in SPM12 to identify regions in which activity changed as a function of behavioral ratings. With this parametric approach, we were able to locate the brain regions involved in the moral beauty judgment and the moral goodness judgment and further explore the similarities and differences between them. Correspondingly, two hypotheses were generated. According to the findings reported in the previous literature, brain activity in the OFC overlapped not only between the external beauty and moral goodness judgments (Tsukiura and Cabeza, 2011) but also between external beauty and moral beauty judgments (Luo et al., 2019, Wang et al., 2015). We initially hypothesized that the OFC would also be a shared brain region associated with moral beauty and moral goodness judgments, because both judgments refer to the same virtuous acts that have been approved by social culture. Virtuous acts are “positive stimuli”, which usually lead to positive emotions and then facilitate appropriate behavioral responses to potential rewards (Fredrickson, 1998). According to neuroimaging studies, abstract rewards induced by social approval from others recruit the OFC (Haber and Knutson, 2010). Additionally, because moral beauty triggers emotional elevation and moral elevation activates brain regions involved in ToM and empathic processes, we hypothesized that the moral beauty judgment would also recruit these advanced brain regions that are responsible for processing others’ mental states, including the mPFC and TPJ, and for sharing others’ emotions, including the ACC, precuneus/PCC, and anterior insula.
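The parametric-modulation analysis described above was run in SPM12 (MATLAB); purely as a conceptual illustration, the Python sketch below shows how trial-wise ratings can be mean-centered and turned into a modulated regressor alongside the main task regressor. The TR, onsets, ratings, and single-gamma HRF are simplified assumptions, not the authors' actual design.

```python
# Conceptual sketch of a parametric modulator: a task regressor plus a regressor
# whose event heights track mean-centered trial ratings, both convolved with an HRF.
# All numbers (TR, onsets, ratings, HRF shape) are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 120
onsets  = np.array([10, 40, 70, 100, 130, 160, 190, 220])   # event onsets in seconds
ratings = np.array([3, 5, 2, 4, 6, 1, 5, 3], dtype=float)    # e.g., trial-wise beauty ratings

# Simple single-gamma HRF sampled at the TR (a simplification of SPM's double gamma)
t_hrf = np.arange(0, 30, TR)
hrf = gamma.pdf(t_hrf, a=6)
hrf /= hrf.sum()

# Stick functions on the scan grid
main = np.zeros(n_scans)
mod  = np.zeros(n_scans)
idx = (onsets / TR).astype(int)
main[idx] = 1.0
mod[idx]  = ratings - ratings.mean()   # mean-centering so the modulator captures
                                       # deviations from the average event response

# Convolve with the HRF and truncate to the scan length
main_reg = np.convolve(main, hrf)[:n_scans]
mod_reg  = np.convolve(mod,  hrf)[:n_scans]

# Design matrix: intercept, main task effect, rating-modulated effect.
# The fitted weight on mod_reg indexes activity that scales with the ratings.
X = np.column_stack([np.ones(n_scans), main_reg, mod_reg])
print(X.shape)
```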

Chinese astrology, numerological preferences, timing of births and the long-term effect on schooling: People born on auspicious days are more likely to attend college

Numerological preferences, timing of births and the long-term effect on schooling. Cheng Huang, Xiaojing Ma, Shiying Zhang, Qingguo Zhao. Journal of Population Economics, December 2 2019. https://link.springer.com/article/10.1007/s00148-019-00758-1

Abstract: Cultural beliefs may affect demographic behaviors. According to traditional Chinese astrology, babies born on auspicious days will have good luck in their lifetime, whereas those born on inauspicious days will have bad luck. Using administrative data from birth certificates in Guangdong, China, we provide empirical evidence on the short-term effects of such numerological preferences. We find that approximately 3.9% extra births occur on auspicious days and 1.4% of births are avoided on inauspicious days. Additionally, there is a higher male/female sex ratio for births on auspicious days. Since such manipulation of the birthdate is typically performed through scheduled C-sections, C-section births increase significantly on auspicious days. Moreover, we use a second dataset to examine the long-term effect of numerological preferences and find that people born on auspicious days are more likely to attend college.

Keywords: Numerological preferences; Birthdate; Timed births; Chinese astrology

Singapore: Individuals’ attention to media messages about climate change and elaboration of these messages were positively related to the illusion of knowing

Does media exposure relate to the illusion of knowing in the public understanding of climate change? Xiaodong Yang, Liang Chen, Shirley S. Ho. Public Understanding of Science, September 30, 2019. https://doi.org/10.1177/0963662519877743

Abstract: By acknowledging that people are cognitive misers, this study proposes that people may rely on the illusion of knowing as a cognitive device for attitudinal or behavioral change, in addition to factual knowledge. Accordingly, this study shifted the focus of inquiry from assessing media effects in increasing factual knowledge to assessing how media consumption relates to the illusion of knowing. Using a national door-to-door survey in Singapore (N = 705), the results revealed that individuals’ attention to media messages about climate change and elaboration of these messages were positively related to the illusion of knowing. Furthermore, elaboration had moderating effects on the relationship between media attention and the illusion of knowing. These findings suggest that media consumption of climate change messages could drive the illusion of knowing, which is speculated to account for pro-environmental behaviors in addressing climate change. Theoretical and practical implications are discussed.

Keywords: elaboration, media attention, the illusion of knowing


6. Discussion 
Consistent with the cognitive miser model, this study notes that people are cognitive misers who rely on the illusion of knowing for attitude formation and behavioral change. By acknowledging the illusion of knowing as a cognitive heuristic, this study offered a distinct perspective on factors that shape individuals’ behavior in addition to factual knowledge. Moreover, instead of considering the illusion of knowing as a negative phenomenon, this study stresses its positive effects in enhancing motivation and promoting behavioral change, as the positive illusion model suggests. Given a dearth of studies looking at the illusion of knowing as a media effect, this study explored the factors relating to the illusion of knowing from a communication perspective.

The findings revealed that media attention was positively associated with the illusion of knowing. The results showed that when people paid more attention to media messages about climate change, they tended to believe that they were becoming more knowledgeable about this issue when in fact they may not have gained substantial factual understanding. This is consistent with previous research, which reported that media work better at increasing audiences’ perceived knowledge than at increasing their factual knowledge (Hollander, 1995; Mondak, 1995; Park, 2001). As an impersonal issue, climate change lies beyond one’s physical life and rarely exerts any direct effect on one’s life (Kahlor et al., 2006). Thus, when paying attention to information about climate change, one tends to learn basic facts about it rather than making an effort to figure out its underlying mechanisms. Driven by the need to maintain a positive self-image, people are led by a self-serving bias to perceive that they are becoming knowledgeable about an issue after paying attention to media messages, though they do not gain factual understanding. Thus, the more attention people pay to media messages about climate change, the more illusion of knowing they develop.

Also, this study found that media attention was positively related to the illusion of knowing in knowledge comparisons with others. When people paid more attention, they tended to perceive that they knew much more about climate change than others. This is consistent with what is suggested by self-enhancement bias in social comparison (Alicke, 1985). To maintain good feelings about the self, one tends to develop illusory superiority in social comparisons. With the need to maintain a positive self-image, people tend to amplify the benefits of the media more for increasing their own knowledge than for increasing the knowledge of others. The more attention one pays to media messages, the more this amplification develops in one’s perception. This finding matters in the context of climate change, as people tend to compare themselves with others when deciding whether to engage in collective action, such as fighting climate change. Overconfidence in the comparison would enhance individuals’ self-efficacy and motivation to take action (Taylor and Brown, 1988).

Unlike other studies emphasizing factual knowledge, this study recognized the functional value of the illusion of knowing. Our empirical findings regarding the positive relationship between media attention and the illusion of knowing offer a new angle on the role of media in climate change communication. The media have been criticized for not adequately promoting factual knowledge (Park, 2001; Tichenor et al., 1970). Although the media may not be very efficient at increasing the public’s factual knowledge about climate change, they could promote public engagement with climate change by bringing about the illusion of knowing. The complex nature of climate change makes it difficult for laymen to acquire factual knowledge. In this particular case, the illusion of knowing may serve as an alternative route for promoting public engagement with climate change. Practically, policy makers and practitioners should take advantage of the media’s role in bringing about the illusion of knowing to promote public action on climate change. For those who have difficulties acquiring factual knowledge, the illusion of knowing would be a useful device.

This study also provides empirical evidence for the positive relationship between elaboration and the illusion of knowing. First, people tend to have more illusion of knowing in self-reported perceived knowledge when they elaborate on media messages about climate change. As suggested by prior studies, elaborative processing of media messages contributes to both factual and perceived knowledge acquisition (Eveland, 2001; Lee and Ho, 2015). It is possible that gaining factual knowledge would enhance one’s confidence about one’s expertise. Thus, people would develop the illusion of knowing. Second, the results indicated that the more people elaborated on information about climate change, the more illusory superiority they developed. As discussed, gaining factual understanding from elaboration would possibly increase one’s confidence in one’s cognition. Driven by the need to maintain that confidence, people are motivated to exhibit self-enhancement bias in social comparison. Thus, people would possibly develop illusory superiority, which in turn would give rise to the false perception that they are more knowledgeable than others.

Moreover, this study examined the interaction between elaboration and media attention on the illusion of knowing. We found that elaboration magnified the positive relationship between media attention and the illusion of knowing. This is consistent with previous studies on elaboration, which suggest that elaborative processing results in greater media effects (Perse, 1990). In this case, it is possible that elaboration accelerates the media effects in increasing the illusion of knowing. Although many studies have confirmed the role of elaboration in increasing factual knowledge, very few have shed any light on its relationship with the illusion of knowing. This study filled in that research gap by empirically investigating the relationship. Besides, a further examination of the moderating effects of elaboration on the relationship between media attention and the illusion of knowing offers more insights, which contributes to the theoretical literature about information processing.

By examining the relationship between elaboration and the illusion of knowing, findings from this study are helpful for communication practitioners in tailoring specific media messages. For instance, messages should be tailored to elicit high elaborative processing, given its positive role in promoting the illusion of knowing about climate change. The information processing literature has specified that elaboration values source credibility and message quality (Chu and Kamal, 2008). Thus, messages should be designed to feature high credibility and quality. Besides, personal interests and involvement promote elaborative processing of media messages (Lee and Kim, 2016). Accordingly, media coverage of climate change should be tailored to engage the audience by triggering their interests and concerns.

In terms of theoretical contribution, this study examined how media consumption relates to inaccurate perceptions. There are many cases where people may make decisions based on perception rather than on reality. Contrary to the common sense that perception is unreliable and negative, inaccurate perception may have positive effects (Taylor and Brown, 1988). This suggests that communication researchers should look beyond media effects on factual understanding of reality. Our study offers a new angle for exploring media effects. More importantly, we examined two types of the illusion of knowing. By acknowledging the collective-action nature of the climate change issue, this study proposed that the illusion of knowing in knowledge comparisons with others should be considered in addition to the illusion of knowing in self-reported perceived knowledge. This proposition contributes to the theoretical literature on the illusion of knowing.

As with all studies, this one too has its limitations. First, our research was conducted in a single country, which may limit the generalizability of our findings. In Singapore, the public generally acknowledges the scientific consensus on human-caused climate change. Compared with citizens from contexts without such a consensus, Singaporeans show much more concern about climate change and are more knowledgeable. Correspondingly, a different pattern of the illusion of knowing about climate change is expected in other contexts. What is more, people in contexts with climate change skepticism may not pay attention to media messages about climate change at all. Thus, future studies based in areas with climate change skepticism are necessary for a comprehensive understanding of the illusion of knowing about climate change. Second, although we recognized the functional value of the illusion of knowing, we have not examined its attitudinal or behavioral outcomes in this study. Thus, future research is suggested to explore how the illusion of knowing about climate change relates to individuals’ pro-environmental behavior. In particular, a thorough examination of the illusion of knowing and factual knowledge jointly would provide much more compelling findings.
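As a rough illustration of the moderation analysis described above (elaboration amplifying the attention-illusion link), the sketch below fits an OLS model with an interaction term on simulated survey-style data. The variable names and effect sizes are invented for the example and do not reproduce the study's estimates.

```python
# Illustrative moderation model: illusion of knowing ~ attention * elaboration.
# Simulated data; the coefficients below are assumptions, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 705  # matches the survey's sample size, but the data here are synthetic
attention   = rng.normal(size=n)
elaboration = rng.normal(size=n)
# Positive main effects plus a positive interaction: the attention effect grows
# with elaboration, which is the "magnifying" pattern reported in the paper.
iok = (0.3 * attention + 0.2 * elaboration
       + 0.15 * attention * elaboration
       + rng.normal(scale=1.0, size=n))

df = pd.DataFrame({"iok": iok, "attention": attention, "elaboration": elaboration})
model = smf.ols("iok ~ attention * elaboration", data=df).fit()
print(model.params)   # the 'attention:elaboration' term is the moderation effect
```

A positive, significant interaction coefficient is what "elaboration magnified the positive relationship" corresponds to in regression terms.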
 

Nulliparous women remember survival scenarios involving their own biological child better than those involving unrelated children; the authors find evidence that memory has been sculpted by evolutionary processes

“Survival Processing of the Selfish Gene?”: Adaptive Memory and Inclusive Fitness. Patrick Bonin, Margaux Gelin, Betty Laroche, Alain Méot. Evolutionary Psychological Science, December 2 2019. https://link.springer.com/article/10.1007/s40806-019-00220-1

Abstract: The survival processing advantage in memory is the finding that items encoded in survival scenarios are remembered better than words encoded in survival-irrelevant scenarios or in deep encoding situations (e.g., pleasantness). Whether this mnemonic advantage, which is generally found in scenarios involving personal survival, can also be observed in scenarios involving the survival of other people, and in particular, genetically related others, has received little attention. In the present study, we asked nulliparous women to imagine being stranded in the grasslands of a foreign land without any basic survival items and to consider either their personal survival, the survival of their biological child, or the survival of an orphan. Compared to a pleasantness (control) condition, a survival processing advantage was found for the child survival group, which did not differ reliably from personal survival. Both the child and the personal survival conditions yielded better recall than the orphan condition, which did not reliably differ from the pleasantness condition. These findings provide further evidence for the view that memory has been sculpted by evolutionary processes such as inclusive fitness.

Keywords: Adaptive memory; Survival processing advantage; Inclusive fitness

Discussion

Anecdotal evidence suggests that we do not provide the same level of help to strangers as we do to our relatives. It is hard to imagine a world in which parents faced with a life-or-death situation, such as a shipwreck in which the number of lifeboats is limited, would hesitate to save the life of their own biological child rather than that of an unrelated child. To our knowledge, such a world does not exist: we do not behave altruistically in an undifferentiated manner (Buss 2019). Helping has been found to vary as a function of genetic relatedness (Burnstein et al. 1994; Fitzgerald and Whitaker 2009; Stewart-Williams 2007, 2008). That being said, human beings belong to a highly cooperative species, since we have lived in small and interdependent groups for most of our evolution (Hrdy 2009). Social species exhibit an impressive array of altruistic behaviors, some of which are directed to unrelated others (e.g., reciprocal altruism, Trivers 1971), and such altruistic behaviors have perhaps evolved to solve issues related to group living (Marsh 2016).

Turning to the adaptive memory literature, the survival processing advantage initially discovered by Nairne et al. (2007) is now a well-established finding. However, the question of whether this memory advantage extends to the survival of other people (Kostic et al. 2012; Krause et al. 2019; Seitz et al. 2018; Weinstein et al. 2008) or is restricted to personal survival (Cunningham et al. 2013; Leding and Toglia 2018) is a matter of debate because of discrepant findings in the literature. In the present study, we put forward the hypothesis that a survival processing advantage should be observed with an imaginary scenario involving the survival of a biological child because, from an evolutionary perspective, offspring are vehicles for their parents’ genes (Buss 2019). More importantly, we also hypothesized that, in line with the inclusive fitness and kin-selection theories (Hamilton 1964), the survival effect in memory, if any, should be smaller when the recipient of helping behaviors is a non-biological child, namely an orphan.

The findings were clear-cut. First of all, we were able to replicate the original survival processing advantage (Nairne et al. 2007): encoding words in relation to a personal survival situation yielded better memory performance than rating words for their pleasantness. It should be remembered that pleasantness has often been used as a control condition to evaluate survival processing because it is a deep processing task (e.g., Kazanas and Altarriba 2017; Nairne and Pandeirada 2010; Olds et al. 2014). When survival was directed to a biological child, a survival processing advantage was also found. In line with Seitz et al.’s (2018) parenting scenario involving the survival of the participants’ babies in the grasslands, we found a reproductive advantage in memory that did not differ reliably from a personal survival advantage. More importantly, in accordance with inclusive fitness theory (Hamilton 1964) and kin selection theory (Smith 1964), the level of recall varied as a function of genetic relatedness: more words were recalled in a survival situation involving a biological child than in a survival situation involving an orphan, and the latter condition did not differ reliably from the pleasantness condition.
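To make the between-condition comparison concrete, here is a minimal sketch of the kind of analysis implied by the paragraph above: an omnibus comparison of recall across the four encoding conditions followed by a focused child-versus-orphan contrast. The recall values are simulated and purely illustrative; the study's actual analyses and numbers are in the original paper.

```python
# Illustrative between-subjects comparison of recall across encoding conditions.
# Simulated recall proportions; means, spreads, and group size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_group = 30
conditions = {
    "personal_survival": rng.normal(0.55, 0.10, n_per_group),
    "child_survival":    rng.normal(0.54, 0.10, n_per_group),
    "orphan_survival":   rng.normal(0.47, 0.10, n_per_group),
    "pleasantness":      rng.normal(0.46, 0.10, n_per_group),
}

# Omnibus test across the four groups
f, p = stats.f_oneway(*conditions.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Focused pairwise contrast: biological child vs. orphan
t, p_pair = stats.ttest_ind(conditions["child_survival"], conditions["orphan_survival"])
print(f"child vs. orphan: t = {t:.2f}, p = {p_pair:.4f}")
```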

From this pattern of findings, a critical question arises. Why did Krause et al. (2019) fail to find a differential survival processing advantage in their experiments comparing different types of kin vs. non-kin conditions? In Krause et al.’s research, different survival conditions were compared: personal survival, survival of family members, of a youngest blood relative, and of unrelated people. It should be remembered that the authors found that, compared to the pleasantness control group, the recall rates were similar across the “kin,” “friend,” and “famous” groups. Thus, the pattern of recall did not show that the survival of kin produced a memory advantage compared to that of non-kin. It remains possible that the social relationships that Krause et al. took into account were not specific enough for differential effects on memory performance to emerge. Finally, it cannot be excluded that our scenarios and Krause et al.’s scenarios are too different to be compared directly. In effect, Krause et al. modified the original survival scenario only slightly, and the different scenarios were similar, whereas in the current study, more modifications were made to the original survival scenario. The strength of our study is that we took care to be specific by comparing children who are biologically related to their mothers with children who are not biologically related (we did not, therefore, include a generic “unrelated other” condition such as “strangers”). Moreover, we decided to include only female participants because it has been found that, in general, women are more interested in children than men are (Cárdenas et al. 2013; Charles et al. 2013; Maestripieri and Pelka 2002), and they also take more time to care for them (Babchuk et al. 1985; Buss 2007).

Some readers might be concerned about the finding that no survival processing advantage occurred with an orphan. Does this finding mean that people do not behave “altruistically” towards orphans? Certainly not. First of all, our findings concern memory performance and tell us nothing about the emotional responses people may have towards orphans. As suggested by Marsh (2016): “Parental care is such an ordinary phenomenon that we often fail to think of it as altruism. But it clearly meets the definition, which is a behavior that improves the welfare of another individual at the expense of the altruist” (p. 62). Parental care is often provided to distantly related children or to unrelated children (Hrdy 2009), including children who are adopted. According to Marsh (2016), care-based altruism results from the co-option of systems that initially evolved to support parental care. Here, we have shown that there are differences at a cognitive level in the way things are remembered when they have been processed in a survival situation involving a biological child compared to a biologically unrelated child. Second, a closer look at the memory performance shows that words processed in the orphan survival condition were still recalled well: they were recalled at the same level as, and not less well than, words encoded deeply, that is, for their pleasantness. Thus, there is still some level of altruistic behavior at a cognitive level that is deployed in the case of orphans.

Our findings have strong theoretical implications since they show for the first time that the survival processing advantage has to do with inclusive fitness and kin selection (Hamilton 1964; Smith 1964). From a general standpoint, they reinforce the evolutionary view of memory according to which our memory is still peculiarly attuned toward processing issues that our ancestors faced during the distant past, such as finding food, drinking water, and protection from predators, both for themselves and also for their kin, and in particular for their children. It is already clear that more work will be needed to investigate further whether human memory is tuned to encode things better for different types of kin relationships such as sibling, parental, and grandparental relationships. We are aware of the fact that having only nulliparous women as participants constitutes a limitation of our study. Perhaps a different pattern of results would have been found if our participants had been mothers. Furthermore, it remains an avenue for future research to conduct the same study on young men. It is possible to anticipate that different findings will emerge because men and women faced different reproductive issues in the distant past. While women are 100% sure of their parenthood, ancestral men were (and indeed modern men still are) confronted with the problem of paternity uncertainty due to cryptic ovulation (Buss 2019). This reproductive problem faced specifically by men would account for their lesser interest (e.g., Cárdenas et al. 2013; Charles et al. 2013; Maestripieri and Pelka 2002) and investment in children compared to women (e.g., Babchuk et al. 1985; Buss 2007). Based on these findings, we anticipate that men will recall more words in both the personal survival scenario and the child scenario than in the pleasantness condition, but that, even when the risk of not being the biological father is low, men will recall fewer words in the child condition than will nulliparous women. However, in a survival scenario in which men have to imagine that there is a high risk that the child they must take care of is not their biological child, our prediction is that the recall rate will be close to that found here in the orphan condition. Finally, in the future, it would be interesting to test whether grandparents’ memory for items in survival situations involving their grandchildren differs as a function of the certainty of genetic relatedness. Likewise, a maternal grandmother is more genetically certain of her grandchildren than a paternal grandfather and, as found by DeKay (1995, reported by Buss 2019), maternal grandmothers are closer to and invest more resources in the grandchild than paternal grandfathers.

The question of the proximate mechanisms that underpin the survival processing advantage in memory is an issue which has given rise to a large number of studies. Different proximate mechanisms have been put forward and, according to Krause (2015), at least eight candidate mechanisms could contribute to the survival memory advantage. Although it was not the aim of our study to address this issue, our findings nevertheless suggest that self-reference, even though it certainly plays a role in this memory effect (Cunningham et al. 2013), is not the sole proximate mechanism involved. Indeed, if this were the case, survival-processing effects should have been restricted to the personal survival condition, unless biological children are considered to be part of the parents’ self. In line with the latter claim, the literature on the self-reference effect reports that the mnemonic difference between self-reference and other-reference conditions is attenuated or eliminated when the other-reference conditions correspond to a parent or to a best friend (Bower and Gilligan 1979; Symons and Johnson 1997).

Elaboration is a basic memory mechanism which certainly plays a role in the survival processing effect, as suggested by certain studies (e.g., Bell et al. 2015; Röer et al. 2013; Wilson 2016). Nairne et al. (2017a, b) initially reported that survival processing increases not only true memories but also false memories. Here also, we found a higher number of extra-list intrusions in the personal survival scenario than in the pleasantness condition. This type of finding—namely an increase in both true and false memories—was later extended by Howe and Derbish (2010, 2014) and Otgaar and Smeets (2010), while other studies have failed to find significant effects of survival processing on extra-list intrusions (Bonin et al. 2019b; Gelin et al. 2017). According to Howe and Derbish (2014), because elaboration is known to increase both true and false memories, the effect of survival processing sometimes found on both true and false memories may be due to the need for greater levels of elaboration in order to rate words for their survival values. Interestingly, in the current study, we found that the greatest number of extra-list intrusions occurred in the orphan condition. Overall, the current pattern of findings accords with the idea that elaboration underpins (at least in part) the survival processing advantage.

As found in some previous studies (e.g., Seitz et al. 2018), mean relevance ratings were higher in the pleasantness condition than in any of the other conditions. However, and more specifically, recall performance was higher for both the personal and the child survival scenario than for the pleasantness control condition. It is also interesting to note that the ratings in the child condition were significantly higher than in the personal survival condition, but this difference did not translate into a recall difference between these two conditions. The pattern of findings for relevance ratings and recall rates suggests that the overall difference in recall rates across conditions was not a result of differences in depth of processing or in congruity, since it is generally accepted that words that are processed at a deeper level (Craik and Tulving 1975) or that are more congruent in a given encoding context (Butler et al. 2009; Craik 2002) are often recalled more accurately than words that are processed more superficially or that are rated as being less congruent (Seitz et al. 2018).

There are studies which suggest a relationship between empathy and altruistic behaviors (Marsh 2016). However, the analyses performed with the scores obtained from the Basic Empathy Scale (Carré et al. 2013) did not reveal that this dispositional trait played a role in the memory performance observed in the different encoding conditions that we considered. Emotional closeness has been found to be a proximal cause of altruism that partially mediates the impact of genetic relatedness on the willingness to act altruistically (Korchmaros and Kenny 2001). In the current study, we did not assess our participants’ willingness to help as a function of different types of social relationship in an ancestral survival situation. However, because we were interested in knowing whether willingness to help an orphan versus a biological child would mirror memory performance as indexed by recall rates, as described in more detail in the Supplementary Material, we designed a questionnaire using LimeSurvey (www.limesurvey.org), and this was completed online by a pool of 84 undergraduates (only nulliparous women were taken into account). We collected ratings, using Likert scales, of willingness to help in a survival situation—by providing food, drinking water, and protection—for both an orphan and a biological child who were said to be weak. In addition, we collected ratings for other types of kin (e.g., mother, sister, cousin) and non-kin (friend, neighbor, stranger) relationships. The findings (see the Results section in the Supplementary Material) turned out to be in line with those reported in the literature on altruistic behaviors (e.g., Burnstein et al. 1994; Fitzgerald and Whitaker 2009; Stewart-Williams 2007, 2008); that is to say, in a hypothetical survival scenario, women were more willing to aid close kin (e.g., child, mother) than distant kin (e.g., cousin), and more willing to help distant kin than neighbors, “acquaintances,” or “strangers” (see Figure 1A in the Supplementary Material). Interestingly, the level of help for “friend” was comparable to that of “cousin” (Figure 1A). More importantly, as far as the comparison between biological child and orphan is concerned, women chose to help their biological child more than an orphan. However, the level of help for “orphan” was close to that for “cousin” (Figure 1A).
To conclude, Krause et al. (2019) made the strong claim that kin selection is one more fitness-relevant scenario that has been found to be either unrelated or irrelevant to the survival processing advantage. The findings from the present study suggest just the contrary, namely that the survival processing advantage varies as a function of genetic relatedness, at least when the kin in question are biological children who, from an evolutionary point of view, ensure the perpetuation of our genes. Thus, our findings are in agreement with the claim put forward by Nairne et al. (2007) that “mnemonic processes likely operate more efficiently when dealing with fitness-relevant problems.”

Human beliefs have remarkable robustness in the face of disconfirmation; this author thinks this can arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses

How to never be wrong. Samuel J. Gershman. Psychonomic Bulletin & Review, February 2019, Volume 26, Issue 1, pp 13–28. https://link.springer.com/article/10.3758/s13423-018-1488-8

Abstract: Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses. Auxiliary hypotheses primarily function as the linking assumptions connecting different beliefs to one another and to observational data, but they can also function as a “protective belt” that explains away disconfirmation by absorbing some of the blame. The present article traces the role of auxiliary hypotheses from philosophy of science to Bayesian models of cognition and a host of behavioral phenomena, demonstrating their wide-ranging implications.

Keywords: Bayesian modeling; computational learning theories; philosophy of science

Introduction

Since the discovery of Uranus in 1781, astronomers were troubled by certain irregularities in its orbit, which appeared to contradict the prevailing Newtonian theory of gravitation. Then, in 1845, Le Verrier and Adams independently completed calculations showing that these irregularities could be entirely explained by the gravity of a previously unobserved planetary body. This hypothesis was confirmed a year later through telescopic observation, and thus an 8th planet (Neptune) was added to the solar system. Le Verrier and Adams succeeded on two fronts: they discovered a new planet, and they rescued the Newtonian theory from disconfirmation.

Neptune is a classic example of what philosophers of science call an ad hoc auxiliary hypothesis (Popper, 1959; Hempel, 1966). All scientific theories make use of auxiliary assumptions that allow them to interpret experimental data. For example, an astronomer makes use of optical assumptions to interpret telescope data, but one would not say that these assumptions are a core part of an astronomical theory; they can be replaced by other assumptions as the need arises (e.g., when using a different measurement device), without threatening the integrity of the theory. An auxiliary assumption becomes an ad hoc hypothesis when it entails unconfirmed claims that are specifically designed to accommodate disconfirmatory evidence.

Ad hoc auxiliary hypotheses have long worried philosophers of science, because they suggest a slippery slope toward unfalsifiability (Harding, 1976). If any theory can be rescued in the face of disconfirmation by changing auxiliary assumptions, how can we tell good theories from bad theories? While Le Verrier and Adams were celebrated for their discovery, many other scientists were less fortunate. For example, in the late 19th century, Michelson and Morley reported experiments apparently contradicting the prevailing theory that electromagnetic radiation is propagated through a space-pervading medium (ether). FitzGerald and Lorentz attempted to rescue this theory by hypothesizing electrical effects of ether that were of exactly the right magnitude to produce the Michelson and Morley results. Ultimately, the ether theory was abandoned, and Popper (1959) derided the FitzGerald–Lorentz explanation as “unsatisfactory” because it “merely served to restore agreement between theory and experiment.”

Ironically, Le Verrier himself was misled by an ad hoc auxiliary hypothesis. The same methodology that had served him so well in the discovery of Neptune failed catastrophically in his “discovery” of Vulcan, a hypothetical planet postulated to explain excess precession in Mercury’s orbit. Le Verrier died convinced that Vulcan existed, and many astronomers subsequently reported sightings of the planet, but the hypothesis was eventually discredited by Einstein’s theory of general relativity, which accounted precisely for the excess precession without recourse to an additional planet.

The basic problem posed by these examples is how to assign credit or blame to central hypotheses vs. auxiliary hypotheses. An influential view, known as the Duhem–Quine thesis (reviewed in the next section), asserts that this credit assignment problem is insoluble—central and auxiliary hypotheses must face observational data “as a corporate body” (Quine, 1951). This thesis implies that theories will be resistant to disconfirmation as long as they have recourse to ad hoc auxiliary hypotheses.

Psychologists recognize such resistance as a ubiquitous cognitive phenomenon, commonly viewed as one among many flaws in human reasoning (Gilovich, 1991). However, as the Neptune example attests, such hypotheses can also be instruments for discovery. The purpose of this paper is to discuss how a Bayesian framework for induction deals with ad hoc auxiliary hypotheses (Dorling, 1979; Earman, 1992; Howson and Urbach, 2006; Strevens, 2001), and then to leverage this framework to understand a range of phenomena in human cognition. According to the Bayesian framework, resistance to disconfirmation can arise from rational belief-updating mechanisms, provided that an individual’s “intuitive theory” satisfies certain properties: a strong prior belief in the central hypothesis, coupled with an inductive bias to posit auxiliary hypotheses that place high probability on observed anomalies. The question then becomes whether human intuitive theories satisfy these properties, and several lines of evidence suggest the answer is yes. In this light, humans are surprisingly rational. Human beliefs are guided by strong inductive biases about the world. These biases enable the development of robust intuitive theories, but can sometimes lead to preposterous beliefs.
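
To make this belief-updating claim concrete, here is a minimal numerical sketch in Python. It is my own illustration of the qualitative point, not Gershman's model or code, and all priors and likelihoods are made-up numbers chosen only to show the pattern: a strong prior on a central hypothesis h, plus an auxiliary hypothesis that makes the anomaly likely, leaves belief in h largely intact while the blame shifts to the auxiliary.

# Illustrative Bayesian update over a central hypothesis h and two auxiliary
# hypotheses: a (the standard background assumption) and a2 (an ad hoc alternative
# that makes the anomalous observation d likely). All numbers are assumptions.

p_h = 0.90                      # strong prior belief in the central hypothesis
p_not_h = 1.0 - p_h

p_a_given_h = 0.7               # standard auxiliary assumption
p_a2_given_h = 0.3              # ad hoc auxiliary assumption

p_d_given_h_a = 0.01            # anomaly d is very surprising under h + standard auxiliary
p_d_given_h_a2 = 0.80           # anomaly d is expected under h + ad hoc auxiliary
p_d_given_not_h = 0.50          # anomaly d is unremarkable if h is simply false

# Bayes' rule over the joint space, then marginalize.
joint_h_a = p_h * p_a_given_h * p_d_given_h_a
joint_h_a2 = p_h * p_a2_given_h * p_d_given_h_a2
joint_not_h = p_not_h * p_d_given_not_h
evidence = joint_h_a + joint_h_a2 + joint_not_h

p_h_given_d = (joint_h_a + joint_h_a2) / evidence          # ~0.82: h survives the anomaly
p_a2_given_h_d = joint_h_a2 / (joint_h_a + joint_h_a2)     # ~0.97: the auxiliary takes the blame

print(f"P(h) before the anomaly: {p_h:.2f}")
print(f"P(h | anomaly):          {p_h_given_d:.2f}")
print(f"P(ad hoc auxiliary | h, anomaly): {p_a2_given_h_d:.2f}")

The central hypothesis barely moves because the joint hypothesis "h plus the ad hoc auxiliary" explains the anomaly better than "not h" does; this is the "protective belt" behavior described in the abstract.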

[...]


The true self
Beliefs about the self provide a particularly powerful example of resistance to disconfirmation. People make a distinction between a “superficial” self and a “true” self, and these selves are associated with distinct patterns of behavior (Strohminger, Knobe, & Newman, 2017). In particular, people hold a strong prior belief that the true self is good (the central hypothesis h in our terminology). This proposition is supported by several lines of evidence. First, positive, desirable personality traits are viewed as more essential to the true self than negative, undesirable traits (Haslam, Bastian, & Bissett, 2004). Second, people feel that they know someone most deeply when given positive information about them (Christy et al., 2017). Third, negative changes in traits are perceived as more disruptive to the true self than positive changes (Molouki and Bartels, 2017; De Freitas et al., 2017).

The key question for our purposes is what happens when one observes bad behavior: do people revise their belief in the goodness of the actor’s true self? The answer is largely no. Bad behavior is attributed to the superficial self, whereas good behavior is attributed to the true self (Newman, Bloom, & Knobe, 2014). This tendency is true even of individuals who generally have a negative attitude toward others, such as misanthropes and pessimists (De Freitas et al., 2016). And even if people are told explicitly that an actor’s true self is bad, they are still reluctant to see the actor as truly bad (Newman, De Freitas, & Knobe, 2015). Conversely, positive changes in behavior (e.g., becoming an involved father after being a deadbeat) are perceived as indicating “self-discovery” (Bench et al., 2015; De Freitas et al., 2017).

These findings support the view that belief in the true good self shapes the perception of evidence about other individuals: evidence that disconfirms this belief tends to be discounted. The Bayesian framework suggests that this may occur because people infer alternative auxiliary hypotheses, such as situational factors that sever the link between the true self and observed behavior (e.g., he behaved badly because his mother just died). However, this possibility remains to be studied directly.

[...]

Conceptual change in childhood

Children undergo dramatic restructuring of their knowledge during development, inspiring analogies with conceptual change in science (Carey, 2009; Gopnik, 2012). According to this “child-as-scientist” analogy, children engage in many of the same epistemic practices as scientists: probabilistically weighing evidence for different theories, balancing simplicity and fit, inferring causal relationships, carrying out experiments. If this analogy holds, then we should expect to see signs of resistance to disconfirmation early in development. In particular, Gopnik and Wellman (1992) have argued that children form ad hoc auxiliary hypotheses to reason about anomalous data until they can discover more coherent alternative theories.

For example, upon being told that the earth is round, some children preserve their preinstructional belief that the earth is flat by inferring that the earth is disk-shaped (Vosniadou & Brewer, 1992). After being shown two blocks of different weights hitting the ground at the same time when dropped from the same height, some middle-school students inferred that they hit the ground at different times but the difference was too small to observe, or that the blocks were in fact (contrary to the teacher’s claims) the same weight (Champagne et al., 1985). Children who hold a geometric-center theory of balancing believe that blocks must be balanced in the middle; when faced with the failure of this theory applied to uneven blocks, children declare that the uneven blocks are impossible to balance (Karmiloff-Smith & Inhelder, 1975).

Experimental work by Schulz, Goodman, Tenenbaum, and Jenkins (2008) has illuminated the role played by auxiliary hypotheses in children’s causal learning. In these experiments, children viewed contact interactions between various blocks, resulting in particular outcomes (e.g., a train noise or a siren noise). Children then made inferences about novel blocks based on ambiguous evidence. The data suggest that children infer abstract laws that describe causal relations between classes of blocks (see also Schulz and Sommerville, 2006; Saxe et al., 2005). Schulz and colleagues argue for a connection between the rapid learning abilities of children (supported by abstract causal theories) and resistance to disconfirmation: the explanatory scope of abstract causal laws confers a strong inductive bias that enables learning from small amounts of data, and this same inductive bias confers robustness in the face of anomalous data by assigning responsibility to auxiliary hypotheses (e.g., hidden causes). A single anomaly will typically be insufficient to disconfirm an abstract causal theory that explains a wide range of data.

The use of auxiliary hypotheses has important implications for education. In their discussion of the educational literature, Chinn and Brewer (1993) point out that anomalous data are often used in the classroom to spur conceptual change, yet “the use of anomalous data is no panacea. Science students frequently react to anomalous data by discounting the data in some way, thus preserving their preinstructional theories” (p. 2). They provide examples of children employing a variety of discounting strategies, such as ignoring anomalous data, excluding it from the domain of the theory, holding it in abeyance (promising to deal with it later), and reinterpreting it. Careful attention to these strategies leads to pedagogical approaches that more effectively produce theory change. For example, Chinn and Brewer recommend helping children construct necessary background knowledge before introduction of the anomalous data, combined with the presentation of an intelligible and plausible alternative theory. In addition, bolstering the credibility of the anomalous data, avoiding ambiguities, and using multiple lines of evidence can be effective at producing theory change.


Sunday, December 1, 2019

Sustaining environments hypothesis is the idea that long-term success of early educational interventions depends on the quality of the subsequent learning environment; study finds no such relation

Bailey, Drew H., Jade M. Jenkins, and Daniela Alvarez-Vargas. (2019). Complementarities between Early Educational Intervention and Later Educational Quality? A Systematic Review of the Sustaining Environments Hypothesis. (EdWorkingPaper: 19-99). Annenberg Institute at Brown University, Sep 2019. https://doi.org/10.26300/8tz9-sh62

Abstract: The sustaining environments hypothesis refers to the popular idea, stemming from theories in developmental, cognitive, and educational psychology, that the long-term success of early educational interventions is contingent on the quality of the subsequent learning environment. Several studies have investigated whether specific kindergarten classroom and other elementary school factors account for patterns of persistence and fadeout of early educational interventions. These analyses focus on the statistical interaction between an early educational intervention – usually whether the child attended preschool – and several measures of the quality of the subsequent educational environment. The key prediction of the sustaining environments hypothesis is a positive interaction between these two variables. To quantify the strength of the evidence for such effects, we meta-analyze existing studies that have attempted to estimate interactions between preschool and later educational quality in the United States. We then attempt to establish the consistency of the direction and a plausible range of estimates of the interaction between preschool attendance and subsequent educational quality by using a specification curve analysis in a large, nationally representative dataset that has been used in several recent studies of the sustaining environments hypothesis. The meta-analysis yields small positive interaction estimates ranging from approximately .00 to .04, depending on the specification. The specification curve analyses yield interaction estimates of approximately 0. Results suggest that the current mix of methods used to test the sustaining environments hypothesis cannot reliably detect realistically sized effects. Our recommendations are to combine large sample sizes with strong causal identification strategies, and to study combinations of interventions that have a strong probability of showing large main effects.

Keywords: education, achievement, meta-analysis, persistence and fadeout, intervention
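
The key prediction described in the abstract above, a positive preschool-by-quality interaction, corresponds to the coefficient on a product term in a moderated regression. The following is a rough sketch only, using simulated data and hypothetical variable names rather than the authors' data, variables, or code:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data (hypothetical variable names; the true interaction is set to zero,
# roughly matching the ~0 estimates reported in the specification curve analysis).
rng = np.random.default_rng(0)
n = 5000
preschool = rng.integers(0, 2, size=n)          # attended preschool (0/1)
quality = rng.normal(size=n)                    # later educational quality (z-scored)
achievement = (0.15 * preschool                 # main effect of preschool
               + 0.20 * quality                 # main effect of later quality
               + 0.00 * preschool * quality     # interaction: the sustaining environments term
               + rng.normal(size=n))            # noise

df = pd.DataFrame({"achievement": achievement,
                   "preschool": preschool,
                   "quality": quality})

# "preschool * quality" expands to both main effects plus the interaction term;
# the sustaining environments hypothesis predicts a positive interaction coefficient.
fit = smf.ols("achievement ~ preschool * quality", data=df).fit()
print(fit.params["preschool:quality"], fit.bse["preschool:quality"])

As the paper argues, with realistically sized effects this interaction coefficient is small relative to its standard error, which is why detecting it reliably requires very large samples and strong causal identification.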


4) Heterogeneity across treatments, contexts, and children.
One possibility is that, although these interactions averaged out to approximately 0, some of them were reliably positive, consistent with complementarity between early educational intervention and later education quality, and others were reliably negative, consistent with substitutability. We find mixed evidence for this, with a statistically significant test for heterogeneity in the meta-analysis and more than 5% of estimates reaching statistical significance in the specification curve analysis, but an inferential specification curve consistent with a relatively homogeneous effect of approximately 0. Importantly, this occurs despite our inclusion of a heterogeneous set of definitions of early childhood intervention and later educational quality in our analysis, methods that might reasonably be expected to increase the heterogeneity of estimates. Additionally, although the meta-analysis indicated a moderate amount of heterogeneity in interaction estimates, the prediction we thought most directly followed from the sustaining environments hypothesis – namely, that interactions would only be positive when the main effects of early and later quality were positive – was not supported. Still, perhaps the most compelling argument for heterogeneity is that it is real but not well observed in these data, because we did not measure the “right” later educational moderators of early educational intervention effects. We will discuss this possibility below.

Affirmative Action, Major Choice, and Long-Run Impacts: California after ending college AA admissions policy

Bleemer, Zachary, Affirmative Action, Major Choice, and Long-Run Impacts (2019). SSRN, Nov 20: http://dx.doi.org/10.2139/ssrn.3484530

Abstract: Estimation of the impact of race-based affirmative action (AA) on the medium- and long-run outcomes of underrepresented minority (URM) university applicants has been frustrated by limited data availability. This study presents a highly-detailed novel database of University of California (UC) applications in the years before and after the end of its AA admissions policy, linked to national educational records and a California employment database. Using a difference-in-difference design to compare URM and non-URM freshman applicants' outcomes two years before and after UC's affirmative action policies ended in 1998, I identify substantial and persistent educational and labor market deterioration after 1998 among URM applicants: each of UC's 10,000-per-year URM freshman applicants' likelihood of earning a Bachelor's degree within six years declined by 1.3 percentage points, their likelihood of earning any graduate degree declined by 1.4 p.p., and their likelihood of earning at least $100,000 annually between ages 30 and 37 declined by about 1 p.p. per year. These results suggest that affirmative action's end decreased the number of age 30-to-34 URM Californians earning over $100,000 by at least 2.5 percent. Turning to targeted students' major choice, I link the application records to five universities' detailed course transcript data and find no evidence, despite considerable statistical power, that more-selective university enrollment under AA lowered URM students' performance or persistence in core physical, biological, or mathematical science courses. These findings suggest that state prohibitions on university affirmative action policies have modestly exacerbated American socioeconomic inequities.

Keywords: Higher Education; Affirmative Action; University Selectivity; Major Choice
JEL Classification: I24, J24, J31, H75
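
As a hedged sketch of the difference-in-differences comparison described in the abstract, in notation of my own choosing rather than the author's exact specification, the estimating equation for an outcome $y_{it}$ (e.g., six-year BA completion) of applicant $i$ in application cohort $t$ would take roughly the form

\[
  y_{it} = \alpha + \beta\,\mathrm{URM}_i + \gamma\,\mathrm{Post1998}_t
           + \delta\,(\mathrm{URM}_i \times \mathrm{Post1998}_t) + \varepsilon_{it},
\]

where $\mathrm{Post1998}_t$ indicates cohorts applying after UC's affirmative action policy ended, and $\delta$ is the difference-in-differences estimate of how URM applicants' outcomes changed relative to non-URM applicants' outcomes after 1998.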

Did Early Twentieth-Century Alcohol Prohibition Laws Reduce Mortality?

Did Early Twentieth-Century Alcohol Prohibition Affect Mortality? Marc T. Law & Mindy S. Marks. Economic Inquiry, November 28, 2019. https://doi.org/10.1111/ecin.12868

Abstract: We investigate the contemporaneous mortality consequences of alcohol prohibition laws introduced in America between 1900 and 1920. We improve on existing studies by constructing a time‐varying measure of prohibition at the state level that corrects for the timing of prohibition enforcement and accounts for the presence of dry counties. Using summary indices that aggregate alcohol‐related mortality due to disease and poor decisions, we find that prohibition significantly reduced mortality rates. These findings are corroborated with an area‐level analysis that exploits data on deaths in urban areas that were wet prior to statewide or federal prohibition and nonurban areas that were partially dry. (JEL I18, N4, K2)


Moral Outrage Porn, vanity projects, competition to be outraged

Moral Outrage Porn. C. Thi Nguyen & Bekka Williams. Forthcoming in Journal of Ethics and Social Philosophy. Nov 2019. https://philarchive.org/archive/NGUMOP

We offer an account of the generic use of the term “porn”, as seen in recent usages such as “food porn” and “real estate porn”. We offer a definition adapted from earlier accounts of sexual pornography. On our account, a representation is used as generic porn when it is engaged with primarily for the sake of a gratifying reaction, freed from the usual costs and consequences of engaging with the represented content. We demonstrate the usefulness of the concept of generic porn by using it to isolate a new type of such porn: moral outrage porn. Moral outrage porn is, as we understand it, representations of moral outrage, engaged with primarily for the sake of the resulting gratification, freed from the usual costs and consequences of engaging with morally outrageous content. Moral outrage porn is dangerous because it encourages the instrumentalization of one’s empirical and moral beliefs, manipulating their content for the sake of gratification. Finally, we suggest that when using porn is wrong, it is often wrong because it instrumentalizes what ought not to be instrumentalized.

---
This worry parallels, in some significant ways, Tosi and Warmke’s (2016) complaint against moral grandstanding. Moral grandstanding is morally problematic, they argue, in large part because grandstanders are treating moral discourse as a “vanity project”:
In using public moral discourse to promote an image of themselves to others, grandstanders turn their contributions to moral discourse into a vanity project. Consider the incongruity between, say, the moral gravity of a world-historic injustice, on the one hand, and a group of acquaintances competing for the position of being most morally offended by it, on the other. Such behavior, we think, is not the sort of thing we should expect from a virtuous person (215-16).
Note, crucially, that the problem asserted by Tosi and Warmke in this instance isn’t that moral grandstanding has bad results. Instead, the problem is that using moral discourse for self-promotion is problematically egotistical. Tosi and Warmke focus on moral problems associated with using moral outrage for interpersonal jockeying. That is the essence of the notion of moral grandstanding — the use of moral expression for social signaling. Similarly, it seems plausible that the use of moral outrage porn in many cases involves a failure to respect the fundamental role of moral expression. Notice that, where the problem with moral grandstanding is essentially interpersonal and social, the problem with moral outrage porn is personal and hedonistic. The problem of moral grandstanding is that we use morality for status; the problem of moral outrage porn is that we are using morality for pleasure. When one indulges in moral outrage porn, one uses what by one’s own lights is morally outrageous for one’s own enjoyment. It is, loosely speaking, to make morality about oneself, when it clearly is not.
Furthermore, it is no accident, we think, that the features of moral outrage porn relevant to the “bad faith” problem mirror Michael Tanner’s (1976/77) account of the problems of sentimentality. In his discussion of Oscar Wilde and the sentimental, Tanner says, “the feelings which constitute [the sentimental] are in some important way unearned, being had on the cheap, come by too easily…” (1976/77: 128). The use of moral outrage porn, if one accepts our definition, involves an attempt to be gratified by a representation of the end result of moral engagement without taking on the consequences or effort of actually engaging. This seems a paradigmatic case of getting a feeling on the cheap.
What we’ve sketched thus far are a number of considerations that weigh in favor of a serious moral strike against the use of moral outrage porn. There are also a number of consequentialist considerations that we might adduce. Tanner (1976/77: 134) argues that the intrinsically sentimental tends toward passivity (139). Sentimental emotions, Tanner suggests, can themselves encourage inaction.
[I]t also seems to me that some of my feelings are of a kind that inhibit action, because they themselves are enjoyable to have, but if acted upon, one would cease to have them, and one doesn’t want to. Such a feeling does seem to me intrinsically sentimental… (Tanner 1976/77: 139).