Tuesday, December 3, 2019

Females are more proactive, males are more reactive: neural basis of the gender-related speed/accuracy trade-off in visuo-motor tasks

Females are more proactive, males are more reactive: neural basis of the gender-related speed/accuracy trade-off in visuo-motor tasks. V. Bianco et al. Brain Structure and Function, December 3 2019. https://link.springer.com/article/10.1007/s00429-019-01998-3

Abstract: In the present study, we investigated neural correlates associated with gender differences in a simple response task (SRT) and in a discriminative response task (DRT) by means of event-related potential (ERP) technique. 120 adults participated in the study, and, based on their sex, were divided into two groups matched for age and education level. Behavioral performance was assessed with computing response speed, accuracy rates and response consistency. Pre- and post-stimulus ERPs were analyzed and compared between groups. Results indicated that males were faster than females in all tasks, while females were more accurate and consistent than males in the more complex tasks. This different behavioral performance was associated with distinctive ERP features. In the preparation phase, males showed smaller prefrontal negativity (pN) and visual negativity (vN), interpreted as reduced cognitive preparation to stimulus occurrence and reduced reliance on sensory proactive readiness, respectively. In the post-stimulus phase, gender differences were present over occipital (P1, N1, P2 components) and prefrontal (pN1, pP1, pP2 components) areas, suggesting allocation of attentional resources at distinct stages of information processing in the two groups. Overall, the present data provide evidence in favor of a more proactive and cautious cognitive processing in females and a more reactive and fast cognitive processing in males. In addition, we confirm that (1) gender is an important variable to be considered in ERP studies on perceptual processing and decision making, and (2) the pre-stimulus component analysis can provide useful information concerning neural correlates of upcoming performance.

Keywords: Gender differences; Speed–accuracy trade-off; Motor behavior; Proactive control; Decision making; Predictive brain

Present data suggest that in simple and complex visuo-motor tasks, males and females allocate their cortical resources in diverse ways, possibly leading to the well-documented gender-related speed/accuracy trade-off in visuo-motor performance. When the task is very simple, both preparatory (the BP) and reactive (the pP1, P2 and P3) cortical processing are enhanced in males with respect to females, leading to faster responses. When the task is more complex (implying stimulus discrimination and response selection), females’ proactive allocation of more cortical resources at both prefrontal (pN) and sensory (vN) level, as well as several reactive stages after stimulus onset (the pN1, the P1, and the P3), leads to relatively slow and very accurate responses. In contrast, males allocate a reduced level of pre-stimulus sustained attention to the task (smaller pN and vN), possibly compensating with enhanced reactive attention at the visual level of processing (larger N1 and P2). Even though the neural processing associated with S–R mapping (the pP2) is generally enhanced in males (for both target and non-target stimuli), signals associated with different stimulus categories are less distinguishable in males than females, as indicated by the dpP2 effect, possibly facilitating female accuracy in a complex task.

The present research provides evidence that gender is an important variable to be considered in neurocognitive studies of perceptual decision making; it should be taken into account when planning experimental designs and interpreting results because, per se, it could explain the speed/accuracy trade-off in visuo-motor performance and related differences in brain function. Nevertheless, some studies have excluded females from their samples or ignored gender as a factor in their findings (for review see Mendrek 2015), possibly jeopardizing the interpretation of their results.

It has been shown that a gamble is judged to be more attractive when its zero outcome is designated as “losing $0” rather than “winning $0,” an instance of what we refer to as the mutable-zero effect

The framing of nothing and the psychology of choice. Marc Scholten, Daniel Read, Neil Stewart. Journal of Risk and Uncertainty, December 3 2019. https://link.springer.com/article/10.1007/s11166-019-09313-5

Abstract: Zero outcomes are inconsequential in most models of choice. However, when disclosing zero outcomes they must be designated. It has been shown that a gamble is judged to be more attractive when its zero outcome is designated as “losing $0” rather than “winning $0,” an instance of what we refer to as the mutable-zero effect. Drawing on norm theory, we argue that “losing $0” or “paying $0” evokes counterfactual losses, with which the zero outcome compares favorably (a good zero), and thus acquires positive value, whereas “winning $0” or “receiving $0” evokes counterfactual gains, with which the zero outcome compares unfavorably (a bad zero), and thus acquires negative value. Moreover, we propose that the acquired value of zero outcomes operates just as the intrinsic value of nonzero outcomes in the course of decision making. We derive testable implications from prospect theory for mutable-zero effects in risky choice, and from the double-entry mental accounting model for mutable-zero effects in intertemporal choice. The testable implications are consistently confirmed. We conclude that prevalent theories of choice can explain how decisions are influenced by mutable zeroes, on the shared understanding that nothing can have value, just like everything else.

Keywords: Descriptive invariance; Norm theory; Counterfactuals; Zero outcomes; Risk and time; Prospect theory; Double-entry mental accounting model
JEL Classifications: D00 D90 D91

4 General discussion
The valence of a zero event depends on its “irrelevant” description: It “feels better” to
lose or pay nothing than to win or receive nothing. A negative wording (lose, pay) sets
up a norm of negative events, with which the zero event compares favorably, while a
positive wording (win, receive) sets up a norm of positive events, with which the zero
event compares unfavorably, so that a negative wording acquires a more positive tone
than a positive wording. Descriptive invariance requires from us that this should not
affect our decisions, but we have shown that it does, among a fair number of us at least.
To others among us, the framing of zero events may actually be irrelevant. The
mutable-zero effect is indeed small; yet, it is a reliable phenomenon. And if one thinks
of the alternative descriptions of a zero outcome as a minimal manipulation, the small
effect may actually be considered quite impressive (Prentice and Miller 1992).
Descriptive invariance, along with dominance, is an essential condition of rational
choice (Tversky and Kahneman 1986), and it has seen a number of violations,
commonly referred to as framing effects. A stylized example of framing is the adage
that optimists see a glass of wine as half full, while pessimists see it as half empty. And
if the wine glass is half full, and therefore half empty, then these are complementary
descriptions of the same state of the world, so that, normatively, using one or the other
should not matter for judgment and choice (Mandel 2001)—but it does.

4.1 Counterfactuals versus expectations
Life often confronts us with zero events. A bookstore may offer us “free shipping.” Our
employer may grant us “no bonus.” We are pleased to pay $0 to the bookstore, and this may be
because we expected to pay something but did not. We are not pleased to receive $0 from our
employer, and this may be because we expected to receive something but did not (Rick and
Loewenstein 2008). In norm theory, Kahneman and Miller (1986) suggested that reasoning
may not only flow forward, “from anticipation and hypothesis to confirmation or revision,” but
also backward, “from the experience to what it reminds us of or makes us think about” (p.
137). In the latter case, “objects or events generate their own norms by retrieval of similar
experiences stored in memory or by construction of counterfactual alternatives” (p. 136). Thus,
“free shipping” may sound pleasant because it reminds us of occasions on which we were
charged shipping fees, and “no bonus” may sound unpleasant because it reminds us of
occasions on which we were granted bonuses; not so much because we expected to pay or
receive something. Of course, both norms and expectations may influence our feelings, and
may be difficult to disentangle in many real-life situations. Kahneman and Miller’s (1986)
intention with norm theory was “not to deny the existence of anticipation and expectation but
to encourage the consideration of alternative accounts for some of the observations that are
routinely explained in terms of forward processing” (p. 137, emphasis added). Our intention
was to compile a set of observations that cannot reasonably be explained in terms of forward
processing, which therefore constitute the clearest exposure of norms.

4.2 Expectations in decision theory
We have incorporated counterfactuals into theories of choice, so as to predict the effects
of mutable zeroes when people face risk and when people face time. Traditionally,
decision theory has ignored counterfactuals, but expectations play a role in most
theories of decision under risk. While prospect theory sacrifices the expectation
principle from EU, by assigning a decision weight w(p) to probability p of an outcome
occurring, other formulations have maintained the expectation principle but modified
the utility function. For instance, the utility function has been expanded with anticipated
regret and rejoicing as they result from comparisons between the possible outcomes
of a gamble and those that would occur if one were to choose differently (Bell 1982,
1983; Loomes and Sugden 1982). Similarly, the utility function has been expanded
with anticipated emotions as they result from comparisons between the possible
outcomes of a gamble with the expected value of the gamble: Anticipated disappointment
when “it could come out better,” and anticipated elation when “it could come out
worse” (Bell 1985; Loomes and Sugden 1986). Zero outcomes acquire value in the
same way as nonzero outcomes do: Either from between-gamble or within-gamble
comparisons. Thus, a zero outcome acquires negative value (by regret or disappointment)
if the comparison is with a gain, and positive value (by rejoicing or elation) if the
comparison is with a loss. In our analysis, however, zero outcomes are unique, in that
only they elicit counterfactual gains and losses, which will then serve as a reference
point for evaluating the zero outcomes themselves. Nonetheless, in Experiment 3,
dealing with zero outcomes in intertemporal choice, we obtained a result suggesting
that between-prospect comparisons of zero and nonzero outcomes also affected choice.

4.3 The framing of something, the framing of nothing
The investigation of framing effects in judgment and decision making began with
Tversky and Kahneman’s (1981) Asian Disease Problem, in which the lives of 600
people are threatened, and life-saving programs are examined. One group of participants
preferred a program that would save 200 people for sure over a program that
would save 600 people with a 1/3 probability, but save no people with a 2/3 probability.
Another group of participants preferred a program that would let nobody die with a
probability of 1/3, but let 600 people die with a 2/3 probability, over a program that
would let 400 people die for sure. Prospect theory ascribes this result to reference
dependence, i.e., v(0) = 0, and diminishing sensitivity, i.e., v is concave over gains, so
that v(600) < 3v(200), which works against the gamble, and convex over losses, so that
v(−600) > 3v(−200), which works in favor of the gamble.
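The prospect-theory arithmetic above can be checked numerically. A minimal sketch, assuming the standard power-function value function with the parameter estimates from Tversky and Kahneman's 1992 cumulative prospect theory paper (α = β = 0.88, λ = 2.25); these specific values are an assumption for illustration, not taken from the present paper:

```python
# Illustrative prospect-theory value function: concave over gains,
# convex (less steeply negative) over losses, with loss aversion.
# Parameters are Tversky & Kahneman's (1992) estimates, assumed here
# purely for illustration.

def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value of outcome x relative to the reference point (v(0) = 0)."""
    if x >= 0:
        return x ** alpha          # diminishing sensitivity over gains
    return -lam * (-x) ** beta     # diminishing sensitivity over losses

# Gain frame: diminishing sensitivity makes v(600) < 3*v(200),
# which works against the gamble (sure saving of 200 preferred).
assert v(600) < 3 * v(200)

# Loss frame: convexity over losses makes v(-600) > 3*v(-200),
# which works in favor of the gamble (sure loss of 400 avoided).
assert v(-600) > 3 * v(-200)
```

Any concave-over-gains, convex-over-losses value function yields the same qualitative reversal; the specific parameters only change the magnitudes.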
Our interpretation is that some of the action may lie in the zero outcomes, rather than
the nonzero outcomes. Specifically, “save no people” brings to mind saving some
people, with which saving no people compares unfavorably, thus working against the
gamble. Similarly, “let nobody die” brings to mind letting somebody die, with which
letting nobody die compares favorably, thus working in favor of the gamble. Reference
dependence is fine, but designating zero outcomes means that v(0) ≠ 0, because the
reference point is no longer the status quo, but rather something imagined.
There is no shortage of competing views on framing effects (for one of many
discussions, see Mandel 2014), and our norm-theory approach to the Asian Disease
Problem is a partial explanation at best. Indeed, the reversal from an ample majority
(72%) choosing the safe option in the positive frame (saving lives) to an ample majority
(78%) choosing the risky option in the negative frame (giving up lives) is a large effect,
whereas the mutable-zero effect is a small effect, unlikely to be solely responsible for
Tversky and Kahneman’s (1981) result. However, judgments and decisions are influenced
by the framing of zero outcomes, and we have shown that prevalent theories of
choice, Kahneman and Tversky’s (1979) prospect theory and Prelec and Loewenstein’s
(1998) double-entry mental accounting model, can explain how decisions are influenced
by mutable zeroes, on the shared understanding that nothing can have value, just
like everything else.

Results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings; they point to the possibility of the misidentification of sizable teacher “effects”where none exist

Teacher Effects on Student Achievement and Height: A Cautionary Tale. Marianne Bitler, Sean Corcoran, Thurston Domina, Emily Penner. NBER Working Paper No. 26480, November 2019. https://www.nber.org/papers/w26480

Abstract: Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.

6   Discussion
Schools and districts across the country want to employ teachers who can best help students to learn, grow, and achieve academic success. Identifying such individuals is integral to schools' success but is also difficult to do in practice. In the face of data and measurement limitations, school leaders and state education departments seek low-cost, unbiased ways to observe and monitor the impact that their teachers have on students. Although many have criticized the use of VAMs to evaluate teachers, they remain a widely used measure of teacher performance. In part, their popularity is due to convenience: while observational protocols, which send observers to every teacher's classroom, require expensive training and considerable resources to implement at scale, VAMs use existing data and can be calculated centrally at low cost. Further, VAMs are arguably less biased than many other evaluation methods that districts might use instead (Bacher-Hicks et al. 2017; Harris et al. 2014; Hill et al. 2011).

Yet questions remain about the reliability, validity, and practical use of VAMs. This paper interrogates concerns raised by prior research on VAMs and raises new concerns about the use of VAMs in career and compensation decisions. We explore the bias and reliability of commonly estimated VAMs by comparing estimates of teacher value-added in mathematics and ELA with parallel estimates of teacher value-added on a well-measured biomarker that teachers should not impact: student height. Using administrative data from New York City, we find estimated teacher “effects” on height that are comparable in magnitude to actual teacher effects on math and ELA achievement (0.22 compared to 0.29 and 0.26, respectively). On its face, such results raise concerns about the validity of these models.

Fortunately, subsequent analysis finds that teacher effects on height are primarily noise, rather than bias due to sorting on unobserved factors. To ameliorate the effect of sampling error on value-added estimates, analysts sometimes “shrink” VAMs, scaling them by their estimated signal-to-noise ratio. When we apply the shrinkage method of Kane and Staiger (2008) across multiple years of data, the persistent teacher “effect” on height goes away, becoming the expected (and known) mean of zero. This procedure is not always done in practice, however, and requires multiple years of classroom data for the same teachers to implement. Of course, for making hiring and firing decisions, it seems important to consider that value-added measures which require multiple years of data will likely permit identification of persistently bad teachers, but not provide a performance evaluation metric that teachers trying to improve can meet quickly. In more realistic settings where the persistent effect is not zero, it is less clear that shrinkage would have a major influence on performance decisions, since it has modest effects on the relative rankings of teachers.
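The shrinkage step described here can be sketched in a few lines. A minimal illustration, assuming the simple reliability-weighting form of empirical-Bayes shrinkage; the variance numbers and the `shrink` helper are invented for illustration, not the authors' actual estimation code:

```python
# Empirical-Bayes "shrinkage" of a raw teacher-effect estimate, in the
# spirit of Kane and Staiger (2008). All numbers below are hypothetical.

def shrink(raw_effect, signal_var, noise_var):
    """Scale a raw estimate by its estimated reliability: the share of
    total variance that is persistent signal rather than sampling noise."""
    reliability = signal_var / (signal_var + noise_var)
    return raw_effect * reliability

# Height: a teacher cannot persistently affect it, so the persistent
# signal variance is zero and any raw "effect" shrinks to the known
# mean of zero.
print(shrink(raw_effect=0.22, signal_var=0.0, noise_var=0.04))   # -> 0.0

# Achievement: part of the variance is persistent signal, so the
# estimate is attenuated (here by a reliability of about 0.75)
# rather than eliminated.
print(shrink(raw_effect=0.29, signal_var=0.03, noise_var=0.01))  # about 0.2175
```

This also makes the paper's point concrete: shrinkage rescues the height case only because the true signal variance there is zero, which is exactly what is unknown in real evaluation settings.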

Taken together, our results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings. They point to the possibility of the misidentification of sizable teacher “effects” where none exist. These effects may be due in part to spurious variation driven by the typically small samples of children used to estimate a teacher's individual effect.

Political imagery on money relates to less political freedom & more gender inequality; more scientific & agricultural images on money relate to less economic, political freedom, human development & gender equality.

Using currency iconography to measure institutional quality. Kerianne Lawson. The Quarterly Review of Economics and Finance, Volume 72, May 2019, Pages 73-79. https://doi.org/10.1016/j.qref.2018.10.006

•    Countries with more political imagery on their money tend to have less political freedom and more gender inequality.
•    More scientific & agricultural images on money correspond with less economic freedom, political freedom, human development & gender equality.
•    More art related and cultural images on money correspond with higher economic freedom and human development index scores.
•    More religious images on money correspond with less political freedom. And for OPEC countries, also less economic freedom.
•    For non-Commonwealth nations, images of women on money correspond with more political freedom, human development & gender equality.

Abstract: The images on a country’s currency are purposefully chosen by the people or government to be representative of that country. Potentially, one could learn a lot about the economic and political climate of a country by simply looking at the pictures on its money. This paper reports indexes measuring the political, religious, and cultural/scientific content as well as the representation of women on currency notes. The analysis suggests that we can look to the iconography in currency as an indication of the quality of the institutions or socio-economic outcomes in that country.

2. Survey of related literature

The iconographic analysis of currency notes is a much-discussed topic in disciplines outside economics. Sociologists, historians, anthropologists, and many others have looked at the images on currency notes to discuss the social and political environment within a country. There is even a ‘Bank Note of the Year’ contest held by the International Bank Note Society, which is decided by vote. Voters evaluate the “artistic merit, design, use of [color], contrast, balance, and security features of each nomination” (IBNS Banknote of the Year, 2018). It is widely accepted that the images on a country’s currency hold significance, so they are worthy of discussion.

Most of the iconographic work on money looks at a single country’s currency: Denmark (Sorensen, 2016), Ghana (Fuller, 2008), Indonesia (Strassler, 2009), Laos (Tappe, 2007), Scotland (Penrose & Cumming, 2011), Palestine (Wallach, 2011), Taiwan (Hymans & Fu, 2017), and the former Soviet Union (Cooper, 2009).

While most scholars have looked at a country’s currency at a point in time, Schwartz (2014) examined the change in images on Chinese currency from the 1940s to the 1990s. In particular, he examined the use of classical socialist, including Soviet, imagery on the Yuan. An oncoming train, for example, symbolizes the inevitability of the socialist revolution. Peasants and workers looking off into the distance reflected another theme in Soviet art: common people wistfully looking toward a promising socialist future. Mao’s absence on the Yuan note until after his death might be attributed to communist ideas around money, and Mao himself said the achievements of an individual should not be glorified.

However, this is in direct contradiction with basically every other form of media, which painted Mao as practically a divine being. Schwartz argues that keeping his face off of the currency was a strategic decision to dissociate him from the state and maintain his image as an ally to the masses.

Hymans (2004) investigated the evolution of currency iconography in Europe from the 19th century to the era of the Euro. He found that there were iconographic changes over time that reflected the social changes and trends we know from history. And yet, there are few iconographic differences across the European countries at any point in time. Hymans suggests that the images on European countries’ currencies were probably not used as propaganda toward their own citizens, but rather to mirror the values of their neighbors, to legitimize themselves and fit in with a broader, collective European identity.

An unrelated strand of literature within economics has been trying to find new, unconventional ways to measure social or economic conditions. Chong, La Porta, Lopez-de-Silanes, and Shleifer (2014) mailed letters to nonexistent business addresses and then graded government efficiency by how promptly, if at all, the letters were returned. The goal was to create an objective measure of government efficiency across the 159 countries observed. They found that their measures of efficiency correlated with other indicators of government quality.

Henderson, Storeygard, and Weil (2012) used satellite images of nighttime lights to estimate economic activity. They found that their estimates differed by only a few percentage points from the official data. Moreover, their method allowed for more specific regional and international analysis that was not possible with conventional ways of collecting these data.

Fisman and Miguel (2007) measured government corruption by recording foreign diplomats’ parking violations in New York City. Thanks to diplomatic immunity, foreign diplomats may legally ignore parking tickets, but many still pay them voluntarily. Diplomats from highly corrupt countries did in fact pay less than those from less corrupt countries. Thus, parking ticket payments may serve as a proxy for the cultural norms of the home country.

This paper attempts to unite the literature on the iconography of currency with the literature using unconventional methods to measure country-level characteristics. The images found on a nation’s currency may be indicators of socio-economic conditions or underlying institutional quality. Unlike the previous literature, which looks at why a currency’s iconography has changed over time in a certain country or region, this project seeks to answer the question: is currency iconography a good indicator of institutional quality for all countries?

Narcissistic individuals were no better at accurately identifying other narcissists, but such individuals demonstrated considerable aversion to narcissistic faces

The Relation Between Narcissistic Personality Traits and Accurate Identification of, and Preference for, Facially Communicated Narcissism. Mary M. Medlin, Donald F. Sacco, Mitch Brown. Evolutionary Psychological Science, December 3 2019. https://link.springer.com/article/10.1007/s40806-019-00224-x

Abstract: When evaluating someone as a potential social acquaintance, people prefer affiliative, pleasant individuals. This necessitates the evolution of perceptual acuity in distinguishing between genuinely prosocial traits and those connoting exploitative intentions. Such intentions can be readily inferred through facial structures connoting personality, even in the absence of other diagnostic cues. We sought to explore how self-reported narcissism, a personality constellation associated with inflated self-views and exploitative intentions, might facilitate one’s ability to detect narcissism in others’ faces as means of identifying social targets who could best satisfy potential exploitative goals. Participants viewed pairs of male and female targets manipulated to connote high and low levels of narcissism before identifying which appeared more narcissistic and indicating their preference among each pair. Narcissistic individuals were no better at accurately identifying other narcissists, but such individuals demonstrated considerable aversion to narcissistic faces. Women higher in exploitative narcissism additionally preferred narcissistic female faces, while men high in exploitative narcissism demonstrated similar patterns of aversion toward narcissistic male faces. Findings provide evidence that narcissistic individuals may adaptively avoid those whom they identify as having similar exploitative behavior repertoire, though when considering the exploitive dimension of narcissism specifically, sex differences emerged.

Keywords: Narcissism; Face perception; Personality preference; Evolutionary psychology

Detecting Personality from Faces
A growing body of literature has demonstrated that people can accurately determine personality information from facial cues (Little and Perrett 2007; Parkinson 2005; Sacco and Brown 2018). Additionally, individuals are able to make these judgments based on limited exposure to others’ faces, sometimes in as little as 50 ms (Borkenau et al. 2009; Penton-Voak et al. 2006; Zebrowitz and Collins 1997). The human ability to infer personality traits based on a single, cursory glance at another’s face may have evolved to facilitate efficient identification of cooperative and exploitative conspecifics, motivating adaptive approach and avoidance behavior, respectively (Borkenau et al. 2004; Sacco and Brown 2018; Zebrowitz and Collins 1997). For example, upon identifying the genuinely affiliative intentions in faces possessing extraverted facial structures, individuals consistently prefer such structures, particularly when motivated to seek affiliative opportunities (Brown et al. 2019a). Conversely, the recognition of facial structures connoting exploitative intentions (e.g., psychopathy) elicits considerable aversion from perceivers (Brown et al. 2017). This efficient identification of affiliative and exploitative conspecifics could thus expedite the avoidance of persons with greater intention to harm others (Haselton and Nettle 2006).

Interpersonal Dynamics of Narcissism
Individuals known to attempt social espionage include those with personality types related to more manipulative behavioral repertoires, including those high in narcissism. In fact, highly narcissistic individuals are particularly motivated to present themselves in a positive light toward others in the service of acquiring access to social capital (Rauthmann 2011). These deceptive interpersonal tactics have been shaped by an evolutionary arms race, wherein narcissistic individuals seek to deceive group members, with such group members subsequently evolving greater capacity to recognize those likely to exploit them (Cosmides and Tooby 1992). Given that narcissistic individuals are especially prone to cheating (Baughman et al. 2014), it would thus be adaptive to identify narcissistic individuals preemptively to reduce the likelihood of falling victim to their exploitation. Indeed, narcissism is readily inferred through various interpersonal behaviors, including dressing provocatively (Vazire et al. 2008) and heightened selfie-taking (e.g., McCain et al. 2016). These inferences are additionally possible through facial features, with narcissism possessing a specific facial structure (Holtzman 2011). Given narcissistic individuals’ ability to mask their intentions through the absence of clear affective cues of manipulative intent, people may subsequently rely on facial structures to detect any such intention in an attempt to avoid those capable of inflicting considerable interpersonal costs, such as those associated with narcissism.
There is no doubt that associating with narcissistic individuals is costly, especially for cooperative individuals who fully participate in group living. However, an association with a narcissist may be even more costly for another narcissist. Narcissistic individuals are, by their very nature, interpersonally dominant and unlikely to be exploited by others (Cheng et al. 2010), which would position them to reap the benefits from social competitions with others (e.g., access to resources; Jonason et al. 2015). This could suggest that one narcissist could be a potential threat to another in their pursuit of social resources if both individuals have similar exploitative aspirations. This recognition of threat could be particularly critical in the mating arena, given that the presence of more narcissistic individuals would result in a reduction in short-term mating opportunities (Holtzman & Strube 2010). For this reason, it would be expected for narcissists to demonstrate aversion to other narcissists in the service of reducing competition for resources and mates among those utilizing similarly exploitative interpersonal strategies.

For years, SAT developers & administrators have declined to say that the test measures intelligence, despite the fact that the SAT can trace its roots through the Army Alpha & Beta tests, & others

What We Know, Are Still Getting Wrong, and Have Yet to Learn about the Relationships among the SAT, Intelligence and Achievement. Meredith C. Frey. J. Intell. 2019, 7(4), 26; December 2 2019, https://doi.org/10.3390/jintelligence7040026

Abstract: Fifteen years ago, Frey and Detterman established that the SAT (and later, with Koenig, the ACT) was substantially correlated with measures of general cognitive ability and could be used as a proxy measure for intelligence (Frey and Detterman, 2004; Koenig, Frey, and Detterman, 2008). Since that finding, replicated many times and cited extensively in the literature, myths about the SAT, intelligence, and academic achievement continue to spread in popular domains, online, and in some academic administrators. This paper reviews the available evidence about the relationships among the SAT, intelligence, and academic achievement, dispels common myths about the SAT, and points to promising future directions for research in the prediction of academic achievement.

Keywords: intelligence; SAT; academic achievement

2. What We Know about the SAT

2.1. The SAT Measures Intelligence

Although the principal finding of Frey and Detterman has been established for 15 years, it bears repeating: the SAT is a good measure of intelligence [1]. Despite scientific consensus around that statement, some are remarkably resistant to accepting the evidence for such an assertion. In the wake of a recent college admissions cheating scandal, Shapiro and Goldstein reported, in a piece for the New York Times, “The SAT and ACT are not aptitude or IQ tests” [6]. While perhaps this should not be alarming, as the authors are not experts in the field, the publication reached more than one million subscribers in the digital edition (the article also appeared on page A14 in the print edition, reaching hundreds of thousands more). And it is false, not a matter of opinion, but rather directly contradicted by evidence.
For years, SAT developers and administrators have declined to call the test what it is; this despite the fact that the SAT can trace its roots through the Army Alpha and Beta tests and back to the original Binet test of intelligence [7]. This is not to say that these organizations directly refute Frey and Detterman; rather, they are silent. On the ETS website, the word intelligence does not appear on the pages containing frequently asked questions, the purpose of testing, or the ETS glossary. If one were to look at the relevant College Board materials (and this author did, rather thoroughly), there are no references to intelligence in the test specifications for the redesigned SAT, the validity study of the redesigned SAT, the technical manual, or the SAT understanding scores brochure.
Further, while writing this paper, I entered the text “does the SAT measure intelligence” into the Google search engine. Of the first 10 entries, the first (an advertisement) was a link to the College Board for scheduling the SAT, four were links to news sites offering mixed opinions, and fully half were links to test prep companies or authors, who all indicated the test is not a measure of intelligence. This is presumably because acknowledging the test as a measure of intelligence would decrease consumers’ belief that scores could be vastly improved with adequate coaching (even though there is substantial evidence that coaching does little to change test scores). One test prep book author’s blog was also the “featured snippet”, or the answer highlighted for searchers just below the ad. In the snippet, the author made the claims that “The SAT does not measure how intelligent you are. Experts disagree whether intelligence can be measured at all, in truth” [8]—little wonder, then, that there is such confusion about the test.

2.2. The SAT Predicts College Achievement

Again, an established finding bears repeating: the SAT predicts college achievement, and a combination of SAT scores and high school grades offer the best prediction of student success. In the most recent validity sample of nearly a quarter million students, SAT scores and high school GPA combined offered the best predictor of first year GPA for college students. Including SAT scores in regression analyses yielded a roughly 15% increase in predictive power above using high school grades alone. Additionally, SAT scores improved the prediction of student retention to the second year of college [9]. Yet many are resistant to using standardized test scores in admissions decisions, and, as a result, an increasing number of schools are becoming “test optional”, meaning that applicants are not required to submit SAT or ACT scores to be considered for admission. But, without these scores, admissions officers lose an objective measure of ability and the best option for predicting student success.
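As a rough illustration of what a ~15% gain in predictive power can look like, the standard two-predictor multiple-correlation formula can be applied to illustrative correlations. The round values below are assumptions broadly in line with published SAT validity research, not figures taken from the study cited above:

```python
from math import sqrt

# Illustrative correlations (assumed round values, not from the cited study):
r_hs     = 0.50  # high school GPA with first-year college GPA
r_sat    = 0.51  # SAT with first-year college GPA
r_hs_sat = 0.55  # high school GPA with SAT

# Multiple correlation of first-year GPA on both predictors
# (standard two-predictor formula).
R2_both = (r_hs**2 + r_sat**2 - 2 * r_hs * r_sat * r_hs_sat) / (1 - r_hs_sat**2)
R_both = sqrt(R2_both)

# Proportional increase in multiple R over using high school GPA alone.
gain = R_both / r_hs - 1
print(f"R (HSGPA alone) = {r_hs:.2f}, R (both) = {R_both:.2f}, gain = {gain:.0%}")
```

With these assumed inputs, adding SAT scores lifts the multiple correlation from .50 to about .57, roughly a 15% increase, which shows how a modestly correlated second predictor can still add meaningful incremental validity.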

2.3. The SAT Is Important to Colleges

Colleges, even nonselective ones, need to identify those individuals whose success is most likely, because that guarantees institutions a consistent revenue stream and increases retention rates, seen by some as an important measure of institutional quality. Selective and highly selective colleges further need to identify the most talented students because those students (or, rather, their average SAT scores) are important for the prestige of the university. Indeed, the correlation between average SAT/ACT scores and college ranking in U.S. News & World Report is very nearly 0.9 [10,11].

2.4. The SAT Is Important to Students

Here, it is worth recalling the reason the SAT was used in admissions decisions in the first place: to allow scholarship candidates to apply for admission to Harvard without attending an elite preparatory school [7]. Without an objective measure of ability, admissions officers are left with assessing not just the performance of the student in secondary education, but also the quality of the opportunities afforded to that student, which vary considerably across the secondary school landscape in the United States. Klugman analyzed data from a nationally representative sample and found that high school resources are an important factor in determining the selectivity of colleges that students apply for, both in terms of programmatic resources (e.g., AP classes) and social resources (e.g., socioeconomic status of other students) [12]. It is possible, then, that relying solely on high school records will exacerbate rather than reduce pre-existing inequalities.
Of further importance, performance on the SAT predicts the probability of maintaining a 2.5 GPA (a proxy for good academic standing) [9]. Universities can be rather costly, and admitting students with little chance of success, who remain enrolled until they either leave of their own accord or are removed for academic underperformance—with no degree to show and potentially large amounts of debt—is hardly the most just solution.

3. What We Get Wrong about the SAT

Nearly a decade ago, Kuncel and Hezlett provided a detailed rebuttal to four misconceptions about the use of cognitive abilities tests, including the SAT, for admissions and hiring decisions: (1) a lack of relationship to non-academic outcomes, (2) predictive bias in the measurements, (3) a problematically strong relationship to socioeconomic status, and (4) a threshold in the measures, beyond which individual differences cease to be important predictors of outcomes [13]. Yet many of these misconceptions remain, especially in opinion pieces, popular books, blogs, and more troublingly, in admissions decisions and in the hearts of academic administrators (see [14] for a review for general audiences).

3.1. The SAT Mostly Measures Ability, Not Privilege

SAT scores correlate moderately with socioeconomic status [15], as do other standardized measures of intelligence. Contrary to some opinions, the predictive power of the SAT holds even when researchers control for socioeconomic status, and this pattern is similar across gender and racial/ethnic subgroups [15,16]. Another popular misconception is that one can “buy” a better SAT score through costly test prep. Yet research has consistently demonstrated that it is remarkably difficult to increase an individual’s SAT score, and the commercial test prep industry capitalizes on, at best, modest changes [13,17]. Short of outright cheating on the test, an expensive and complex undertaking that may carry unpleasant legal consequences, high SAT scores are generally difficult to acquire by any means other than high ability.

That is not to say that the SAT is a perfect measure of intelligence, or only measures intelligence. We know that other variables, such as test anxiety and self-efficacy, seem to exert some influence on SAT scores, though not as much influence as intelligence does. Importantly, though, group differences demonstrated on the SAT may be primarily a product of these noncognitive variables. For example, Hannon demonstrated that gender differences in SAT scores were rendered trivial by the inclusion of test anxiety and performance-avoidance goals [18]. Additional evidence indicates some noncognitive variables—epistemic belief of learning, performance-avoidance goals, and parental education—explain ethnic group differences in scores [19] and variables such as test anxiety may exert greater influence on test scores for different ethnic groups (e.g., [20], in this special issue). Researchers and admissions officers should attend to these influences without discarding the test entirely.

Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design

Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design. Victoria Wai-lan Yeung, Andrew Geers, Simon Man-chun Kam. Current Psychology, February 2019, Volume 38, Issue 1, pp 194–203. https://link.springer.com/article/10.1007/s12144-017-9601-0

Abstract: An experiment was conducted to examine whether the mere possession of a placebo analgesic cream would affect perceived pain intensity in a laboratory pain-perception test. Healthy participants read a medical explanation of pain aimed at inducing a desire to seek pain relief and then were informed that a placebo cream was an effective analgesic drug. Half of the participants were randomly assigned to receive the cream as an unexpected gift, whereas the other half did not receive the cream. Subsequently, all participants performed the cold-pressor task. We found that participants who received the cream but did not use it reported lower levels of pain intensity during the cold-pressor task than those who did not receive the cream. Our findings constitute initial evidence that simply possessing a placebo analgesic can reduce pain intensity. The study represents the first attempt to investigate the role of mere possession in understanding placebo analgesia. Possible mechanisms and future directions are discussed.

Keywords: Placebo effect; Mere possession; Cold pressor; Placebo analgesia; Pain

Past research has demonstrated that placebo analgesics can increase pain relief. The primary focus was on pain relief that occurred following the use of the placebo-analgesic treatment. We tested the novel hypothesis that merely possessing a placebo analgesic can boost pain relief. Consistent with this hypothesis, participants who received but did not use what they were told was a placebo-analgesic cream reported lower levels of pain intensity in a cold-pressor test than did participants who did not possess the cream. To our knowledge, the present data are the first to extend research on the mere-possession phenomenon (Beggan 1992) to the realm of placebo analgesia.

Traditional placebo studies have included both possessing and consuming: Participants first possess an inert object, then they consume or use it and report diminished pain as a consequence (Atlas et al. 2009; de la Fuente-Fernández et al. 2001; Price et al. 2008; Vase et al. 2003). The current study provided initial evidence that consuming or using the placebo analgesic is unnecessary for the effect. However, it remains possible that the effect would be enhanced were possession to be accompanied by consumption or use. This and related hypotheses could be tested in future studies.

In the current experiment, we measured several different variables (fear of pain, dispositional optimism, desire for control, suggestibility, and trait anxiety) that could be considered as potential moderators of the observed placebo-analgesia effect. However, none of them proved significant. Although we remain unsure of the processes responsible for the mere-possession effect we observed, a previously offered account may be applicable. Specifically, participants’ pain reduction may have been induced by a positive expectation of pain relief that was mediated by an elevated perception of self-efficacy in coping with pain (see Peck and Coleman 1991; Spanos et al. 1989). To directly test this possibility in further research, it would be important to measure participants’ self-perceived analgesic efficacy in relation to the mere-possession effect.

It is possible that the mere possession of what participants were told was an analgesic cream induced a positive affect through reception of a free gift. The affect may have influenced participants’ perceived pain intensity. In order to test this possibility, we looked more closely at an item in the State-Anxiety Subscale (Spielberger et al. 1983), specifically, “I feel happy”. Participants in the mere-possession condition did not feel happier (M = 2.47, SD = .96) than those in the no-possession condition (M = 2.80, SD = .70), t(37) = 1.22, p = .23, d = .38, CI95% = [−0.24, 1.00]. Nevertheless, since the participants completed the State-Anxiety Subscale after they received the cream and following the pain-perception test, in order to strictly delineate the effect of affect from other factors, future research should measure participants’ mood after they receive the cream and prior to the pain-perception test. In our study, participants’ pain reduction could not be attributed to the mere-exposure effect because participants in both conditions were initially exposed to the sample of the cream simultaneously. The only difference between the two conditions was that participants in the mere-possession condition were subsequently granted ownership of the sample cream, whereas participants in the no-possession condition were not.
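The reported happiness comparison can be reconstructed from the summary statistics alone. A quick sketch follows; the cell sizes of 20 and 19 are an assumption inferred from df = 37, since the exact group sizes are not given in this excerpt:

```python
from math import sqrt

# Summary statistics for the "I feel happy" item
m1, s1, n1 = 2.47, 0.96, 20  # mere-possession condition (n assumed)
m2, s2, n2 = 2.80, 0.70, 19  # no-possession condition (n assumed)

# Pooled standard deviation for an independent-samples t-test
sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Student's t on df = n1 + n2 - 2 = 37, and Cohen's d
t = (m2 - m1) / (sp * sqrt(1 / n1 + 1 / n2))
d = (m2 - m1) / sp
print(f"t({n1 + n2 - 2}) = {t:.2f}, d = {d:.2f}")
```

Under these assumptions the sketch reproduces the reported t(37) = 1.22, with d coming out at .39 rather than the reported .38; the small discrepancy would reflect rounding or slightly different actual cell sizes.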

A significant group difference in pain perception appeared in the analysis of the MPQ results but not those from the VAS. There are at least two possible reasons for this outcome. First, prior researchers have demonstrated that the VAS is sensitive to changes in perceived pain when participants are asked to continuously report their pain intensity (Joyce et al. 1975; Schafer et al. 2015). In our study, participants reported their pain intensity only once. Whether a significant group difference would be observed if the VAS were administered several times within the 1-min immersion duration is presently unknown. Second, it should be noted that the VAS may not be sensitive to Asians’ pain perception (Yokobe et al. 2014). No similar observation has been made about results from the use of the MPQ.

Our findings add to the placebo-analgesia literature by indicating potential directions for further research, including limitations of our study that will need to be considered. First, we induced participants to seek the reduction of pain and to anticipate the effectiveness of the placebo. Doing so may have optimized the incidence of the mere-possession effect. Second, although our data demonstrated that the effect we observed was not due to a positive feeling in response to receiving a free gift, future studies might involve a control condition in which the gift is not purported to relieve pain. Third, our participants were healthy university students of Chinese ethnicity. Prior research has shown that cultural background influences pain perception (Callister 2003; Campbell and Edwards 2012). Future researchers may extend the ethnic and cultural range of the participants in an effort to generalize the current findings. Moreover, it seems critical to conduct future research with clinical patients who are in demonstrable pain. Lastly, it is unclear whether the mere-possession effect extends to other types of pain-induction tasks, such as those involving heat (e.g., Mitchell et al. 2004; Duschek et al. 2009) or loud noise (e.g., Brown et al. 2015; Rose et al. 2014).

A message coming from behind is interpreted as more negative than a message presented in front of a listener; social information presented from behind is associated with uncertainty and lack of control

Rear Negativity: Verbal Messages Coming from Behind are Perceived as More Negative. Natalia Frankowska, Michal Parzuchowski, Bogdan Wojciszke, Michał Olszanowski, Piotr Winkielman. European Journal of Social Psychology, 29 November 2019. https://doi.org/10.1002/ejsp.2649

Abstract: Many studies have explored the evaluative effects of vertical (up/down) or horizontal (left/right) spatial locations. However, little is known about the role of information that comes from the front and back. Based on multiple theoretical considerations, we propose that spatial location of sounds is a cue for message valence, such that a message coming from behind is interpreted as more negative than a message presented in front of a listener. Here we show across a variety of manipulations and dependent measures that this effect occurs in the domain of social information. Our data are most compatible with theoretical accounts which propose that social information presented from behind is associated with uncertainty and lack of control, which is amplified in conditions of self‐relevance.


General Discussion

Rear Negativity Effect in Social Domain
The present series of studies documents a “rear negativity effect” – a phenomenon where perceivers evaluate social information coming from a source located behind them as more negative than identical information coming from a source located in front of them. We observed this effect repeatedly for a variety of verbal messages (communications in a language incomprehensible to the listeners, neutral communications, positive or negative words spoken in participants’ native language), for a variety of dependent variables (ratings, reaction times), and among different subject populations (Poland, US). Specifically, in Study 1, Polish subjects interpreted Chinese sentences as more negative when presented behind the listener. In Study 2, Polish subjects evaluated feedback from a bogus test as indicative of poorer results when it was presented behind, rather than in front of, them. In Study 3, Polish subjects evaluated the Chinese sentences as the most negative when they were played from behind and when they supposedly described in-group (i.e., Polish) members. In Study 4, US subjects judged negative traits more quickly when the traits supposedly described self-relevant information and were played behind the listener.

Explanations of the effect

The current research extends previous findings that ecological, naturally occurring sounds are detected more quickly and induce stronger negative emotions when presented behind participants (Asutay & Västfjäll, 2015). Critically, the current studies document this effect in the domain of social information and show it to be stronger for, or limited to, the processing of self-relevant information, whether this relevance was induced by reference of the messages to the self or to an in-group. Our characterization of the “rear negativity” effect in the social domain is compatible with several considerations and theoretical frameworks. Most generally, the effect is consistent with a notion common in many cultures that things that take place “behind one’s back” are generally negative. However, accounts of why this is so vary, ranging from metaphor theory and simple links between processing ease and evaluation to affordance and uncertainty theories, as well as attentional and emotion-appraisal accounts.

Spatial metaphors. People not only talk metaphorically, but also think metaphorically, activating mental representations of space to scaffold their thinking in a variety of non-spatial domains, including time (Torralbo et al., 2006), social dominance (Schubert, 2005), emotional valence (Meier & Robinson, 2004), similarity (Casasanto, 2008), and musical pitch (Rusconi et al., 2006). Thus, it is interesting to consider how our results fit with spatial metaphor theories. Specifically, perhaps when people hear a message, they activate a metaphor and, as a result, evaluate the information as being more unpleasant, dishonest, disloyal, false, or secretive when coming from behind than from the front. Our results suggest that reasons for the rear negativity of verbal information go beyond simple metaphorical explanation. This is because this negativity occurs solely or is augmented for information that is personally relevant to the listener, and that it occurs even in paradigms that require fast automatic processing, leaving little time for activation of a conceptual metaphor. Of course, once the valence-to-location mapping is metaphorically established, it could manifest quickly and be stronger for personally-important information. In short, it would be useful to conduct further investigation of the metaphorical account, perhaps by manipulating the degree of metaphor activation, its specific form or its relevance.

Associative learning and cultural interpretations. The valence-location link could have metaphorical origins but could also result from an individual’s personal experiences that create a mental association (Casasanto, 2009). One could potentially examine an individual’s personal history and her cultural setting and see whether a rear location has been linked to negative events. Specifically, everyday experiences could lead to a location-valence association. For example, during conversations an individual may have encountered more high-status people in front of her rather than behind, thus creating an association of respect and location. Or, the individual may have experienced more sounds from behind that are associated with criticism or harassment rather than compliments. Beyond an individual’s own associative history, there is also culture. For example, European cultures used to have a strong preference for facing objects of respect (e.g., not turning your back to the monarch, always facing the church altar). As a result, sounds coming from behind may be interpreted as coming from sources of less respect. More complex interpretative processes may also be involved. As discussed in the context of Study 3, hearing from behind from an out-group about one’s own group can increase the tendency to attribute negative biases to the outgroup. It can then lead to interpreting the outgroup’s utterances as being more critical or even threatening, especially when such utterances are negative (e.g. Judd et al., 2005; Yzerbyt, Judd, & Muller, 2009). However, these speculations are clearly post-hoc and further research is needed to understand the full pattern of results.

Fluency. One simple mechanistic explanation of the current results draws on the idea that difficult (disfluent) processing lowers stimulus evaluations, while easy (fluent) processing enhances evaluations (Winkielman et al., 2003). People usually listen to sounds positioned in front. So, it is possible that sounds coming from behind are perceived as more negative because they are less fluent (or less familiar). However, fluency, besides increasing the experience of positive affect, is also manifested through the speed of processing (i.e. fluent stimuli are recognized faster). Yet, it is worth mentioning that in Study 4 we did not observe an effect of location on overall reaction times. Moreover, previous research suggests that, if anything, information presented from behind is processed faster (Asutay & Västfjäll, 2015). For these reasons, and because the effect is limited to self-relevant information, the fluency approach does not explain the presented effects. However, future research may consider the potential of fluency manipulations to reduce the rear negativity effect.

Affordances. Yet another possible explanation draws on classic affordance theory, which suggests that the world is perceived not only in terms of objects and their spatial relationships, but also in terms of one’s possible actions (Gibson, 1950, 1966). Thus, verbal information located in the back may restrict the listener’s possible actions and hence may cause negative evaluation. However, this explanation is weakened by our observations that the rear negativity effect also appears when participants are seated and blindfolded, so they cannot see in front. Further examination of this account could include a set-up that involves restricting participants’ hands or using virtual reality to manipulate perspective and embodied affordances.