Tuesday, December 3, 2019

What We Know, Are Still Getting Wrong, and Have Yet to Learn about the Relationships among the SAT, Intelligence and Achievement. Meredith C. Frey. J. Intell. 2019, 7(4), 26; December 2, 2019. https://doi.org/10.3390/jintelligence7040026

Abstract: Fifteen years ago, Frey and Detterman established that the SAT (and later, with Koenig, the ACT) was substantially correlated with measures of general cognitive ability and could be used as a proxy measure for intelligence (Frey and Detterman, 2004; Koenig, Frey, and Detterman, 2008). Since that finding, replicated many times and cited extensively in the literature, myths about the SAT, intelligence, and academic achievement continue to spread in popular domains, online, and among some academic administrators. This paper reviews the available evidence about the relationships among the SAT, intelligence, and academic achievement, dispels common myths about the SAT, and points to promising future directions for research in the prediction of academic achievement.

Keywords: intelligence; SAT; academic achievement

2. What We Know about the SAT

2.1. The SAT Measures Intelligence

Although the principal finding of Frey and Detterman has been established for 15 years, it bears repeating: the SAT is a good measure of intelligence [1]. Despite scientific consensus around that statement, some are remarkably resistant to accepting the evidence for it. In the wake of a recent college admissions cheating scandal, Shapiro and Goldstein reported, in a piece for the New York Times, “The SAT and ACT are not aptitude or IQ tests” [6]. While perhaps this should not be alarming, as the authors are not experts in the field, the publication reached more than one million subscribers in the digital edition (the article also appeared on page A14 in the print edition, reaching hundreds of thousands more). And the claim is false, not a matter of opinion, but directly contradicted by evidence.
For years, SAT developers and administrators have declined to call the test what it is; this despite the fact that the SAT can trace its roots through the Army Alpha and Beta tests and back to the original Binet test of intelligence [7]. This is not to say that these organizations directly refute Frey and Detterman; rather, they are silent. On the ETS website, the word intelligence does not appear on the pages containing frequently asked questions, the purpose of testing, or the ETS glossary. If one were to look at the relevant College Board materials (and this author did, rather thoroughly), there are no references to intelligence in the test specifications for the redesigned SAT, the validity study of the redesigned SAT, the technical manual, or the SAT understanding scores brochure.
Further, while writing this paper, I entered the text “does the SAT measure intelligence” into the Google search engine. Of the first 10 entries, the first (an advertisement) was a link to the College Board for scheduling the SAT, four were links to news sites offering mixed opinions, and fully half were links to test prep companies or authors, all of whom indicated the test is not a measure of intelligence. This is presumably because acknowledging the test as a measure of intelligence would decrease consumers’ belief that scores could be vastly improved with adequate coaching (even though there is substantial evidence that coaching does little to change test scores). One test prep book author’s blog was also the “featured snippet”, or the answer highlighted for searchers just below the ad. In the snippet, the author made the claim that “The SAT does not measure how intelligent you are. Experts disagree whether intelligence can be measured at all, in truth” [8]. Little wonder, then, that there is such confusion about the test.

2.2. The SAT Predicts College Achievement

Again, an established finding bears repeating: the SAT predicts college achievement, and a combination of SAT scores and high school grades offers the best prediction of student success. In the most recent validity sample of nearly a quarter million students, SAT scores and high school GPA combined offered the best prediction of first-year GPA for college students. Including SAT scores in regression analyses yielded a roughly 15% increase in predictive power over using high school grades alone. Additionally, SAT scores improved the prediction of student retention to the second year of college [9]. Yet many are resistant to using standardized test scores in admissions decisions, and, as a result, an increasing number of schools are becoming “test optional”, meaning that applicants are not required to submit SAT or ACT scores to be considered for admission. But without these scores, admissions officers lose an objective measure of ability and the best available predictor of student success.
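The incremental contribution of SAT scores over high school grades alone can be illustrated with a hierarchical regression on synthetic data. This is a minimal sketch, not the College Board's analysis: the correlations assumed below (0.4 between high school GPA and SAT, 0.5 of each with first-year GPA) are illustrative values chosen for the example, and the R-squared figures it prints reflect those assumptions, not the validity study's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Generate three correlated standardized variables via a Cholesky factor.
# Assumed (illustrative) correlations: HSGPA-SAT 0.4, each 0.5 with FYGPA.
corr = np.array([[1.0, 0.4, 0.5],   # high school GPA
                 [0.4, 1.0, 0.5],   # SAT
                 [0.5, 0.5, 1.0]])  # first-year college GPA
data = rng.standard_normal((n, 3)) @ np.linalg.cholesky(corr).T
hsgpa, sat, fygpa = data.T

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Step 1: high school grades alone; step 2: add SAT scores.
r2_hsgpa = r_squared(hsgpa[:, None], fygpa)
r2_both = r_squared(np.column_stack([hsgpa, sat]), fygpa)
print(f"R^2, HSGPA alone:  {r2_hsgpa:.3f}")
print(f"R^2, HSGPA + SAT:  {r2_both:.3f}")
print(f"incremental R^2:   {r2_both - r2_hsgpa:.3f}")
```

With these assumed correlations the second model explains noticeably more variance than grades alone, which is the pattern the validity studies report; the exact size of the gain depends entirely on the correlations one plugs in.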

2.3. The SAT Is Important to Colleges

Colleges, even nonselective ones, need to identify those individuals whose success is most likely, because that guarantees institutions a consistent revenue stream and increases retention rates, seen by some as an important measure of institutional quality. Selective and highly selective colleges further need to identify the most talented students because those students (or, rather, their average SAT scores) are important for the prestige of the university. Indeed, the correlation between average SAT/ACT scores and college ranking in U.S. News & World Report is very nearly 0.9 [10,11].

2.4. The SAT Is Important to Students

Here, it is worth recalling the reason the SAT was used in admissions decisions in the first place: to allow scholarship candidates to apply for admission to Harvard without attending an elite preparatory school [7]. Without an objective measure of ability, admissions officers are left with assessing not just the performance of the student in secondary education, but also the quality of the opportunities afforded to that student, which vary considerably across the secondary school landscape in the United States. Klugman analyzed data from a nationally representative sample and found that high school resources are an important factor in determining the selectivity of colleges to which students apply, both in terms of programmatic resources (e.g., AP classes) and social resources (e.g., socioeconomic status of other students) [12]. It is possible, then, that relying solely on high school records will exacerbate rather than reduce pre-existing inequalities.
Of further importance, performance on the SAT predicts the probability of maintaining a 2.5 GPA (a proxy for good academic standing) [9]. Universities can be rather costly, and admitting students who have little chance of success, who remain enrolled until they either leave of their own accord or are removed for academic underperformance, and who depart with no degree to show and potentially large amounts of debt, is hardly the most just solution.

3. What We Get Wrong about the SAT

Nearly a decade ago, Kuncel and Hezlett provided a detailed rebuttal to four misconceptions about the use of cognitive ability tests, including the SAT, in admissions and hiring decisions: (1) a lack of relationship to non-academic outcomes, (2) predictive bias in the measurements, (3) a problematically strong relationship to socioeconomic status, and (4) a threshold in the measures, beyond which individual differences cease to be important predictors of outcomes [13]. Yet many of these misconceptions remain, especially in opinion pieces, popular books, and blogs, and, more troublingly, in admissions decisions and in the hearts of academic administrators (see [14] for a review for general audiences).

3.1. The SAT Mostly Measures Ability, Not Privilege

SAT scores correlate moderately with socioeconomic status [15], as do other standardized measures of intelligence. Contrary to some opinions, the predictive power of the SAT holds even when researchers control for socioeconomic status, and this pattern is similar across gender and racial/ethnic subgroups [15,16]. Another popular misconception is that one can “buy” a better SAT score through costly test prep. Yet research has consistently demonstrated that it is remarkably difficult to increase an individual’s SAT score, and the commercial test prep industry capitalizes on, at best, modest changes [13,17]. Short of outright cheating on the test, an expensive and complex undertaking that may carry unpleasant legal consequences, high SAT scores are generally difficult to acquire by any means other than high ability.
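The claim that the SAT's predictive power "holds even when researchers control for socioeconomic status" is typically checked with a partial correlation, or equivalently a regression that includes SES as a covariate. A minimal sketch on synthetic data follows; the correlations assumed here (SES with SAT 0.4, SES with first-year GPA 0.3, SAT with first-year GPA 0.5) are illustrative values, not estimates from [15,16].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic standardized variables with assumed, illustrative correlations.
corr = np.array([[1.0, 0.4, 0.3],   # SES
                 [0.4, 1.0, 0.5],   # SAT
                 [0.3, 0.5, 1.0]])  # first-year college GPA
ses, sat, fygpa = (rng.standard_normal((n, 3)) @ np.linalg.cholesky(corr).T).T

def residualize(y, x):
    """Residuals of y after removing its linear dependence on x."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation of SAT and first-year GPA, controlling for SES:
# correlate the parts of each variable that SES does not explain.
r_raw = np.corrcoef(sat, fygpa)[0, 1]
r_partial = np.corrcoef(residualize(sat, ses), residualize(fygpa, ses))[0, 1]
print(f"raw SAT-FYGPA correlation:             {r_raw:.3f}")
print(f"partial correlation, SES removed:      {r_partial:.3f}")
```

Under these assumptions the partial correlation shrinks only modestly from the raw correlation, i.e., the SAT's association with achievement is not merely SES in disguise; again, the actual magnitudes depend on the correlations assumed.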

That is not to say that the SAT is a perfect measure of intelligence, or only measures intelligence. We know that other variables, such as test anxiety and self-efficacy, seem to exert some influence on SAT scores, though not as much influence as intelligence does. Importantly, though, group differences demonstrated on the SAT may be primarily a product of these noncognitive variables. For example, Hannon demonstrated that gender differences in SAT scores were rendered trivial by the inclusion of test anxiety and performance-avoidance goals [18]. Additional evidence indicates some noncognitive variables—epistemic belief of learning, performance-avoidance goals, and parental education—explain ethnic group differences in scores [19] and variables such as test anxiety may exert greater influence on test scores for different ethnic groups (e.g., [20], in this special issue). Researchers and admissions officers should attend to these influences without discarding the test entirely.
