Thursday, July 29, 2021

A valid evaluation of the theory of multiple intelligences is not yet possible: Problems of methodological quality for intervention studies

A valid evaluation of the theory of multiple intelligences is not yet possible: Problems of methodological quality for intervention studies. Marta Ferrero, Miguel A. Vadillo, Samuel P. León. Intelligence, Volume 88, September–October 2021, 101566. https://doi.org/10.1016/j.intell.2021.101566

Highlights

• A meta-analysis of the impact of MIT-inspired interventions on learning was performed.

• The qualitative analysis showed that the primary studies have important methodological flaws.

• The reported effect sizes were remarkably larger than those typically found in education.

• The use of MIT-inspired interventions to enhance academic achievement is not recommended.

Abstract: Since Gardner suggested that human beings hold multiple intelligences, numerous teachers have adapted and incorporated the multiple intelligence theory (MIT) into their daily routine in the classroom. However, to date, the efficacy of MIT–inspired methodologies remains unclear. The focus of the present study was to perform a systematic review and a meta–analysis to assess the impact of these interventions on academic achievement through reading, maths, or science tests. The inclusion criteria for the review required that studies should estimate quantitatively the impact of an MIT–based intervention on academic performance and that they follow a pre–post design with a control group. The final sample included 39 articles comprising data from 3009 pre-school to high school students, with diverse levels of achievement, from 14 different countries. The results showed that the studies had important methodological flaws, such as small sample sizes or a lack of active control groups; they also reported insufficient information about key elements, such as the tools employed to measure the outcomes or the specific activities performed during training, and revealed signs of publication or reporting biases that impeded a valid evaluation of the efficacy of MIT applied in the classroom. The educational implications of these results are discussed.

Keywords: Intervention; Multiple intelligences; Systematic review; Meta–analysis

4. Discussion

Since Gardner developed his theory about the existence of multiple intelligences, a growing number of teachers have adapted and incorporated the theory into their daily routine in the classroom (White, 2004). In spite of this unexpected success, as Gardner himself has recurrently recognized, there are no solid data about the effectiveness of applying MIT–inspired interventions on the academic achievement of students. To date, there are only two meta–analyses on this matter and, as we have discussed above, both of them present important methodological shortcomings, such as the absence of any assessment of the quality of the studies included or a lack of control for publication bias. The aim of the present systematic review was to assess the quality of the studies testing the impact of MIT–inspired instructional methodologies on the academic achievement of learners, overcoming the existing flaws of previous reviews as much as possible.

In general, the qualitative analysis of the results showed that the studies included in this review have important methodological flaws and report insufficient information about essential elements to make a critical appraisal of the methods, such as whether participants and instructors were blind to experimental manipulation, or whether the measures employed were reliable and valid. Perhaps more importantly, only a handful of studies described the intervention undertaken in sufficient detail to allow its replication. In other words, there is no way of knowing what the interventions consisted of and how the dependent variable was measured. When methodological information was given, many of the studies failed to meet important quality criteria, such as the randomisation of participants or the inclusion of an active control group. In fact, only a couple of quality criteria were clearly fulfilled by the majority of studies.

The quantitative analysis of the data replicates the results of previous meta–analyses, but with important caveats. As explained in the introduction, Bas (2016) and Batdi (2017) reported large effect sizes for MIT–based interventions (d = 1.077 and 0.95, respectively). Consistent with them, we found remarkably large effect sizes of gΔ = 1.49 and gp = 1.15. The sheer size of these effects should, on its own, be sufficient reason for skepticism (Pashler, Rohrer, Abramson, Wolfson, & Harris, 2016). To put these effect sizes in proper context, Fig. 7 shows the distribution of gΔ and gp from the studies included in the present meta–analysis, together with the effect sizes (standardized mean differences) of two large sets of high–quality educational studies commissioned by the Education Endowment Foundation (EEF) in the UK and the National Center for Educational Evaluation and Regional Assistance (NCEE) in the USA (Lortie-Forgues & Inglis, 2019). It is clear that the effects reported for the MIT–based interventions reviewed here are remarkably larger than the effects reported by the studies funded by these two institutions. They are also much larger than the typical effect sizes reported in psychological research (Funder & Ozer, 2019; Rubio-Aparicio, Marín-Martínez, Sánchez-Meca, & López-López, 2018).


Fig. 7
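
For readers less familiar with these indices, the following is a minimal sketch of how standardized mean differences of this kind are commonly computed for pre–post designs with a control group; gp and gΔ here correspond to g_p and g_Δ below, and the exact estimators and standardizers used by Ferrero et al. may differ in detail:

\[
g_p = J \cdot \frac{\bar{X}_{T,\mathrm{post}} - \bar{X}_{C,\mathrm{post}}}{SD_{\mathrm{pooled,\,post}}},
\qquad
g_{\Delta} = J \cdot \frac{(\bar{X}_{T,\mathrm{post}} - \bar{X}_{T,\mathrm{pre}}) - (\bar{X}_{C,\mathrm{post}} - \bar{X}_{C,\mathrm{pre}})}{SD_{\mathrm{pooled,\,pre}}},
\qquad
J = 1 - \frac{3}{4(n_T + n_C) - 9}
\]

Here T and C denote the treatment and control groups and J is Hedges' small-sample correction. On this scale, and under a normality assumption, values of 1.15 and 1.49 would imply that the average treated student outperformed roughly 87–93% of control students, an extraordinarily large gain for a classroom intervention.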

What factors could explain the striking difference between the effect sizes found in the present studies and those reported in other areas of educational research? The funnel plots depicted in Fig. 6 offer a plausible answer to this question. As can be seen, the largest effect sizes come from the studies with the lowest precision, that is, with the smallest numbers of participants. This pattern of results suggests that the average effect size is probably inflated by the (large) effects reported by the lowest–quality studies.
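
As a rough illustration of why precision tracks sample size in a funnel plot, consider the standard large-sample approximation for the standard error of a standardized mean difference (a sketch, not necessarily the exact formula used in this meta-analysis):

\[
SE(g) \approx \sqrt{\frac{n_T + n_C}{n_T\, n_C} + \frac{g^2}{2(n_T + n_C)}}
\]

The standard error shrinks roughly in proportion to 1/\sqrt{n}, so small studies scatter widely across the bottom of the funnel; if only the small studies that happen to find large, positive effects reach publication, the funnel becomes asymmetric and the pooled estimate is pulled upward.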

In addition, all the studies commissioned by the EEF and the NCEE are required to meet the highest methodological standards, including the use of adequately powered sample sizes, active control groups, reliable and valid outcome measures, preregistered methods and analyses, and unconditional publication regardless of outcome (Lortie-Forgues & Inglis, 2019). In comparison, Fig. 2 shows that only a handful of the studies reviewed here complied with these standards. Only one of the studies included an active control group. This is unfortunate, because the available evidence shows that educational studies relying on passive control groups yield grossly overestimated effect sizes (Sala & Gobet, 2017). In fact, the inclusion of an active control group has been considered a decisive measure for testing the efficacy of educational interventions (e.g. Datta, 2007), as long as the expectations of students in the active control group are guaranteed to be the same as those of students in the experimental group (Boot, Simons, Stothart, & Stutts, 2013).

None of the studies were preregistered, which, again, is an essential protection against biases in research (Kaplan & Irvin, 2015; Warren, 2018), as it reduces researchers' degrees of freedom and questionable research practices, such as the selective publication of analyses that “worked” (Simmons, Nelson, & Simonsohn, 2011). Similarly, measurement error can inflate effect sizes when a population effect size is estimated across small sample sizes (Loken & Gelman, 2017), a bias whose impact on the present studies is difficult to estimate because most of them failed to report psychometric information about the dependent measures. Fig. 2 also shows that none of the articles reviewed explicitly stated that participants and instructors were blind to the experimental manipulation, which means that the results of the interventions could be entirely due to the positive expectations of participants, as mentioned above (Boot et al., 2013). Although difficult, it is possible to blind participants and instructors through the use of active control groups in which the actors involved do not know whether they are being trained with the intervention under study or with an alternative one.

Given these caveats (and other problems highlighted in Fig. 2), the fact that the effect sizes reported in this literature are large is unsurprising. In our opinion, this literature should not be taken as evidence that MIT–based interventions work. All in all, although the majority of studies included in the present work suggested that MIT–inspired interventions yielded significant improvements in the academic achievement of students, it is imperative to interpret these results in the light of critical shortcomings that have emerged in the qualitative and quantitative analyses of the data.

To put these results in context, it is also important to note that the main tenet of MIT about the existence of multiple intelligences is not supported by the scientific community. Research in cognitive psychology has systematically pointed to the existence of a single intelligence, or general factor, that explains most of the variance in cognitive performance across different tasks (Lubinski, 2004; Visser et al., 2006a). Most relevant for this study, the central claim regarding the application of MIT in schools lacks sound evidence. According to this claim, all the intelligences should be used as channels when presenting new material, so that students experience the material via their strongest intelligence and understanding is thereby promoted. However, studies in the field of learning psychology have shown that the best way to learn something is usually determined by the content itself, and not by the particular abilities or, in Gardner's terms, the specific intelligence profiles of learners (Willingham, 2004). In other words, according to the best evidence available so far, teaching should be subordinated to the object of learning, not to the characteristics of individual learners.

Aside from these important gaps in the theory and its translation into classroom practice, any attempt to test the efficacy of MIT–inspired interventions in the future should address the methodological flaws of the existing literature that we have highlighted in the present review. Ideally, these studies should adopt experimental designs, use large samples, guarantee the blinding of participants and instructors, include an active control group, and follow detailed reporting guidelines, including precise information about the sample, procedure, and materials employed in the study, so that the results can be replicated by independent researchers.

MIT might have contributed to rethinking some important questions among educators, such as the fact that children are unique and valuable regardless of their capacities and that schools are responsible for helping all of them bring out their best and find their real interests and strengths. It has also highlighted that, too often, schools have focused exclusively on purely academic skills, such as reading or mathematics, at the expense of other skills, such as music or corporal expression, leading many children to fail to find their real interests and strengths. Bearing this undeniable contribution to education in mind, it is understandable that many teachers have embraced MIT-inspired interventions in the classroom with great enthusiasm. However, as shown in the present study, the evidence gathered to date on the effectiveness of these educational actions does not allow for a valid assessment of their impact on learning. Given the importance of implementing well-grounded methods of instruction in the classroom (Cook & Cook, 2004), it is imperative to perform high-quality research on the effectiveness of MIT-based interventions before their use in the classroom can be recommended or promoted.

Preference for younger female mates is observed in species that exhibit long-term reproductive pair bonds (humans & hamadryas baboons); preference for older females is observed in species that mate promiscuously (chimps & savannah baboons)

The Effects of Age on Mate Choice Across Primate Species and its Correlation to Mating Systems. Ece Kremers. BSc Thesis, Univ. of Minnesota, Spring 2021. https://static1.squarespace.com/static/593ffd7ac534a5e73da04ccf/t/608c24f148a2f16ef8409d8f/1619797234523/

Age can be an important factor in mate choice as it affects experience, access to resources, and reproductive value. The hypothesis of this thesis is that in species with long-term mating bonds, males will prefer to mate with younger females, whereas in species that mate promiscuously, males will prefer to mate with older females. Tackling this question of the effect age has on mate choice will help contribute to knowledge on mating behavior and mate choice in primates. The methods for this thesis included gathering evidence from the scientific literature on mating patterns and mate choice in humans (Homo sapiens), chimpanzees (Pan troglodytes), hamadryas baboons (Papio hamadryas), and savannah baboons. Due to the large variation in human behavior across the globe, a cross-cultural analysis is used to draw conclusions regarding how men tend to perceive attractiveness in terms of age and how they choose potential reproductive partners. Similarities and differences in mating patterns and perceptions of attractiveness in primates are examined. The cross-cultural analysis concluded that men generally find youthfulness attractive. Preference for younger female mates is observed in species that exhibit long-term reproductive pair bonds (humans and hamadryas baboons), whereas preference for older females is observed in species that mate promiscuously (chimpanzees and savannah baboons). Females across all species prefer to mate in ways that increase the survival of their offspring. For species with long-term pair bonds, this means a preference for males of high social status, independent of age. In species that do not form long-term pair bonds, this means mating with many males in order to confuse paternity and reduce the risk of infanticide. However, in most primate species (those other than humans), it can be challenging to determine female mate preferences because of the suppression of female choice through sexual coercion and male-male competition.


The social facilitation of eating: Why does the mere presence of others cause an increase in energy intake?

The social facilitation of eating: Why does the mere presence of others cause an increase in energy intake? Helen K. Ruddock, Jeffrey M. Brunstrom, Suzanne Higgs. Physiology & Behavior, July 28 2021, 113539. https://doi.org/10.1016/j.physbeh.2021.113539

Highlights

• People eat more when eating with friends and family, relative to when eating alone

• This is known as the ‘social facilitation of eating’

• We discuss gaps in the understanding of this phenomenon, and highlight areas for future research

Abstract: There is strong evidence that people eat more when eating with friends and family, relative to when eating alone. This is known as the ‘social facilitation of eating’. In this review, we discuss several gaps in the current scientific understanding of this phenomenon, and in doing so, highlight important areas for future research. In particular, we discuss the need for research to establish the longer-term consequences of social eating on energy balance and weight gain, and to examine whether people are aware of social facilitation effects on their own food intake. We also suggest that future research should aim to establish individual and contextual factors that moderate the social facilitation of eating (e.g. sex/gender), and it should clarify how eating socially causes people to eat more. Finally, we propose a novel evolutionary framework in which we suggest that the social facilitation of eating reflects a behavioural strategy that optimises the evolutionary fitness of individuals who share a common food resource.

Key words: Social facilitation of eating; Social influences; Eating behaviour; Evolution


Both laypeople & working professionals (fraud investigators and auditors) use suspects’ angry responses to accusations as cues of guilt, but such anger is an invalid cue of guilt and is instead a valid cue of innocence

Anger Damns the Innocent. Katherine A. DeCelles et al. Psychological Science, July 28, 2021. https://doi.org/10.1177/0956797621994770

Abstract: False accusations of wrongdoing are common and can have grave consequences. In six studies, we document a worrisome paradox in perceivers’ subjective judgments of a suspect’s guilt. Specifically, we found that people (including online panelists, n = 4,983, and working professionals such as fraud investigators and auditors, n = 136) use suspects’ angry responses to accusations as cues of guilt. However, we found that such anger is an invalid cue of guilt and is instead a valid cue of innocence; accused individuals (university students, n = 230) and online panelists (n = 401) were angrier when they were falsely rather than accurately accused. Moreover, we found that individuals who remain silent are perceived to be at least as guilty as those who angrily deny an accusation.

Keywords: accusations, deception, guilt, affect, decision making, open data, open materials, preregistered