Sunday, August 5, 2018

Symptom misinformation: Participants received feedback for some symptoms (targets) misleadingly suggesting that a slight majority of their peers experienced these targets on a regular basis; symptom ratings subsequently went down for control symptoms but not for target symptoms.

Merckelbach, H., Dalsklev, M., van Helvoort, D., Boskovic, I., & Otgaar, H. (2018). Symptom self-reports are susceptible to misinformation. Psychology of Consciousness: Theory, Research, and Practice. http://dx.doi.org/10.1037/cns0000159

Abstract: We examined whether self-reported symptoms are affected by explicit and implicit misinformation. In Experiment 1, undergraduates (N = 60) rated how often they experienced somatic and psychological symptoms. During a subsequent interview, they were exposed to misinformation about 2 of their ratings: One was inflated (upgraded misinformation), whereas another was deflated (downgraded misinformation). Close to 82% of the participants accepted the upward symptom misinformation, whereas 67% accepted the downward manipulation. Also, 27% confabulated reasons for upgraded symptom ratings, whereas 8% confabulated reasons for downgraded ratings. At a follow-up test, some days later, participants (n = 55) tended to escalate their symptom ratings in accordance with the upgraded misinformation. Such internalization was less clear for downgraded misinformation. There was no statistically significant relation between dissociativity and acceptance or internalization of symptom misinformation. In Experiment 2, a more subtle and implicit form of misinformation was employed. Undergraduates (N = 50) completed a checklist of symptoms and were provided with feedback for some symptoms (targets), misleadingly suggesting that a slight majority of their peers experienced these targets on a regular basis. Next, participants rated the checklist again. Overall, symptom ratings went down for control but not for target symptoms. Taken together, our results demonstrate that symptom reports are susceptible to misinformation. The systematic study of symptom misinformation may help to understand iatrogenic effects in psychotherapy.

The effects of color bands on zebra finch behavior, physiology, life history, and fitness: Common knowledge that is NON-REPRODUCIBLE

Wang, D., Forstmeier, W., Ihle, M., Khadraoui, M., Jerónimo, S., Martin, K., & Kempenaers, B. (2018). Irreproducible text‐book “knowledge”: The effects of color bands on zebra finch fitness. Evolution. https://doi.org/10.1111/evo.13459

Abstract: Many fields of science—including behavioral ecology—currently experience a heated debate about the extent to which publication bias against null findings results in a misrepresentative scientific literature. Here, we show a case of an extreme mismatch between strong positive support for an effect in the literature and a failure to detect this effect across multiple attempts at replication. For decades, researchers working with birds have individually marked their study species with colored leg bands. For the zebra finch Taeniopygia guttata, a model organism in behavioral ecology, many studies over the past 35 years have reported effects of bands of certain colors on male or female attractiveness and further on behavior, physiology, life history, and fitness. Only eight of 39 publications presented exclusively null findings. Here, we analyze the results of eight experiments in which we quantified the fitness of a total of 730 color‐banded individuals from four captive populations (two domesticated and two recently wild derived). This sample size exceeds the combined sample size of all 23 publications that clearly support the “color‐band effect” hypothesis. We found that band color explains no variance in either male or female fitness. We also found no heterogeneity in color‐band effects, arguing against both context and population specificity. Analysis of unpublished data from three other laboratories strengthens the generality of our null finding. Finally, a meta‐analysis of previously published results is indicative of selective reporting and suggests that the effect size approaches zero when sample size is large. We argue that our field—and science in general—would benefit from more effective means to counter confirmation bias and publication bias.