Sunday, February 5, 2023

Rolf Degen summarizing... The average effect sizes in a “null field” such as homeopathy are a good indicator of the extent to which the tunnel vision of the researchers involved alone can conjure up positive results

Homeopathy can offer empirical insights on treatment effects in a null field. Matthew K. Sigurdson, Kristin L. Sainani & John P.A. Ioannidis. Journal of Clinical Epidemiology, February 01, 2023. https://doi.org/10.1016/j.jclinepi.2023.01.010

Abstract

Objectives: A “null field” is a scientific field where there is nothing to discover and where observed associations are thus expected to simply reflect the magnitude of bias. We aimed to characterize a null field using a known example, homeopathy (a pseudoscientific medical approach based on using highly diluted substances), as a prototype.

Study design: We identified 50 randomized placebo-controlled trials of homeopathy interventions from highly-cited meta-analyses. The primary outcome variable was the observed effect size in the studies. Variables related to study quality or impact were also extracted.

Results: The mean effect size for homeopathy was 0.36 standard deviations (Hedges’ g; 95% CI: 0.21, 0.51) better than placebo, which corresponds to an odds ratio of 1.94 (95% CI: 1.69, 2.23) in favor of homeopathy. 80% of studies had positive effect sizes (favoring homeopathy). Effect size was significantly correlated with citation counts from journals in the Directory of Open Access Journals and CiteWatch. We identified common statistical errors in 25 studies.

Conclusion: A null field like homeopathy can exhibit large effect sizes, high rates of favorable results, and high citation impact in the published scientific literature. Null fields may represent a useful negative control for the scientific process.
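As a side note on the reported numbers: a standardized mean difference can be re-expressed as an approximate odds ratio via the logistic-distribution conversion ln(OR) = g·π/√3 (Chinn, 2000). Applied to the abstract's g = 0.36 this gives an OR of about 1.92, close to the reported 1.94; the authors' exact method may differ, so treat this as an illustrative sketch, not their computation.

```python
import math

def smd_to_odds_ratio(g: float) -> float:
    """Convert a standardized mean difference (e.g., Hedges' g) to an
    approximate odds ratio using the logistic-distribution conversion
    ln(OR) = g * pi / sqrt(3) (Chinn, 2000)."""
    return math.exp(g * math.pi / math.sqrt(3))

# Point estimate from the abstract: g = 0.36
print(round(smd_to_odds_ratio(0.36), 2))  # 1.92, close to the reported OR of 1.94
```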


While overall income inequality rose over the past 5 decades, the rise in overall consumption inequality was small; the declining quality of income data likely contributes to these differences for the bottom of the distribution

Consumption and Income Inequality in the United States Since the 1960s. Bruce D. Meyer and James X. Sullivan. Journal of Political Economy, Feb 2023. https://doi.org/10.1086/721702

Abstract: Recent research concludes that the rise in consumption inequality mirrors, or even exceeds, the rise in income inequality. We revisit this finding, constructing improved measures of consumption, focusing on its well-measured components that are reported at a high and stable rate relative to national accounts. While overall income inequality rose over the past 5 decades, the rise in overall consumption inequality was small. The declining quality of income data likely contributes to these differences for the bottom of the distribution. Asset price changes likely account for some of the differences in recent years for the top of the distribution.


Messages generated by AI are persuasive across a number of policy issues, including weapon bans, a carbon tax, and a paid parental-leave program; participants rated the author of AI messages as being more factual and logical, but less angry & unique

Bai, Hui, Jan G. Voelkel, Johannes C. Eichstaedt, and Robb Willer. 2023. “Artificial Intelligence Can Persuade Humans on Political Issues.” OSF Preprints. February 5. doi:10.31219/osf.io/stakv

Abstract: The emergence of transformer models that leverage deep learning and web-scale corpora has made it possible for artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets in the US and globally. Here, we investigate whether the currently most powerful, openly-available AI model – GPT-3 – is capable of influencing the beliefs of humans, a social behavior recently seen as a unique purview of other humans. Across three preregistered experiments featuring diverse samples of Americans (total N=4,836), we find consistent evidence that messages generated by AI are persuasive across a number of policy issues, including an assault weapon ban, a carbon tax, and a paid parental-leave program. Further, AI-generated messages were as persuasive as messages crafted by lay humans. Compared to the human authors, participants rated the author of AI messages as more factual and logical, but less angry, less unique, and less likely to use story-telling. Our results show the current generation of large language models can persuade humans, even on polarized policy issues. This work raises important implications for regulating AI applications in political contexts, to counter its potential use in misinformation campaigns and other deceptive political activities.


Continuing education workshops do not produce sustained skill development—quite the opposite; any modest improvement in performance erodes over time without further coaching

The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Henny A. Westra. Psychotherapy Research, Dec 16 2022. https://doi.org/10.1080/10503307.2022.2141588

Abstract: Wampold et al.’s 1997 meta-analysis found that the true difference between bona fide psychotherapies is zero, supporting the Dodo bird conjecture that “All have won and must have prizes”. Two and a half decades later, the field continues to be slow to absorb this and similar uncomfortable discoveries. For example, entirely commensurate with Wampold’s conclusion is the meta-analytic finding that adherence to a given model of psychotherapy is unrelated to therapy outcomes (Webb et al., 2010). Despite the clear implication that theoretical models should not be the main lens through which psychotherapy is viewed if we are aiming to improve outcomes, therapists continue to identify themselves primarily by their theoretical orientation. And a major corollary of Wampold’s conclusions is that, despite the evidence for the nonsuperiority of any given model, our focus in training continues to be model-driven. This article seeks to elaborate the training implications of Wampold et al.’s conclusion, with a rationale and appeal to incorporate process-centered training.

Consider these similarly uncomfortable findings regarding the state of training. We assume, rather than verify, the efficacy of our training programs. Yet there is no evidence that continuing education workshops, for example, produce sustained skill development—quite the opposite. Large effects on self-report measures are found, but any modest improvement in actual performance erodes over time without further coaching (Madson et al., 2019). Perhaps most concerning, psychotherapists do not appear to improve with experience; in fact, the evidence suggests that skills may decline slightly over time (Goldberg et al., 2016). Not surprisingly, then, while the number of model-based treatments has proliferated, the rate of client improvement has not followed suit (Miller et al., 2013). Could stagnant training methods be related to stagnant patient outcomes?

We need innovations in training that better align our training foci and methods with factors empirically supported as influencing client outcomes. Process researchers have long observed that trained process coders (typically trained for research purposes) make better therapists due to their enhanced attunement (e.g., Binder & Strupp, 1997). While such training is not yet available in training programs, it arguably should be, based on emerging developments in the science of expertise (Ericsson & Pool, 2016) and the urgent need to bring outcome information forward in real time so that it can be used to make responsive adjustments to the process of therapy. In fact, such information could be considered “routine outcome monitoring in real time” (Westra & Di Bartolomeo, 2022).

To elaborate, Tracey et al. (2014) provocatively argued that acquiring expertise in psychotherapy may not even be possible. This is because the ability to predict outcomes is crucial to shaping effective performance. Yet there is a lack of feedback available to therapists regarding the outcomes of their interventions and such information, if it comes at all, comes too late to make a difference in the moment. Therapists are essentially like blind archers attempting to shoot at a target. The development of Routine Outcome Monitoring (ROM) measures capable of forecasting likely outcomes is a major advance in correcting this blindness and improving predictive capacity. However, in order to be effective for skill development, feedback needs to occur more immediately so that the relationship between the therapist action and the client response (or nonresponse) can be quickly ascertained and adjustments made in real time. Interestingly, while ROM has been helpful in improving failing cases, it has not been effective in enhancing clinical skills more generally (Miller et al., 2013).

Learning to preferentially attend to, extract, and continuously integrate empirically supported process data may prove to be the elusive immediate feedback that has been lacking in psychotherapy training but that is crucial to developing expertise. Observable process data that process science has validated as differentiating good from poor patient outcomes could be considered “little outcomes,” which in turn are related to session outcomes and ultimately to treatment outcome (Greenberg, 1986). Moreover, thin-slicing research supports the idea that it is possible to make judgements about important outcomes from even tiny slices of expressive behavior (Ambady & Rosenthal, 1992). If one considers real-time process information as micro-outcomes, properly trained clinicians, just like expert-trained process coders, may no longer have to be blind. For example, a therapist trained to identify and monitor resistance and signals of alliance ruptures can continuously track these important phenomena and responsively adjust to safeguard the alliance. Or a therapist who is sensitive to markers of low and high levels of experiencing (Pascual-Leone & Yeryomenko, 2017) and client ambivalence (Westra & Norouzian, 2018) can not only optimize the timing of their interventions but also continuously watch the client for feedback on the success of their ongoing efforts.

Being steeped in process research gives one a unique perspective on the promise of process observation to advance clinical training. Our lab recently took our first foray into studying practicing community therapists. As we coded the session videotapes, we became aware that we possessed a unique skill set that was absent in the therapists’ test interviews. Therapists seemed to be guided solely by some model of how to bring about change and failed to simultaneously appreciate the ebb and flow of the relational context of the work. They seemed absorbed in their own moves (their model), unaware that they were in a dance and must continually track and coordinate the process with their partner. It seemed that we had incidentally trained ourselves to detect and use these process signals. Our training was different and distinctive; it was more akin to deliberate practice focused on discrimination training for detecting empirically supported processes.

In short, information capable of diagnosing the health of the process and, critically, of forecasting eventual outcomes is arguably hiding in plain sight if one can acquire the requisite observational capacity to harvest it. And transforming an unpredictable environment into a predictable one makes expertise possible to acquire (Kahneman, 2011). Importantly, extracting such vital information relies on observational skill, rather than patient report, end-of-session measures, or longer-term outcome; thus, such real-time data extraction is immediately accessible and can complement existing outcome monitoring (Westra & Di Bartolomeo, 2022). Moreover, process markers are often opaque, requiring systematic observational training for successful detection. Without proper discrimination and perceptual acuity training, this gilded information remains obscured. Thus, heeding Wampold et al.’s call to refocus our efforts must include innovations in training; innovations that harness outcome information. We need more process research to further uncover the immediately observable factors capable of differentiating poor and good outcomes, but existing process science gives us a good start. And since process-centered training is transtheoretical, it can exist alongside models of therapy—learning to see while doing (Binder & Strupp, 1997). Training in psychotherapy has primarily prioritized intervention (models); now it may be time to emphasize observation.

Psychotherapeutic experience seems to be unrelated to patients’ change in psychopathology

Germer, S., Weyrich, V., Bräscher, A.-K., Mütze, K., & Witthöft, M. (2022). Does practice really make perfect? A longitudinal analysis of the relationship between therapist experience and therapy outcome: A replication of Goldberg, Rousmaniere, et al. (2016). Journal of Counseling Psychology, 69(5), 745–754. Jan 2023. https://doi.org/10.1037/cou0000608

Abstract: Experience is often regarded as a prerequisite of high performance. In the field of psychotherapy, research has yielded inconsistent results regarding the association between experience and therapy outcome. However, this research was mostly conducted cross-sectionally. A longitudinal study from the U.S. recently indicated that psychotherapists’ experience was not associated with therapy outcomes. The present study aimed at replicating the Goldberg, Rousmaniere, et al. (2016) study in the German healthcare system. Using routine evaluation data of a large German university psychotherapy outpatient clinic, the effect of N = 241 therapists’ experience on the outcomes of their patients (N = 3,432) was assessed longitudinally using linear and logistic multilevel modeling. Experience was operationalized using the number of days since the first patient of a therapist as well as using the number of patients treated beforehand. Outcome criteria were defined as change in general psychopathology as well as response, remission, and early termination. Several covariates (number of sessions per case, licensure, and main diagnosis) were also examined. Across all operationalizations of experience (time since first patient and number of cases treated) and therapy outcome (change in psychopathology, response, remission, and early termination), results largely suggest no association between therapists’ experience and therapy outcome. Preliminary evidence suggests that therapists need fewer sessions to achieve the same outcomes as they gain more experience. Therapeutic experience seems to be unrelated to patients’ change in psychopathology. This lack of findings is of importance for improving postgraduate training and the quality of psychotherapy in general.


Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey

Co-Writing with Opinionated Language Models Affects Users' Views. Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman. arXiv Feb 1 2023. https://arxiv.org/abs/2302.00560
Abstract: If large language models like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write - and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.