Monday, November 27, 2017

Variability in patient outcomes is weakly related or unrelated to therapist competence, training, and adherence

Common versus specific factors in psychotherapy: opening the black box. Roger Mulder, Greg Murray, Julia Rucklidge. The Lancet Psychiatry, Volume 4, Issue 12, December 2017, Pages 953-962. https://doi.org/10.1016/S2215-0366(17)30100-1

Summary: Do psychotherapies work primarily through the specific factors described in treatment manuals, or do they work through common factors? In attempting to unpack this ongoing debate between specific and common factors, we highlight limitations in the existing evidence base and the power battles and competing paradigms that influence the literature. The dichotomy is much less clear-cut than it might first appear. Most specific factor theorists now concede that common factors have importance, whereas the common factor theorists produce increasingly tight definitions of bona fide therapy. Although specific factors might have been overplayed in psychotherapy research, some are effective for particular conditions. We argue that continuing to espouse common factors with little evidence or endless head-to-head comparative studies of different psychotherapies will not move the field forward. Rather than continuing the debate, research needs to encompass new psychotherapies such as e-therapies, transdiagnostic treatments, psychotherapy component studies, and findings from neurobiology to elucidate the effective process components of psychotherapy.

---
Additionally, the dissemination of findings leads to further bias. Negative trials are less likely to be reported, thereby inflating effect sizes. Low-quality studies often result in larger effect sizes. Trial registration is poor, so we cannot know whether outcomes are selectively reported, particularly by groups with a strong allegiance to the treatments. Findings from a 2017 systematic review showed that only 12% of psychotherapy trials were prospectively registered with clearly defined primary outcome measures.

One obvious approach to the dodo bird problem is to test whether different therapies do lead to different outcomes. Head-to-head comparisons generally suggest small differential effects, which are smaller and non-significant after researcher allegiance is controlled for. However, this literature has substantial limitations. Most studies have investigated cognitive therapy or CBT as one of the treatment groups, so specific strengths of other approaches are poorly understood. Only a narrow range of treatment outcome measures have been systematically examined, most typically acute symptom reduction; longer-term effects, including relapse prevention measures for common chronic conditions, might differentiate some therapies for some problems. Differences might be revealed if a wider range of treatment outcome measures were used, including functioning, quality of life, and individualised measures of treatment outcome. However, such trials are expensive and rarely undertaken. Differences might also be larger if moderating factors such as individual differences between patients were accounted for in outcome modelling.

Another way to test the specific factor model is through therapist adherence. Improved adherence to theory-specified factors in evidence-supported therapies should improve patient outcomes, if these specific factors are important to the success of the therapy. However, the evidence has not generally supported this hypothesis, with findings from a meta-analysis showing that variability in neither competence nor adherence was related to patient outcome, suggesting that these variables are relatively inert therapeutic agents. The broader literature is split on this question, with some investigators finding no effect of treatment integrity on outcomes, some a positive effect, and some a negative effect (potentially due to an overly rigid application of technique, which could be detrimental to the therapeutic alliance for some clients). Extent of training might also not be relevant to outcome, as suggested by the work of Stanley and colleagues. Indeed, therapeutic alliance, a common factor, might be a more important variable for instigating change than therapeutic adherence, although even these effect sizes are modest (mean alliance–outcome correlation 0·26).
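To put that alliance-outcome figure in perspective, here is a quick back-of-the-envelope sketch (my own illustration in Python, not from the paper): a correlation of 0.26 corresponds to only about 7% of shared variance between alliance and outcome.

# Quick check (illustrative, not from the paper): how much outcome variance
# does an alliance-outcome correlation of r = 0.26 actually account for?
r = 0.26
variance_explained = r ** 2
print(f"r^2 = {variance_explained:.3f}")              # ~0.068, i.e. roughly 7% of outcome variance
print(f"unexplained = {1 - variance_explained:.3f}")  # ~93% of the variance lies elsewhere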

Regardless, common factor researchers argue that outcome studies do not answer the most important outstanding question in psychotherapy: what are the mechanisms of change? Whereas the importance of specific factors has been estimated from effect sizes of targeted therapies compared with plausible controls, the importance of common factors has been estimated correlationally, through the association between therapy outcomes and patient reports of rapport and engagement. Effect sizes from comparisons of targeted therapies with controls permit causal inferences; a correlation between therapy outcomes and patient engagement does not, and will be confounded by overlap between the success of therapy and the client's satisfaction with the therapist. Therapeutic alliance is fundamentally dyadic (ie, a reciprocal working relationship), which sits uncomfortably with the more medical notion of the patient as a recipient of the therapist's activities.
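The confounding worry is easy to demonstrate with a toy simulation (my own sketch, not from the paper): if both the patient's alliance ratings and the final outcome track how well therapy is already going, a sizeable alliance-outcome correlation appears even when alliance has no causal effect on outcome at all. The numbers below (effect weight 0.5, n = 5000) are arbitrary choices for illustration.

import numpy as np

# Toy simulation: a latent "how well therapy is going" variable drives both
# patient-rated alliance and the final outcome; alliance itself has no effect on outcome.
rng = np.random.default_rng(0)
n = 5000
improvement = rng.normal(size=n)                    # latent course of therapy
alliance = 0.5 * improvement + rng.normal(size=n)   # alliance ratings track improvement
outcome = 0.5 * improvement + rng.normal(size=n)    # outcome also tracks improvement
print(np.corrcoef(alliance, outcome)[0, 1])         # ~0.2, purely from the shared confound

The spurious correlation here (about 0.2) is in the same range as the reported alliance-outcome correlations, which is exactly why correlational designs cannot settle the mechanism question.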

Finally, psychotherapy research is difficult and expensive to conduct, and, without the commercial investment that occurs in pharmacotherapy research, deficits of the existing evidence base are attributable simply to low power and the small number of studies. For example, although the effectiveness of behavioural therapy for obsessive compulsive disorder is similar to that of pharmacological treatment, investigators of a meta-analysis of psychotherapy and pharmacotherapy for obsessive compulsive disorder found 15 psychotherapy trials with a total of 705 patients, by contrast with 32 pharmacotherapy trials with a total of 3588 patients.
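A rough power calculation (my own, with illustrative effect sizes, not from the paper) shows why trials of this size struggle to detect anything but large differences: 705 patients spread over 15 trials is about 47 patients per trial, roughly 23 per arm in a two-arm design.

# Back-of-the-envelope power calculation (illustrative assumptions, not from the paper).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.3, 0.5):  # small, small-to-moderate, and moderate standardized differences
    n_per_arm = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {n_per_arm:.0f} patients per arm for 80% power")

print(f"average trial size in the cited meta-analysis: {705 / 15:.0f} patients")

At roughly 23 patients per arm, only differences on the order of d = 0.8 or larger would be detected reliably, which is consistent with the argument that the evidence base is limited by power rather than by the therapies themselves.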
