Wednesday, September 21, 2022


Uncertainty is a major theme of The Politics of Autism.  In the concluding section, I write:
A key question in autism policy evaluation is simple to pose, hard to answer: How do autistic people benefit? How much better off are they as a result of government action? While there are studies of the short-term impact of various therapies, there is surprisingly little research about the long term, which is really what autistic people and their families care about. As we saw in chapter 4, few studies have focused on the educational attainment of autistic youths. For instance, we do not know much about what happens to them in high school, apart from the kinds of classes that they take. One study searched the autism literature from 1950 through 2011 and found just 13 rigorous peer-reviewed studies evaluating psychosocial interventions for autistic adults. The effects were largely positive, though the main finding of the review is that there is a need for further development and evaluation of treatments for adults.

Giacomo Vivanti has a commentary at Autism Research titled "What does it mean for an autism intervention to be evidence-based?"


Although there is consensus in the field that individuals on the autism spectrum should receive interventions that are evidence-based, the concept of “evidence-based” is multifaceted and subject to ongoing development and debate. In this commentary, we review historical developments, methodological approaches, as well as areas of controversies and research directions in the establishment of an evidence base for autism intervention.

From the commentary:

A relevant dimension of what is tested in a trial is the distinction between efficacy trials and effectiveness trials. Efficacy studies are designed to determine whether an intervention produces beneficial effects under optimal circumstances, that is, well-controlled settings in which a variety of potential confounds are controlled for. For example, in an efficacy trial the intervention might be delivered by highly trained clinicians, with frequent fidelity checks and corrective feedback/re-training in case of low adherence to intervention protocols, often supported by University-based research grants. Additionally, participants might be selected to be homogenous across multiple dimensions, such as age, IQ, absence of specific comorbidities, availability to receive intervention at home or in a clinic for multiple hours per week, and the intervention might be delivered in research settings on top of the usual care that is normally available through community services. By maximizing homogeneity in standards of intervention delivery, participant features, and intervention features (e.g., duration, intensity), efficacy trials allow for causal inference on the internal validity of an intervention (i.e., they address the question “can the intervention work under ideal circumstances?”). The inherent limitation of efficacy trials is that participants, resources, settings, and interventionists might not resemble those in real-world settings, limiting generalizability of the results.

To address this issue, efficacious interventions should then be subject to tests of effectiveness, which measure the degree to which the beneficial effects documented in efficacy trials are obtained when the same intervention is delivered by non-University-based practitioners within normative “usual care” contexts, for example, community clinical or educational settings, and across the populations that those settings have the mandate to serve, for example, individuals who have multiple diagnoses in addition to autism. In these contexts, it is often unfeasible to assign participants to intervention conditions at random without interfering with regulatory constraints, mandates, and performance standards by which agencies are held accountable. Therefore, especially for interventions previously shown to be efficacious in tightly controlled RCTs, effectiveness trials might use a quasi-experimental design (Handley et al., 2018), whereby participants are not randomly assigned to different conditions (e.g., a study comparing outcomes of two preschool programs, using children who have previously enrolled in those programs as participants; Boyd et al., 2014; Vivanti, Prior, et al., 2014; Vivanti, Paynter, et al., 2014). Although the lack of randomization in this type of study increases the risk of bias (for example, one setting might be accessible only to more resourceful families, introducing a systematic bias), the combination of efficacy and effectiveness trials has the potential to establish both the internal validity of an intervention and its impact in real-world settings. Importantly, despite the previously mentioned challenges, there is a small but growing literature of effectiveness studies that use RCT designs (e.g., Kaale et al., 2012; Vivanti et al., 2019).