Sunday, June 5, 2016

Replication

Uncertainty is a major theme of The Politics of Autism. One source of uncertainty is the scarcity of replication studies.

At Remedial and Special Education, Matthew C. Makel and colleagues have an article titled "Replication of Special Education Research: Necessary but Far Too Rare." The abstract:
Increased calls for rigor in special education have often revolved around the use of experimental research design. However, the replicability of research results is also a central tenet of the scientific research process. To assess the prevalence, success rate, and authorship history of replications in special education, we investigated the complete publication history of every replication published in the 36 journals categorized by ISI Web of Knowledge Journal Citation Report as special education. We found that 0.5% of all articles reported seeking to replicate a previously published finding. More than 80% of these replications reported successfully replicating previous findings. However, replications where there was at least one author overlapping with the original article (which happens about two thirds of the time) were statistically significantly more likely to find successful results.
The incentive structure of academia may contribute to the problem. A few years ago, cognitive scientist Gary Marcus wrote:
For many reasons, science has become a race for the swift, but not necessarily the careful. Grants, tenure, and publishing all depend on flashy, surprising results. It is difficult to publish a study that merely replicates a predecessor, and it’s difficult to get tenure (or grants, or a first faculty job) without publications in elite journals. From the time a young scientist starts a Ph.D. to the time they’re up for tenure is typically thirteen years (or more), at the end of which the no-longer young apprentice might find him or herself out of a job. It is perhaps, in hindsight, no small wonder that some wind up cutting corners. Instead of, for example, rewarding scientists largely for the number of papers they publish—which credits quick, sloppy results that might not be reliable—we might reward scientists to a greater degree for producing solid, trustworthy research that other people are able to successfully replicate and then extend.