Three Reasons 90% of EdTech Studies Fail to Show Results

Collectively, education organizations across the nation spend approximately $10B per year on educational technology (edtech), in the hope that these tools will improve important student outcomes (e.g., achievement, engagement, 21st-century skills). Yet nine out of ten rigorous studies of educational interventions find that the treatments produce little or no impact on their intended outcomes (1). Because immense resources (e.g., time and money) are invested in edtech tools, and because these tools promise to revolutionize education, this lack of results is certainly disconcerting. At Lea(R)n, we boldly affirm that results matter, so whom should we hold accountable for these disappointing findings? The entire educational ecosystem. Why? Because the problem is systemic, and the solution requires healthy collaboration among the various members of that ecosystem (e.g., edtech developers, researchers, and education organizations). Below, I discuss the top three reasons why edtech products are failing to demonstrate results.

Product Development Must be Informed by Theory and Science

Educational interventions that fail to incorporate science into their design and development will likely fail to demonstrate results. Learning is a complex phenomenon, but learning science has gathered a multidisciplinary community of researchers (e.g., psychologists, physiologists, neuroscientists) to explain its various intricacies. Leveraging sound scientific principles, we can understand and explain how the learning process works. The development of learning solutions should therefore be informed by sound learning theories, and developers should be able to clarify the explanatory mechanism(s) through which their product will produce a change in a given outcome. Not only should developers be clear about this theory of change, but they should also collect various levels of evidence throughout the development process. For instance, developers can elucidate the theory of change during planning, collect user feedback during design, examine usage patterns in later development stages, and test efficacy after implementation. Ultimately, developers should be able to explain the learning science that informs their product’s purported impact, and they should continuously test and evaluate the extent to which the product is producing its intended effects.
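As one small illustration of the usage-pattern step above, a developer might aggregate per-student activity from event logs. Here is a minimal sketch in Python; the event records and field names are hypothetical, not drawn from any particular product.

```python
from collections import defaultdict

# Hypothetical event log entries: (student_id, date, minutes_of_active_use)
events = [
    ("s01", "2024-03-01", 12),
    ("s01", "2024-03-03", 25),
    ("s02", "2024-03-01", 5),
    ("s03", "2024-03-02", 40),
]

# Aggregate total active minutes per student to surface usage patterns
usage = defaultdict(int)
for student, _date, minutes in events:
    usage[student] += minutes

for student, total in sorted(usage.items()):
    print(f"{student}: {total} active minutes")
```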

Research Must be both Rigorous and Relevant

The perfect research study is unquestionably elusive, and even if an intervention is effective, one or more methodological flaws or limitations (e.g., a small sample, omitted variables, invalid measures, improper analyses, violated statistical assumptions) can bias a study enough for it to fail to demonstrate results. This is precisely why researchers so scrupulously demand rigor when designing or critiquing any given study. However, even when a study is designed with rigor in mind, if the methods are inappropriate for the research question(s) under investigation, then null results may be falsely attributed to the intervention. There is no one-size-fits-all solution to the multitude of questions faced by edtech developers and education organizations. There is no “silver bullet,” and even the “gold standard” (i.e., the randomized controlled trial, or RCT) has been shown to suffer from threats to validity (2), especially when the research question or context calls for an alternative design. Science is an art: creatively crafting an adaptable methodology that leverages multiple techniques enables the researcher to generate the kinds of evidence needed to address the myriad inquiries present in education contexts.
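To make the small-sample point concrete, here is a brief simulation; it is a sketch with illustrative numbers (an assumed modest true effect of d = 0.2), not a claim about any real study. It estimates statistical power, the probability that a study detects a genuinely effective intervention, at several sample sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.2  # assumed modest standardized effect (Cohen's d)
ALPHA = 0.05

def power(n_per_group, n_sims=5000):
    """Fraction of simulated studies that detect the true effect at p < ALPHA."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        hits += p < ALPHA
    return hits / n_sims

for n in (20, 100, 400):
    print(f"n = {n:>3} per group -> power ~ {power(n):.2f}")
```

With these illustrative numbers, a 20-per-group study detects the real effect well under half the time, so its null result reflects the design more than the intervention; only the larger samples find the effect reliably.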

Proper Attention Must be Paid to Implementation

If an edtech product is not implemented as intended (i.e., fidelity is not met), then it will likely fail to demonstrate results. Products should be designed with specific dosage recommendations; that is, developers should know how much usage is required to produce a given outcome. Assuming the dosage recommendation is supported by evidence, if that dosage is not met, then it is unfair to hold the product responsible for a lack of impact. Ideally, a product will come with both a dosage recommendation and explicit instructions for optimizing implementation, and the education organization will have adequate resources as well as an environment conducive to proper implementation. Unfortunately, these conditions often go unmet, and an appropriately impartial study will report zero impact regardless of who is at fault. Said plainly, it is important to consider fidelity of implementation when determining who should be held accountable for a product’s impact (or lack thereof).
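As a simple illustration of such a fidelity check, the sketch below compares logged usage against a dosage recommendation before any verdict is rendered on the product; the threshold and usage figures are hypothetical.

```python
# Assumed evidence-based dosage recommendation (hypothetical threshold)
RECOMMENDED_MINUTES_PER_WEEK = 60

# Hypothetical minutes of logged usage per student in a given week
weekly_usage = {"s01": 75, "s02": 15, "s03": 58, "s04": 90}

# Flag each student who met the recommended dosage, then compute compliance
met_dosage = {s: mins >= RECOMMENDED_MINUTES_PER_WEEK
              for s, mins in weekly_usage.items()}
compliance = sum(met_dosage.values()) / len(met_dosage)

print(f"{compliance:.0%} of students met the recommended dosage")
# Low compliance suggests a null efficacy result may reflect implementation,
# not the product itself.
```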

Conclusion

The finding that nine out of ten educational interventions demonstrate little or no impact on educational outcomes is disappointing. However, we can change this pattern if we work collaboratively. Edtech developers must design and develop products grounded in sound theory and learning science, researchers must design studies whose methods fit thoughtful research questions, and education organizations must do their best to implement products as intended. If all of these happen in harmony, high-quality products will be built that improve important student outcomes in authentic education settings, and evidence of that impact will be demonstrated and disseminated by the research community.

1. Coalition for Evidence-Based Policy. (2013, July). Randomized controlled trials commissioned by the Institute of Education Sciences since 2002: How many found positive versus weak or no effects.

2. Ginsburg, A., & Smith, M. S. (2016, March 15). Do randomized controlled trials meet the ‘gold standard’? American Enterprise Institute. Retrieved March 18, 2016, from http://www.aei.org/publication/do-randomized-controlled-trials-meet-the-gold-standard/


Dr. Daniel Stanhope