Working with educators, administrators, technologists, and researchers, we built LearnPlatform to help education organizations (e.g., schools, districts, states, and education networks) manage, measure, and improve their edtech efforts. Education organizations across the country use the IMPACT function within LearnPlatform to generate reports that help them understand the costs of their edtech initiatives, the extent to which edtech products are being used, and the impact those products are having on education outcomes. IMPACT also generates the types of evidence that education organizations need to meet ESSA requirements.


Building a Continuum of Evidence

Historically, the randomized controlled trial (RCT) has been considered the gold standard for research designs. Although RCTs are appropriate for addressing some research questions, they fail to inform many of the practical questions and decisions that education organizations face throughout any given day or year. Thus, a continuum of evidence is needed, one that is both practical and rigorous. Education organizations use LearnPlatform to conduct studies that fall along that full continuum. Further, the core research designs facilitated by IMPACT produce evidence that aligns with the levels of evidence required by ESSA.

Framework for Rapid EdTech Evaluation 2.0

IMPACT’s Automated Research Design Feature

A research design refers to the methods and procedures used to conduct a study or evaluation. When an organization conducts an IMPACT Analysis, the platform automatically determines the appropriate research design based on available data, classifying the analysis as one of three designs: (a) control, (b) comparative, or (c) correlative.
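The platform's actual classification logic is proprietary; as a rough sketch, the decision rule described above might look like the following (the function and parameter names are hypothetical, chosen only for illustration):

```python
def classify_design(has_assigned_control: bool, has_matchable_nonusers: bool) -> str:
    """Pick the strongest research design the uploaded data can support.

    has_assigned_control:   some students were explicitly assigned NOT to use
                            the product (a true control group exists)
    has_matchable_nonusers: non-users with covariate data exist, so a virtual
                            control (comparison) group can be constructed
    """
    if has_assigned_control:
        return "control"      # true control group -> strongest design
    if has_matchable_nonusers:
        return "comparative"  # build a matched comparison group
    return "correlative"      # users-only sample
```

The rule falls through from strongest to weakest design, mirroring the ordering of ESSA's evidence levels.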

Control Study Design

To be classified as a control design, the sample must consist of (a) students who were assigned to use the edtech product and (b) students who were assigned not to use the product. The control design indicates that there is a true control group: students do not choose whether to use the product, which alleviates concerns about self-selection. When practical, education organizations may elect to randomly select and assign students to control and treatment groups, which further alleviates concerns about sampling bias and selection effects. With randomization, the evidence gathered from the control design meets what ESSA defines as strong (see ESSA Guidance page 8). Without randomization, a control study generates evidence that at least meets what ESSA defines as moderate, and may also be considered strong based on other considerations (e.g., sample size, effect size, strength of covariates).
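At its core, a control study contrasts outcomes between the two assigned groups. As a simplified illustration only (IMPACT's actual statistical models are not specified here, and the scores below are invented), the comparison can be sketched with Welch's t-statistic:

```python
from statistics import mean, variance

def welch_t(treated: list[float], control: list[float]) -> float:
    """Welch's t-statistic for the treatment-vs-control outcome difference.

    Welch's version does not assume equal variances or equal group sizes,
    which is common when intact classrooms are assigned to conditions.
    """
    n1, n2 = len(treated), len(control)
    se = (variance(treated) / n1 + variance(control) / n2) ** 0.5
    return (mean(treated) - mean(control)) / se

# Hypothetical post-test scores for assigned users and assigned non-users
treated = [78, 85, 90, 88, 82]
control = [70, 75, 72, 80, 74]
t_stat = welch_t(treated, control)  # larger |t| -> stronger evidence of a difference
```

In practice an evaluator would also adjust for baseline covariates (e.g., via ANCOVA) rather than compare raw post-test means.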

Comparative Study Design

For a comparative design, the sample consists of students who used the edtech product along with a comparison group (or virtual control group) of students who did not use the product. Unlike a control study, the comparative design does not have a designated control group (i.e., a group of students assigned not to use the product), so the platform automatically creates a comparison group based on available data. Numerous covariates (e.g., demographics, prior achievement, socioeconomic status) are included in the analysis to ensure the comparison group is properly matched and functionally equivalent to the treatment group. Evidence gathered from comparative studies meets what ESSA considers moderate (see ESSA Guidance pages 8-9).
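IMPACT's matching procedure is not documented here; one common way such a virtual control group can be built is greedy nearest-neighbor matching on covariates. The sketch below uses invented field names and toy records purely for illustration:

```python
def match_comparison(treated: list[dict], nonusers: list[dict]) -> list[tuple]:
    """Greedily pair each product user with the closest unmatched non-user,
    measured by squared Euclidean distance on the covariate vector
    (e.g., prior achievement score, attendance rate)."""
    available = list(nonusers)
    pairs = []
    for student in treated:
        nearest = min(
            available,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(student["cov"], c["cov"])),
        )
        available.remove(nearest)  # match without replacement
        pairs.append((student["id"], nearest["id"]))
    return pairs

# Hypothetical records: covariates are (prior score, attendance rate)
users = [{"id": "T1", "cov": (80, 0.95)}, {"id": "T2", "cov": (60, 0.80)}]
pool = [
    {"id": "C1", "cov": (79, 0.94)},
    {"id": "C2", "cov": (61, 0.82)},
    {"id": "C3", "cov": (90, 0.99)},
]
```

In practice, covariates would be standardized (or combined into a propensity score) before matching so that no single variable dominates the distance.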

Correlative Study Design

For a correlative design, there are no designated control and treatment groups; rather, only a sample of students who used the edtech product is needed to conduct the analysis. The analysis determines the strength and direction of the relationship between using the product and growth on an educational outcome of interest, and it accounts for, or holds constant, the effects of covariates to rule out plausible alternative explanations. Evidence generated through correlative studies meets what ESSA considers promising (see ESSA Guidance pages 8-9).
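The idea of measuring the strength and direction of a relationship while holding a covariate constant can be illustrated with a first-order partial correlation. This is a teaching sketch with invented data, not IMPACT's actual model:

```python
from statistics import mean

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation: strength (magnitude) and direction (sign)
    of the linear relationship between x and y."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def partial_corr(x: list[float], y: list[float], z: list[float]) -> float:
    """Correlation between x and y with covariate z held constant."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

# Hypothetical data: minutes of product use, growth score, prior achievement
usage = [10, 20, 30, 40, 50]
growth = [2, 4, 5, 4, 8]
prior = [60, 70, 65, 80, 75]
```

If the usage-growth relationship survives after holding prior achievement constant, the simpler explanation "stronger students just use the product more" becomes less plausible.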

Finally, ESSA determines that a program demonstrates a rationale if it has a logic model (i.e., a theory to support why it should work) and is informed by existing research on similar interventions, but does not meet the aforementioned three levels. Education organizations can collect evidence that meets this level by running Trials on LearnPlatform, in which educators grade edtech products on the eight core criteria considered when they try, buy, and use educational technologies and provide qualitative feedback. Organizations can also collect this level of evidence by examining the results of rapid cycle evaluations that other schools and districts have conducted on LearnPlatform.

Ultimately, the reports and dashboards generated through LearnPlatform’s research-driven framework for rapid cycle evaluation help education organizations demonstrate and document evidence that directly aligns with ESSA guidelines.

The process remains the same regardless of the design: users upload their data, and the adaptive platform automatically identifies and applies the appropriate research design. Thus, with LearnPlatform, schools and districts of any size can make evidence-based, data-driven decisions that also comply with federal reporting requirements.