FAQs: IMPACT™ Analysis

For answers and information on many common topics, simply select the question below. 

Of course, you can always contact us if the information provided doesn’t answer your question or you'd like additional information. 


What is an IMPACT™ Analysis?

Developed by Lea(R)n, an IMPACT™ Analysis is an evidence-based analytical methodology that integrates data from multiple sources (e.g., usage, achievement, demographics, cost) to generate reports and dashboards that provide insights into the implementation and IMPACT™ of educational interventions. An IMPACT™ Analysis includes both qualitative and quantitative data, maximizing insight by pairing product efficacy data with teacher feedback. This state-of-the-art methodology employs sophisticated analytics that help schools and districts better understand how edtech is being used in their organizations and which products contribute to meaningful education outcomes (e.g., engagement, achievement, 21st century skills). These insights inform critical instructional, operational, and financial decisions, allowing administrators to identify and implement the most effective educational interventions for their classrooms.

Why did Lea(R)n develop IMPACT™ Analysis?

As schools and districts integrate educational technologies, questions arise about which products are used, how much they are used, and whether they are working. IMPACT™ Analysis was designed to address these questions. It integrates data from multiple sources (e.g., usage, achievement, pricing) to produce evidence-based reports and dashboards on student engagement and product efficacy, providing insights on both the implementation and IMPACT™ of educational interventions.

How are reports from the Lea(R)n IMPACT™ Analysis different from others?

IMPACT™ Analysis reports are driven by a scientific methodology designed to deliver practical, on-demand insights that inform instructional, operational, and financial decisions. The research-backed methodology includes a proprietary grading rubric, scoring algorithms, and sophisticated analytics developed with key stakeholders (e.g., educators and administrators) and vetted by psychometricians and applied scientists. A rigorous psychometric approach was used to develop the Lea(R)n grading rubric, which educators use to evaluate edtech products and differentiate effective from ineffective technologies. Rigorous scientific approaches were also used to develop the analytics engine that drives the IMPACT™ Analysis, which leverages multiple research methods and flexibly adapts the specific research design (i.e., controlled, comparative, or correlative) based on the data entered into the system.

How does IMPACT™ Analysis work? What is the methodology?

Once data are uploaded, the advanced IMPACT™ analytics engine generates insights into product engagement and IMPACT™. A backend clustering algorithm groups students into natural usage clusters and identifies patterns across student groups with differing levels of usage. A quantile analysis then partitions students into subgroups based on levels of prior performance and examines the efficacy of the product in improving education outcomes for each performance group. A fidelity analysis partitions students based on the extent to which they achieved the recommended dosage, and then examines the efficacy of the product for each group. The built-in covariate analysis allows IMPACT™ Analysis to account for differences such as student demographics (e.g., gender, ethnicity, socioeconomic status), grade level, and prior achievement when identifying the efficacy of an edtech product. A cost analysis provides information on the total cost of ownership, the cost-effectiveness of an edtech product, and the amount of money spent on different usage clusters and fidelity groups. The on-demand analytics dashboards display edtech product insights in a transparent and easy-to-use way. For an in-depth description of the research methodologies used in the IMPACT™ Analysis, click here.
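To make one step concrete, here is a minimal sketch (in Python) of a fidelity-style partition: students are grouped by how much of a recommended dosage they reached, and average outcomes are compared across groups. The dosage value and column names are assumptions for illustration, not LearnPlatform's actual schema or algorithm.

```python
# Minimal fidelity-analysis sketch (illustrative only; "minutes_used",
# "post_score", and the dosage value are hypothetical).
import pandas as pd

RECOMMENDED_MINUTES = 600  # hypothetical recommended dosage per term

students = pd.DataFrame({
    "minutes_used": [50, 700, 320, 900, 0, 610],
    "post_score":   [61, 82, 70, 88, 58, 79],
})

# Fidelity = share of the recommended dosage each student actually reached.
students["fidelity"] = students["minutes_used"] / RECOMMENDED_MINUTES

# Partition students by dosage attainment, then compare average outcomes.
students["fidelity_group"] = pd.cut(
    students["fidelity"],
    bins=[-0.01, 0.5, 1.0, float("inf")],
    labels=["below half", "partial", "met or exceeded"],
)
print(students.groupby("fidelity_group", observed=True)["post_score"].mean())
```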

What types of data are needed to run a Lea(R)n IMPACT™ Analysis?

IMPACT™ Analysis generates multiple types of reports and dashboards depending on user needs and data availability. For example, if a user wants to know how certain products are being used, usage data (e.g., logins, modules completed, syllabus progress) are required at the targeted level of analysis (e.g., student or school). If the user wants to know about ROI, usage and pricing data (e.g., price per student or site) are needed, with the addition of recommended dosage amounts. If a user wants to understand product IMPACT™, usage and achievement data (e.g., test scores, course grades, nonacademic outcomes) are needed. In all of the aforementioned analyses, we highly recommend including additional student- and school-level covariates (e.g., student demographics, prior achievement, school urbanicity) and publicly available data to add to the depth and breadth of insight offered by the analysis. The addition of covariates adds accuracy to the results by enabling the analysis to statistically control for potentially confounding variables and by helping achieve baseline equivalence among students prior to the analysis. For more information on covariate data, see below.
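As a rough illustration of how these inputs might fit together, the hypothetical student-level table below combines usage, pricing, achievement, and covariate fields. The column names are invented for this example and do not represent a required LearnPlatform format.

```python
# Hypothetical input layout combining the data types described above.
import pandas as pd

records = pd.DataFrame({
    "student_id":        ["s1", "s2"],
    "logins":            [42, 7],          # usage data
    "price_per_student": [12.50, 12.50],   # pricing data
    "pre_score":         [61, 73],         # prior achievement (covariate)
    "post_score":        [68, 74],         # achievement outcome
    "grade_level":       [6, 6],           # covariate
    "frl_status":        [1, 0],           # covariate (free/reduced lunch)
})
print(records.head())
```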

How can I access usage and engagement data?

Product companies collect data on the extent to which their products are used, and they are responsible for providing that information to administrators. However, the quality, accessibility, and comprehensiveness of the data provided by edtech companies vary from product to product. For information on how to access usage data, visit the product company’s website or contact a product company representative. Links to product company websites can be found in Lea(R)n’s product library.

How can I find recommended dosage information?

Product companies provide recommended dosages for their products. Ideally, this dosage information should be backed by research. If recommended dosage information is available, it may be accessed on the product company’s website or by contacting a product company representative. Schools and districts are also encouraged to establish their own dosage recommendations when they have a rationale for requiring specific levels of usage.

How do you determine which achievement outcome to use for a given product?

The achievement outcome in the IMPACT™ Analysis should be a precise measure of the educational construct that the product aims to IMPACT™. Further, the achievement outcome should attempt to match the specificity of the educational construct — how narrow or broad the predictor is should dictate how narrow or broad the outcome is. For example, if a product purports to improve a student’s proficiency in algebra, then the achievement outcome should be a metric that assesses a student’s proficiency in algebra, instead of a metric that examines a student’s proficiency in statistics or their overall proficiency in mathematics. Although the latter will likely show some degree of correlation with the algebra metric, it’s ideal to have a measure that matches the specificity of the product’s desired effect. The user should determine the exact outcome that the product is supposed to IMPACT™, and then determine a measure that best represents that precise outcome.

Are there any requirements for the type of metric that can be used for the achievement outcome?

IMPACT™ Analysis is agnostic with regard to how the achievement metric is defined for each analysis. The achievement metric should be the educational outcome that the edtech product purports to IMPACT™, which allows the analysis to measure whether the edtech produces the intended effect. The only requirement to run the analysis is that the achievement criterion is a quantitative metric (e.g., test scores, percentiles, numeric ratings). The analysis can handle achievement metrics that are continuous (e.g., test scores), ordinal (e.g., proficiency level), or binary (e.g., pass/fail, retained/not retained, improved/not improved). Some examples of the types of achievement metrics that can be used are end-of-grade test scores, content area test scores, course grades, retention rates, graduation rates, self-efficacy, or 21st century skills.
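For illustration, the sketch below shows one way the three metric shapes might be encoded in a dataset; the encodings and column names are assumptions, not a prescribed format.

```python
# Sketch of the three accepted metric shapes (assumed encodings).
import pandas as pd

outcomes = pd.DataFrame({
    "test_score":  [412.0, 389.5, 440.0],  # continuous
    "proficiency": pd.Categorical(
        ["basic", "proficient", "advanced"],
        categories=["basic", "proficient", "advanced"],
        ordered=True,                       # ordinal
    ),
    "passed":      [1, 0, 1],               # binary (pass/fail)
})
print(outcomes.dtypes)
```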

What is a trial (or pilot) and how is it integrated into the IMPACT™ Analysis?

A trial (or pilot) uses a research-backed survey to help users gather feedback and insight from educators regarding edtech IMPACT™. It allows stakeholders to generate qualitative data (educator insights and comments) and quantitative data (product grades on the eight core criteria) from educators across an entire school, district, or state. In addition to product feedback sourced from verified educators in LearnCommunity, trial results are integrated as one section of the IMPACT™ Analysis report, allowing users to better understand how their educators — and those in the LearnCommunity — evaluate the product on the core criteria deemed most important when trying, buying, or using an edtech product.

What is the LearnCommunity?

The LearnCommunity is a free resource that allows educators to share trusted recommendations and best practices with an online community of more than 100,000 verified educators. Using Lea(R)n’s research-backed rubric, data integrations, and powerful filters, LearnCommunity allows educators to access tailored insights that improve instruction and outcomes.

How was the rubric created for grading edtech products?

Using rigorous scientific and psychometric methods, Dr. Daniel S. Stanhope led the research to identify the eight core criteria that are most important for educators when they try, buy, and use educational technologies. Based on these criteria, Lea(R)n developed a rubric and protocol that educators can use to grade products. When educators grade products on LearnPlatform, they do so on a sliding scale from F to A+. Individual grades are assigned for each of the core criteria and a holistic (or overall) grade is derived from Lea(R)n’s proprietary grading algorithm.

What are usage clusters and how are they formed?

Usage clusters are subsets of students grouped together based on how much they use an edtech product (e.g., low use, moderate use, high use). IMPACT™ Analysis statistically generates these clusters from natural usage patterns using an advanced clustering algorithm. The algorithm identifies the optimal number of clusters based on similarities in students’ usage trends. IMPACT™ then compares student achievement and product efficacy across these usage clusters.
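This FAQ does not name the specific clustering algorithm, so the sketch below uses k-means with a silhouette criterion as one common stand-in: students are grouped by usage, and the number of clusters is chosen by the silhouette score.

```python
# Usage-clustering sketch with k-means (a stand-in; the actual algorithm
# behind IMPACT Analysis is not specified in this FAQ).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical weekly minutes of product use for 90 students.
usage = np.concatenate([
    rng.normal(10, 3, 30),    # low use
    rng.normal(60, 8, 30),    # moderate use
    rng.normal(150, 15, 30),  # high use
]).reshape(-1, 1)

# Pick the number of clusters that maximizes the silhouette score.
best_k, best_score = 2, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(usage)
    score = silhouette_score(usage, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"optimal clusters: {best_k} (silhouette={best_score:.2f})")
```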

What is an effect size?

An effect size is a measure derived from a statistical analysis that quantifies the difference between two groups, and is often used to express the magnitude of a given intervention’s effect. An effect size can be used to infer the extent to which an intervention was effective for one group (treatment group) versus another group (control/comparison group). The larger the effect size, the more IMPACT™ an edtech intervention had on the treatment group (e.g., sample of students who were assigned to use an edtech product). A negative effect size implies that the treatment group performed worse on the given achievement outcome than did the comparison group (e.g., students who were not assigned to use the edtech product). In an IMPACT™ Analysis, the effect size can be interpreted as a measure of the extent to which an edtech product (or intervention) had an IMPACT™ (positive or negative) on the specified achievement outcome. By including student- and school-level covariates (e.g., student demographics, poverty, school urbanicity), IMPACT™ makes statistical adjustments to the effect size in order to control for potential confounds and extraneous factors.
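As a worked example, the sketch below computes Cohen's d, one widely used effect size (the difference in group means divided by the pooled standard deviation). The exact effect size formula used in IMPACT™ Analysis is not specified here, so this is illustrative only.

```python
# Cohen's d sketch (illustrative; not necessarily the formula used by
# IMPACT Analysis).
import numpy as np

treatment = np.array([78, 85, 90, 74, 88], dtype=float)  # used the product
control   = np.array([72, 80, 75, 70, 79], dtype=float)  # did not

# Pooled standard deviation across the two groups.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(
    ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1))
    / (n1 + n2 - 2)
)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # positive: treatment group scored higher
```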

What is the difference between a treatment group and a control group?

In experimental methodology, the treatment group consists of participants who receive an experimental stimulus or manipulation (in this case an edtech intervention). The control (or comparison) group consists of participants who do not receive the experimental stimulus, which is used as a baseline or counterfactual. By comparing the educational outcomes of the treatment and control groups, IMPACT™ Analysis identifies the extent to which the product had an IMPACT™ on the given outcome. Both groups should be representative of the same target population, and researchers should do their best to confirm baseline equivalence. Ideally, the treatment and control groups are determined using random selection and random assignment.

How does the IMPACT™ Analysis divide the sample into treatment and control groups?

Treatment and control (or comparison) groups are determined by the school or district. If a school or district randomly assigns students to the treatment and control groups, then these pre-defined groups are used in the IMPACT™ Analysis. However, many schools and districts choose to run widespread edtech implementations rather than conduct a trial (or pilot) via experimental design. As another alternative, schools and districts may provide historical data to evaluate edtech usage and IMPACT™ without having previously employed a research design. In cases like these, treatment groups consist of students who used the edtech product, and comparison groups consist of students who a) did not use the product, b) are representative of the same target population, and c) don’t differ discernibly from students in the treatment group in a confounding way. If the aforementioned conditions can’t be met, then no comparison group is used and the default effect size calculation is replaced by a correlation coefficient in which engagement is correlated with growth in the education outcome (statistically controlling for covariates).
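To illustrate the correlational fallback, the sketch below computes a partial correlation: engagement and outcome growth are each residualized on a covariate, and the residuals are correlated. The variable names, synthetic data, and use of statsmodels are assumptions for illustration.

```python
# Correlational-fallback sketch: engagement vs. growth, controlling for a
# covariate via residualization (hypothetical variables, synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
prior = rng.normal(70, 10, n)        # covariate: prior achievement
engagement = rng.normal(50, 15, n)   # minutes of product use
growth = 0.1 * engagement + 0.3 * prior + rng.normal(0, 5, n)

X = sm.add_constant(prior)
# Residualize both engagement and growth on the covariate, then correlate.
resid_engagement = sm.OLS(engagement, X).fit().resid
resid_growth = sm.OLS(growth, X).fit().resid
partial_r = np.corrcoef(resid_engagement, resid_growth)[0, 1]
print(f"engagement-growth partial correlation: {partial_r:.2f}")
```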

How does Lea(R)n differentiate between quantitative and qualitative data? What is the difference between the two?

Quantitative data consist of information that can be measured and represented numerically (e.g., number of students, percentage correct, time on product). Qualitative data consist of information that is observed or described rather than counted (e.g., open-ended comments, interviews, or observations). Both quantitative and qualitative data are important for measuring the effectiveness of edtech, and both types are included in an IMPACT™ Analysis.

How is the product grade determined?

There are multiple product grades: one based on product IMPACT™ and two based on teacher insights. First, there is an overall product grade determined by the magnitude of the overall effect size. Products with higher effect sizes receive higher grades, with the respective grade ranges based on best practices and past research on effect sizes in the education context. Second, there is a trial-specific grade based on insights from educators at the respective school or district running the trial. Third, there is a community-based grade driven by insights from the LearnCommunity of educators. The two grades driven by teacher insights result from systematic product reviews by verified educators. Verified educators evaluate products using a grading protocol and rubric consisting of the eight core criteria most important for educators when they try, buy, and use educational technologies. The eight core criteria were developed through rigorous research led by Dr. Daniel Stanhope using sound research methodologies and psychometric standards.
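Purely as a hypothetical illustration of mapping an effect size to a letter grade, the sketch below uses invented cut points loosely inspired by common effect-size benchmarks in education research. Lea(R)n's actual grading algorithm and grade ranges are proprietary and are not reproduced here.

```python
# Hypothetical effect-size-to-grade mapping; the cut points are invented
# for illustration and are NOT Lea(R)n's proprietary ranges.
def effect_size_to_grade(d: float) -> str:
    bands = [(0.4, "A"), (0.2, "B"), (0.05, "C"), (-0.05, "D")]
    for cutoff, grade in bands:
        if d >= cutoff:
            return grade
    return "F"

print(effect_size_to_grade(0.25))  # "B" under these assumed bands
```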

What is a quantile?

Quantiles are groups of roughly equal size formed by partitioning a sample based on a given distribution. In IMPACT™ Analysis, pre-achievement quantiles are formed, such that the overall sample of students is partitioned into subgroups based on a pre-achievement metric (e.g., pre-intervention achievement levels, cumulative GPA, pre-intervention test scores). The analysis partitions students into multiple groups of roughly equal size, ranging from “lowest performing students” to “highest performing students.” These groups are then examined to determine how they differ in usage and IMPACT™.
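A minimal sketch of forming pre-achievement quartiles with pandas follows; the quartile count and column names are assumptions.

```python
# Forming pre-achievement quartiles with pandas.qcut (hypothetical columns).
import pandas as pd

df = pd.DataFrame({"pre_score": [51, 63, 58, 91, 72, 80, 67, 95]})
df["quantile"] = pd.qcut(
    df["pre_score"], q=4,
    labels=["lowest", "low-mid", "high-mid", "highest"],
)
print(df.sort_values("pre_score"))
```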

What are covariates and how do you account for them in your analysis?

Covariates are factors that have the potential to affect student achievement or educational outcomes, but are not necessarily a target variable under investigation in the edtech intervention. An important assumption in controlled trials is that covariate levels are equivalent for students in the treatment group and control group — groups are assumed to demonstrate baseline equivalence so that any post-intervention differences can be attributed to the edtech intervention. If students in the treatment group and control group differ as a function of covariates, it’s difficult to determine with confidence that the educational intervention was the sole factor responsible for change in the educational outcome. Common covariates in educational research include socioeconomic status, ethnicity, gender, grade, prior performance, and urbanicity. IMPACT™ Analysis implements a technique that statistically controls for the IMPACT™ of covariates in order to hold them constant when evaluating product efficacy.
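This FAQ does not name the specific statistical-control technique, so the sketch below uses an ANCOVA-style regression, one standard way to hold covariates constant while estimating a treatment effect. The data and variable names are synthetic.

```python
# Covariate-adjustment sketch via ANCOVA-style regression (a stand-in
# technique; synthetic data, hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),    # 1 = used the product
    "pre_score": rng.normal(70, 10, n),  # covariate: prior achievement
    "frl": rng.integers(0, 2, n),        # covariate: free/reduced lunch
})
df["post_score"] = (
    2.0 * df["treated"] + 0.8 * df["pre_score"]
    - 1.5 * df["frl"] + rng.normal(0, 5, n)
)

# The coefficient on `treated` is the covariate-adjusted treatment effect.
model = smf.ols("post_score ~ treated + pre_score + frl", data=df).fit()
print(model.params["treated"])
```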

In addition to the effects of the edtech on student achievement, there are other factors that IMPACT™ the effectiveness of any given intervention, such as quality of instruction and student differences. How do you account for these additional variables?

IMPACT™ Analysis can account for student-, class-, and school-level variables such as grade level, previous performance, quality of instruction, student demographics, school urbanicity, school size, the IMPACT™ of additional products, and many other factors. The analytics engine accounts for all covariates included in the data and statistically adjusts the effect size accordingly.

How does effect size within performance quantiles inform decisions on closing the achievement gap?

An examination of performance quantiles allows the IMPACT™ Analysis to determine whether an edtech product demonstrates the ability to close the achievement gap. First, students are partitioned into groups (i.e., quantiles) based on their levels of achievement on a previous achievement metric (e.g., GPA prior to the intervention, a previous test score, academic performance the prior year), and then each quantile is divided into its respective treatment and control groups. Finally, an effect size is computed for each group, which demonstrates how well an edtech product works for each performance group (i.e., performance quantile). Edtech products that show a large effect size for historically low-performing students are products that help close the achievement gap. For example, if the IMPACT™ Analysis finds that effect sizes for an edtech product are higher for students in the “low achievement” quantiles, then this product is demonstrating effectiveness at closing the achievement gap.
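Combining the quantile and effect-size ideas above, the sketch below computes an effect size separately within each pre-achievement quartile, on synthetic data deliberately constructed so the product helps low performers most. Everything here is illustrative.

```python
# Per-quantile effect sizes on synthetic data (hypothetical columns).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "pre_score": rng.normal(70, 12, n),
    "treated": rng.integers(0, 2, n),
})
# Synthetic outcome where the product helps low performers most.
boost = np.where(df["pre_score"] < 70, 6.0, 1.0)
df["post_score"] = df["pre_score"] + boost * df["treated"] + rng.normal(0, 5, n)

df["quantile"] = pd.qcut(df["pre_score"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
for q, grp in df.groupby("quantile", observed=True):
    t = grp.loc[grp["treated"] == 1, "post_score"]
    c = grp.loc[grp["treated"] == 0, "post_score"]
    # Simple pooled SD, assuming similar group sizes within each quartile.
    pooled = np.sqrt((t.var(ddof=1) + c.var(ddof=1)) / 2)
    print(q, round((t.mean() - c.mean()) / pooled, 2))
```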

Can IMPACT™ Analysis integrate data from our SIS or LMS?

IMPACT™ Analysis can integrate data from myriad sources, including edtech products, student information systems (SIS), and learning management systems (LMS).

How are the results shared with stakeholders?

Administrators have complete flexibility and control to share results across their organizations and with key stakeholders. Lea(R)n offers administrators the ability to share IMPACT™ Analysis reports, trial (or pilot) results, and engagement dashboards by logging into LearnPlatform. Administrators can set login permissions so that each type of user can access the results relevant to their role. In addition, all graphics and visual displays in the IMPACT™ Analysis can be exported (e.g., as PNG, JPEG, or SVG files).

Who should I contact if I need help?

Please don’t hesitate to contact a member of Lea(R)n’s implementation team if you have any questions about LearnPlatform or would like additional information regarding IMPACT™ Analysis.