Frequently Asked Questions

For answers and information on many common topics, simply select a question below.

Of course, you can always contact us if you can’t find what you’re looking for or you’d like additional information.


EdTech Management & LearnPlatform

What is edtech management?

Edtech management encompasses all stages of finding, buying, organizing, implementing, evaluating and measuring instructional technology, and all of the processes therein. A fairly new concept, edtech management aims to maximize the value of the rapidly growing edtech product market, so that educators and administrators in K-12 and higher education organizations have oversight of their resource allocations with regard to teaching hardware and software.

What is an edtech management platform?

An edtech management platform is a central system for finding, buying, organizing, implementing, evaluating and measuring instructional technology. Via one login from any location, this software as a service lets users streamline processes, maintain an approved product portfolio, manage contracts and purchasing, and solicit and review edtech feedback from their educators.

Is an edtech management platform the same as a learning management system (LMS) or student information system (SIS)?

An edtech management platform is different from an LMS and an SIS. An LMS is a software application for the administration, documentation, tracking, reporting and delivery of educational courses or training programs - it deals primarily with the hosting and delivery of instruction. An SIS is an information system with which education establishments manage student data - it is a repository for student demographics, grades, etc. In contrast, an edtech management platform is a central system for finding, buying, organizing, implementing, evaluating and measuring instructional technology. A management platform does not host and deliver instruction, nor does it manage comprehensive student data unrelated to edtech usage.

What is LearnPlatform?

LearnPlatform connects the largest network of educators, administrators and organizations improving student outcomes through data-driven instructional, budget and resource decisions. Designed with, by and for educators and administrators, the research-based, configurable LearnPlatform manages all aspects of edtech with a single system to organize, streamline and analyze all learning tools.

What is Lea(R)n?

A benefit corporation founded in 2014, Lea(R)n is a team of educators, administrators, researchers and technologists who designed and developed LearnPlatform (an expansion of LearnTrials). Working with learning organizations across the United States, the team continues to improve the platform, supports successful implementation and data analysis, and participates in research collaboratives, panels and forums to advance efficacy and data-driven decision making. The Lea(R)n team establishes and elevates standards of practice that drive personalized learning at scale, student achievement and equitable access to education technologies.

Does LearnPlatform integrate with other systems?

LearnPlatform integrates simply and easily with SIS and other school systems. The Lea(R)n technical support and implementation teams can quickly collaborate with the customer’s designated point of contact to coordinate the appropriate level of connectivity.

How do we know if we need an edtech management system?

If your educators are using digital teaching tools, instructional technologies and classroom software and hardware — paid licenses or OER — you could benefit from managing those in one easily accessible system. Centralizing an approved product library, aligned to your student improvement plans, achievement goals and compliance requirements, provides your teachers with an appropriate portfolio from which to choose edtech for their students. Streamlining also vastly simplifies vetting, procurement, RFIs and RFPs, contract storage and renewals management, providing a system of record for all products (with purchase, pricing and utilization history) to save time and money.

If we don’t know how much edtech we’re using, how would LearnPlatform help us?

Many learning organizations use LearnPlatform to “start somewhere.” Schools and districts can shift from using spreadsheets that often are stored nowhere and everywhere to a single, configurable on-demand system. Administrators can enter all known current products from spreadsheets, then invite teachers to contribute tools to the product library. Administrators can also set various user permissions, and leverage workflows to test, request, approve and decline products. Additionally, installing and activating the free Google Chrome extensions, LearnPlatform for Educators and LearnPlatform for Students, provides edtech usage information without any adjustment to teacher practice. G Suite for Education organizations can install and activate the district-wide extensions to capture information on what edtech sites are accessed, how much, and for how long. Edtech usage reports can be delivered to the requestor’s (or their designee’s) inbox each month - at no cost.

If we are not using much edtech, do we really need an edtech management platform?

The U.S. alone is spending $8 billion each year on edtech - an industry growing at 17% annually. Even if your teachers are just using a few tools, the growth of edtech programs, applications and products will continue increasing and adoption will continue shifting from tech-savvy early adopters to more mainstream users every year. As more experienced educators look toward a well-deserved retirement, and new teachers shift from being students to leading students, instructional technology use will increase to better support personalized learning at scale, close the achievement gap and meet various school improvement goals. Getting a head start on organizing, streamlining and analyzing all edtech when there are four, six or twelve products creates a sustainable, system-led foundation and process to carve a path forward (before it’s dozens, then hundreds of tools). Using LearnPlatform, you can set compliance and usage data requirements, manage purchases and contract renewals, and maintain an institutional (rather than tribal) record of your instructional tools with visibility to utilization and measurement across your entire organization.

Where can we find budget to pay for edtech management?

Edtech management is new enough, and intangible enough, that we are often asked how schools and districts should pay for something that doesn’t have a historical line item in their previous budgets. There are a number of avenues for funding edtech management - and Lea(R)n’s mission and commitment to equitable access provide still another route.

Typically, the savings in unused or underutilized licenses (identified in an edtech assessment or IMPACT Analysis report) free up dollars for schools to pursue programs that work for them (a small portion of which can actually fund the LearnPlatform). Organizations often see a 9-15x ROI in their initial engagement, freeing resources for the subscription and appropriate support for adjusted implementation and professional development, and any supplemental tools or materials.

With regard to ESSA requirements, Title I and Title IV dollars explicitly allow and encourage funds (up to 15%) to be used to manage and measure evidence to meet ESSA standards. As stated in the Department of Education guidance on the use of funds under an ESSA provision, a Local Education Agency (LEA) can use funds “to purchase or create a system that improves the procurement and evaluation process for identifying solutions and implementations that match the context of the SEA or LEA.”

PTAs, community groups and boards also have funds for inventorying and evaluating instructional tools, understanding that getting a handle on edtech budgets and making data-driven decisions to use what works improves practices. We never want cost to be a deterrent to discovering what you have, what works and what that means for student achievement. We work with schools, districts, states and networks of all sizes and can customize LearnPlatform features and contracts to meet those needs.

Which learning organizations can benefit from LearnPlatform?

All learning organizations using or planning to use free and/or paid edtech tools can benefit from a LearnPlatform subscription. LearnPlatform community participation is always free for verified educators, empowering them to share their voice on what edtech works in classrooms, collaborate and share with peers, and search the largest contextual edtech product library for tools to fit their needs. LearnPlatform is on-demand and configurable to meet the unique needs of a single school; organization-wide needs of an entire district or institution of higher education; and overarching specifications of a member network or whole state. Because LearnPlatform is a comprehensive and configurable system, organizations can take advantage of the components that meet their highest priorities first, and access other facets as their needs evolve.

In what ways is LearnPlatform configurable?

LearnPlatform’s technological, functional, user interface, support and reporting features are all configurable. From defining primary engagement goals, to working with the Lea(R)n customer success team, to creating a user experience aligned to your learning organization (so it’s yours and familiar, instead of looking like another program), to defining a customized product library with approval specifications and achievement goals, to including role types and processes, LearnPlatform can be tailored for you. Systems are only as good as the practical needs they can meet without creating more work for the people who use them. Our customer success and implementation, research and analysis, and product and technology teams connect with users continuously to gather feature and functionality requests and improve LearnPlatform.

Where does LearnPlatform get the data for rapid-cycle edtech evaluation (IMPACT Analysis)?

Schools and districts supply the data (usage, achievement, pricing) used to produce evidence-based IMPACT Analysis reports and dashboards on student engagement and product efficacy. Product companies should provide schools with usage or utilization data as part of their license or contract agreement. For complete FAQs on IMPACT Analysis, the underlying research and methodology, and types of analyses, see IMPACT FAQs.

What does LearnPlatform do with my data?

LearnPlatform’s research and analytics team can assist with merging data from various sources, as well as formatting and cleaning it. Once complete, the data file is loaded into the platform to generate IMPACT Analysis reports. Learning organizations own their data.

Is my data, including student information, safe and private?

Lea(R)n takes data integrity and student safety and privacy very seriously. LearnPlatform is aligned to all applicable compliance requirements, and the team regularly audits practices and systems to ensure safe use. View our privacy policy for more information.


IMPACT™ Analysis

What is an IMPACT™ Analysis?

Developed by Lea(R)n, an IMPACT™ Analysis is an evidence-based analytical methodology that integrates data from multiple sources (e.g., usage, achievement, demographics, cost) to generate reports and dashboards that provide insights into the implementation and impact of educational interventions. An IMPACT™ Analysis includes both qualitative and quantitative data, maximizing insight by analyzing data on product efficacy and teacher feedback. This state-of-the-art methodology employs sophisticated analytics that help schools and districts better understand how edtech is being used in their organizations and which products contribute to meaningful education outcomes (e.g., engagement, achievement, 21st century skills). This informs critical instructional, operational, and financial decisions, allowing administrators to identify and implement the most effective educational interventions for their classrooms.

Why did Lea(R)n develop IMPACT™ Analysis?

As schools and districts integrate and incorporate educational technologies, questions arise about which products are used, how much they are used, and whether they are working. IMPACT™ Analysis was designed and developed to address these questions. It integrates data from multiple sources (e.g., usage, achievement, pricing) to produce evidence-based reports and dashboards on student engagement and product efficacy, providing insights on both the implementation and impact of educational interventions.

How are reports from the Lea(R)n IMPACT™ Analysis different from others?

IMPACT™ Analysis reports are driven by a scientific methodology designed to deliver practical, on-demand insights that inform instructional, operational, and financial decisions. The research-backed methodology includes a proprietary grading rubric, scoring algorithms and sophisticated analytics developed with key stakeholders (e.g., educators and administrators), and vetted by psychometricians and applied scientists. A rigorous psychometric approach was used to develop the Lea(R)n grading rubric, which educators use to evaluate edtech products and differentiate effective and ineffective technologies. Further, rigorous scientific approaches were used to develop the analytics engine that drives the IMPACT™ Analysis, which leverages multiple research methods and flexibly adapts the specific research design (i.e., controlled, comparative, or correlative) based on the data inputted into the system.

How does IMPACT™ Analysis work? What is the methodology?

Once data are uploaded, the advanced IMPACT™ analytics engine generates insights into product engagement and impact. A backend clustering algorithm is used to group students into natural usage clusters and identify patterns across student groups with differing levels of usage. A quantile analysis then partitions students into subgroups based on levels of prior performance and examines the efficacy of the product to improve education outcomes for each performance group. A fidelity analysis partitions students based on the extent to which they achieved the recommended dosage, and then examines the efficacy of the product for each group. The built-in covariate analysis allows IMPACT™ Analysis to account for differences such as student demographics (e.g., gender, ethnicity, socioeconomic status), grade level, and prior achievement when identifying the efficacy of an edtech product. A cost analysis provides information on the total cost of ownership, cost-effectiveness of an edtech product, and amount of money spent on different usage clusters and fidelity groups. The on-demand analytics dashboards display edtech product insights in a transparent and easy-to-use way. For an in-depth description of the research methodologies used in the IMPACT™ Analysis, click here.
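
For readers who want a concrete picture of the clustering step, here is a minimal sketch in Python. The column names, the sample values, and the use of scikit-learn's k-means are our illustrative assumptions; Lea(R)n's actual backend algorithm and data model are proprietary.

```python
# Illustrative sketch only -- column names and the k-means choice are
# assumptions; the actual backend clustering algorithm is proprietary.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-student usage data exported from an edtech product.
usage = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "logins":     [2, 30, 28, 3, 55, 60],
    "minutes":    [15, 240, 200, 30, 500, 620],
})

# Standardize features so logins and minutes contribute comparably.
X = StandardScaler().fit_transform(usage[["logins", "minutes"]])

# Group students into natural usage clusters (e.g., low/moderate/high use).
usage["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(usage.groupby("cluster")[["logins", "minutes"]].mean())
```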

What types of data are needed to run a Lea(R)n IMPACT™ Analysis?

IMPACT™ Analysis generates multiple types of reports and dashboards depending on user needs and data availability. For example, if a user wants to know how certain products are being used, usage data (e.g., logins, modules completed, syllabus progress) are required at the targeted level of analysis (e.g., student or school). If the user wants to know about ROI, usage and pricing data (e.g., price per student or site) are needed, with the addition of recommended dosage amounts. If a user wants to understand product impact, usage and achievement data (e.g., test scores, course grades, nonacademic outcomes) are needed. In all of the aforementioned analyses, we highly recommend including additional student- and school-level covariates (e.g., student demographics, prior achievement, school urbanicity) and publicly available data to add to the depth and breadth of insight offered by the analysis. The addition of covariates adds accuracy to the results by enabling the analysis to statistically control for potentially confounding variables and by helping achieve baseline equivalence among students prior to the analysis. For more information on covariate data, see below.
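
As a concrete (and purely hypothetical) illustration, a student-level input file combining these data types might look like the following; none of these column names are a required schema.

```python
import pandas as pd

# Hypothetical student-level input -- column names are illustrative only.
records = pd.DataFrame({
    "student_id":        [101, 102, 103],
    "minutes_used":      [320, 45, 0],          # usage
    "modules_done":      [12, 3, 0],            # usage
    "pre_score":         [61, 74, 58],          # prior achievement (covariate)
    "post_score":        [72, 76, 57],          # achievement outcome
    "grade_level":       [5, 5, 5],             # covariate
    "frl_eligible":      [True, False, True],   # socioeconomic covariate
    "price_per_student": [24.0, 24.0, 24.0],    # pricing
})
print(records.head())
```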

How can I access usage and engagement data?

Product companies collect data on the extent to which their products are used, and they are responsible for providing that information to administrators. However, the quality, accessibility, and comprehensiveness of data provided by edtech companies vary from product to product. For information on how to access usage data, visit the product company’s website or contact a product company representative. Links to product company websites can be found in Lea(R)n’s product library.

How can I find recommended dosage information?

Product companies provide recommended dosages for their products. Ideally, this dosage information should be backed by research. If recommended dosage information is available, it may be accessed on the product company’s website or by contacting a product company representative. Schools and districts are also encouraged to establish their own dosage recommendations when they have a rationale for requiring specific levels of usage.

How do you determine which achievement outcome to use for a given product?

The achievement outcome in the IMPACT™ Analysis should be a precise measure of the educational construct that the product aims to impact. Further, the achievement outcome should attempt to match the specificity of the educational construct — how narrow or broad the predictor is should dictate how narrow or broad the outcome is. For example, if a product purports to improve a student’s proficiency in algebra, then the achievement outcome should be a metric that assesses a student’s proficiency in algebra, instead of a metric that examines a student’s proficiency in statistics or their overall proficiency in mathematics. Although the latter will likely show some degree of correlation with the algebra metric, it’s ideal to have a measure that matches the specificity of the product’s desired effect. The user should determine the exact outcome that the product is supposed to impact, and then determine a measure that best represents that precise outcome.

Are there any requirements for the type of metric that can be used for the achievement outcome?

IMPACT™ Analysis is agnostic with regard to how the achievement metric is defined for each analysis. The achievement metric should be the educational outcome that the edtech product purports to impact, which allows the analysis to measure whether the edtech produces the intended effect. The only requirement to run the analysis is that the achievement criterion is a quantitative metric (e.g., test scores, percentiles, numeric ratings). The analysis can handle achievement metrics that are continuous (e.g., test scores), ordinal (e.g., proficiency level), or binary (e.g., pass/fail, retained/not retained, improved/not improved). Some examples of the types of achievement metrics that can be used are end-of-grade test scores, content area test scores, course grades, retention rates, graduation rates, self-efficacy, or 21st century skills.
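
For illustration, here is how ordinal and binary outcomes might be encoded numerically before analysis; the category labels and mappings below are hypothetical examples, not platform requirements.

```python
import pandas as pd

# Any quantitative metric works; ordinal and binary outcomes just need a
# numeric encoding first (the mappings below are illustrative only).
df = pd.DataFrame({
    "test_score":  [412, 455, 498],               # continuous: use as-is
    "proficiency": ["Below", "Meets", "Exceeds"], # ordinal
    "passed":      ["fail", "pass", "pass"],      # binary
})
df["proficiency_num"] = df["proficiency"].map({"Below": 1, "Meets": 2, "Exceeds": 3})
df["passed_num"] = (df["passed"] == "pass").astype(int)
print(df)
```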

What is a trial (or pilot) and how is it integrated into the IMPACT™ Analysis?

A trial (or pilot) uses a research-backed survey to help users gather feedback and insight from educators regarding edtech impact. It allows stakeholders to generate qualitative data (educator insights and comments) and quantitative data (product grades on the eight core criteria) from educators across an entire school, district, or state. In addition to product feedback sourced from verified educators in LearnCommunity, trial results are integrated as one section of the IMPACT™ Analysis report, allowing users to better understand how their educators, and those in the LearnCommunity, evaluate the product on the core criteria deemed most important when trying, buying, or using an edtech product.

What is the LearnCommunity?

The LearnCommunity is a free resource that allows educators to share trusted recommendations and best practices with an online community of more than 100,000 verified educators. Using Lea(R)n’s research-backed rubric, data integrations, and powerful filters, LearnCommunity allows educators to access tailored insights that improve instruction and outcomes.

How was the rubric created for grading edtech products?

Using rigorous scientific and psychometric methods, Dr. Daniel S. Stanhope led the research to identify the eight core criteria that are most important for educators when they try, buy, and use educational technologies. Based on these criteria, Lea(R)n developed a rubric and protocol that educators can use to grade products. When educators grade products on LearnPlatform, they do so on a sliding scale from F to A+. Individual grades are assigned for each of the core criteria and a holistic (or overall) grade is derived from Lea(R)n’s proprietary grading algorithm.

What are usage clusters and how are they formed?

Usage clusters are subsets of students grouped together based on how much they use an edtech product (e.g., low use, moderate use, high use). IMPACT™ Analysis statistically generates these clusters based on natural usage patterns using an advanced clustering algorithm. The algorithm identifies the optimal number of clusters based on similarities in usage trends. IMPACT™ Analysis then compares student achievement and product efficacy across these usage clusters.
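
The exact criterion the platform uses to choose the number of clusters is not public; one common heuristic, shown below purely as a sketch, is to scan candidate values of k and keep the one with the best silhouette score.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical standardized usage features (rows = students) with three
# well-separated groups baked in for demonstration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(2, 0.3, (30, 2)),
               rng.normal(4, 0.3, (30, 2))])

# Pick k by silhouette score -- one common heuristic; the platform's
# actual selection criterion is not public.
scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10,
                                        random_state=0).fit_predict(X))
          for k in range(2, 7)}
best_k = max(scores, key=scores.get)
print(scores, "-> best k:", best_k)
```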

What is an effect size?

An effect size is a measure derived from a statistical analysis that aims to quantify the difference between two groups, and is often used to quantify the effect of a given intervention. An effect size can be used to infer the extent to which an intervention was effective for one group (treatment group) versus another group (control/comparison group). The larger the effect size, the more impact an edtech intervention had on the treatment group (e.g., a sample of students who were assigned to use an edtech product). A negative effect size implies that the treatment group performed worse on the given achievement outcome than did the comparison group (e.g., students who were not assigned to use the edtech product). In an IMPACT™ Analysis, the effect size can be interpreted as a measure of the extent to which an edtech product (or intervention) had an impact (positive or negative) on the specified achievement outcome. By including student- and school-level covariates (e.g., student demographics, poverty, school urbanicity), IMPACT™ Analysis makes statistical adjustments to the effect size in order to control for potential confounds and extraneous factors.
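
A widely used effect-size measure for two groups is Cohen's d: the difference between group means divided by the pooled standard deviation. A minimal, unadjusted computation (before any covariate adjustments) might look like this; the scores are made up for illustration.

```python
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Mean difference divided by the pooled standard deviation."""
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * treatment.var(ddof=1) +
                  (nc - 1) * control.var(ddof=1)) / (nt + nc - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# Hypothetical post-test scores for illustration.
treated = np.array([78, 82, 88, 91, 73, 85], dtype=float)
comparison = np.array([71, 75, 80, 78, 69, 74], dtype=float)
print(round(cohens_d(treated, comparison), 2))  # positive -> treated did better
```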

What is the difference between a treatment group and a control group?

In experimental methodology, the treatment group consists of participants who receive an experimental stimulus or manipulation (in this case an edtech intervention). The control (or comparison) group consists of participants who do not receive the experimental stimulus and serves as a baseline or counterfactual. By comparing the educational outcomes of the treatment and control groups, IMPACT™ Analysis identifies the extent to which the product had an impact on the given outcome. Both groups should be representative of the same target population, and researchers should do their best to confirm baseline equivalence. Ideally, the treatment and control groups are determined using random selection and random assignment.

How does the IMPACT™ Analysis divide the sample into treatment and control groups?

Treatment and control (or comparison) groups are determined by the school or district. If a school or district randomly assigns students to the treatment and control groups, then these pre-defined groups are used in the IMPACT™ Analysis. However, many schools and districts choose to run widespread edtech implementations rather than conduct a trial (or pilot) via experimental design. As another alternative, schools and districts may provide historical data to evaluate edtech usage and impact without having previously employed a research design. In cases like these, treatment groups consist of students who used the edtech product, and comparison groups consist of students who a) did not use the product, b) are representative of the same target population, and c) don’t differ discernibly from students in the treatment group in a confounding way. If the aforementioned conditions can’t be met, then no comparison group is used and the default effect size calculation is replaced by a correlation coefficient in which engagement is correlated with growth in the education outcome (statistically controlling for covariates).
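
One simple way to realize that fallback, shown here purely as a sketch, is a partial correlation: residualize both engagement and growth on the covariates, then correlate the residuals. The variable names and simulated data are illustrative, not the platform's exact technique.

```python
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, covariates: np.ndarray) -> float:
    """Correlate x and y after removing what the covariates explain (OLS residuals)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical data: engagement minutes, score growth, prior score as covariate.
rng = np.random.default_rng(1)
prior = rng.normal(70, 10, 100)
minutes = rng.normal(120, 30, 100) + 0.5 * prior
growth = 0.03 * minutes + 0.1 * prior + rng.normal(0, 2, 100)
print(round(partial_corr(minutes, growth, prior.reshape(-1, 1)), 2))
```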

How does Lea(R)n differentiate between quantitative and qualitative data? What is the difference between the two?

Quantitative data are measured and represented in numbers (e.g., number of students, percentage correct, time on product). Qualitative data are observed rather than counted (e.g., open-ended comments, interviews, or observations). Both types are important for measuring the effectiveness of edtech, and both are included in an IMPACT™ Analysis.

How is the product grade determined?

There are multiple product grades: one based on product impact and two based on teacher insights. First, there is an overall product grade that is determined by the magnitude of the overall effect size. Products with higher effect sizes receive higher grades, with the respective range of grades based on best practices and past research on effect sizes in the education context. Second, there is a trial-specific grade that is based on insights from educators at the respective school or district running the trial. Third, there is a community-based grade that is driven by insights from the LearnCommunity of educators. The two grades driven by teacher insights result from systematic product reviews by verified educators, who evaluate products using a grading protocol and rubric consisting of the eight core criteria most important for educators when they try, buy, and use educational technologies. The eight core criteria were developed through rigorous research led by Dr. Daniel Stanhope using sound research methodologies and psychometric standards.
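
To illustrate the idea of banding effect sizes into grades (Lea(R)n's actual thresholds and grading algorithm are proprietary and not shown here), a hypothetical mapping based on commonly cited effect-size benchmarks might look like this:

```python
# Hypothetical mapping only -- Lea(R)n's grading thresholds and algorithm are
# proprietary. The bands below follow commonly cited effect-size benchmarks
# (roughly 0.2 = small, 0.5 = medium, 0.8 = large).
def effect_size_to_grade(d: float) -> str:
    if d >= 0.8:
        return "A"
    if d >= 0.5:
        return "B"
    if d >= 0.2:
        return "C"
    if d >= 0.0:
        return "D"
    return "F"  # negative effect: the comparison group did better

for d in (0.95, 0.35, -0.10):
    print(d, "->", effect_size_to_grade(d))
```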

What is a quantile?

Quantiles are groups formed by partitioning a sample into roughly equal-sized subgroups based on a given distribution. In IMPACT™ Analysis, pre-achievement quantiles are formed, such that the overall sample of students is partitioned into subgroups based on a pre-achievement metric (e.g., pre-intervention achievement levels, cumulative GPA, pre-intervention test scores). The analysis partitions students into multiple groups of roughly equal size, ranging from “lowest performing students” to “highest performing students.” These groups are examined to determine how they differ in usage and impact.
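
In code, a quantile partition of this kind is essentially a one-liner; the sketch below uses pandas' qcut with hypothetical scores and quartiles rather than the platform's actual grouping.

```python
import pandas as pd

# Hypothetical pre-intervention scores.
students = pd.DataFrame({"student_id": range(1, 11),
                         "pre_score": [48, 52, 55, 60, 63, 67, 71, 76, 82, 90]})

# Partition into four roughly equal-sized groups, lowest to highest performing.
students["quantile"] = pd.qcut(students["pre_score"], q=4,
                               labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"])
print(students.groupby("quantile", observed=True).size())
```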

What are covariates and how do you account for them in your analysis?

Covariates are factors that have the potential to affect student achievement or educational outcomes, but are not necessarily a target variable under investigation in the edtech intervention. An important assumption in control trials is that covariate levels are identical for students in the treatment group and control group — groups are assumed to demonstrate baseline equivalence so any post-intervention differences can be attributable to the edtech intervention. If students in the treatment group and control group differ as a function of covariates, it’s difficult to determine with confidence that the educational intervention was the sole factor responsible for change in the educational outcome. Common covariates in educational research include socioeconomic status, ethnicity, gender, grade, prior performance, and urbanicity. IMPACT™ Analysis implements a technique that statistically controls for the impact of covariates in order to hold them constant when evaluating product efficacy.
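
A standard way to statistically control for covariates, offered here only as a generic illustration rather than the platform's exact technique, is an ANCOVA-style regression that estimates the treatment effect while holding the covariates constant. All column names and simulated values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged file: outcome, treatment flag, and covariates.
rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "treated":   rng.integers(0, 2, n),
    "pre_score": rng.normal(70, 10, n),
    "frl":       rng.integers(0, 2, n),  # socioeconomic covariate
})
df["post_score"] = (2.0 * df["treated"] + 0.8 * df["pre_score"]
                    - 1.5 * df["frl"] + rng.normal(0, 3, n))

# With covariates held constant, the 'treated' coefficient estimates the
# adjusted treatment effect.
model = smf.ols("post_score ~ treated + pre_score + frl", data=df).fit()
print(round(model.params["treated"], 2), round(model.pvalues["treated"], 4))
```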

In addition to the effects of the edtech on student achievement, there are other factors that affect the effectiveness of any given intervention, such as quality of instruction and student differences. How do you account for these additional variables?

IMPACT™ Analysis has the ability to account for student-, class-, and school-level variables such as grade level, previous performance, quality of instruction, student demographics, school urbanicity, school size, the impact of additional products, and many other factors. The analytics engine accounts for all covariates included in the data, and will statistically adjust the effect size accordingly.

How does effect size within performance quantiles inform decisions on closing the achievement gap?

An examination of performance quantiles allows the IMPACT™ Analysis to determine whether an edtech product demonstrates the ability to close the achievement gap. First, students are partitioned into groups (i.e., quantiles) based on their levels of achievement on a previous achievement metric (e.g., GPA prior to the intervention, previous test score, academic performance the prior year), and then each group is divided into its respective treatment or control group. Finally, an effect size is computed for each group, which demonstrates how well an edtech product works for each performance group (i.e., performance quantile). Edtech products that show a large effect size for historically low-performing students are products that help close the achievement gap. For example, if the IMPACT™ Analysis finds that effect sizes for an edtech product are higher for students in the “low achievement” quantiles, then that product is demonstrating effectiveness at closing the achievement gap.
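
Combining the quantile partition with the effect-size computation gives a per-quantile effect size. The sketch below simulates a product that helps lower-performing students most; all data and column names are made up for illustration.

```python
import numpy as np
import pandas as pd

def cohens_d(t: pd.Series, c: pd.Series) -> float:
    # Pooled-standard-deviation effect size, as in the earlier sketch.
    pooled = np.sqrt(((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                     / (len(t) + len(c) - 2))
    return (t.mean() - c.mean()) / pooled

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({"pre": rng.normal(70, 10, n),
                   "treated": rng.integers(0, 2, n)})
# Simulate a product whose benefit is largest for lower-performing students.
df["post"] = (df["pre"] + df["treated"] * np.clip(85 - df["pre"], 0, None) * 0.2
              + rng.normal(0, 2, n))

# Effect size within each pre-achievement band.
df["quantile"] = pd.qcut(df["pre"], q=3, labels=["low", "mid", "high"])
for q, g in df.groupby("quantile", observed=True):
    d = cohens_d(g.loc[g.treated == 1, "post"], g.loc[g.treated == 0, "post"])
    print(q, round(d, 2))  # a larger d in "low" suggests gap-closing potential
```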

Can IMPACT™ Analysis integrate data from our SIS or LMS?

IMPACT™ Analysis has the ability to integrate data from myriad sources, including edtech products, SIS platforms, and LMS platforms.

How are the results shared with stakeholders?

Administrators have complete flexibility and control to share results across their organizations and with key stakeholders. Lea(R)n offers administrators the ability to share IMPACT™ Analysis reports, trial (or pilot) results, and engagement dashboards by logging into LearnPlatform. Administrators can set login permissions to allow each type of user to access the results relevant to their role. In addition, all graphics and visual displays in the IMPACT™ Analysis can be exported (e.g., PNG, JPEG, or SVG).

Who should I contact if I need help?

Please don’t hesitate to contact a member of Lea(R)n’s implementation team if you have any questions about LearnPlatform or would like any additional information regarding IMPACT™ Analysis.