The value of Pre-K education has been widely acknowledged through decades of research. However, much of that research comes from older and often smaller programs than those currently being implemented across the country. To determine how programs are affecting children now, more evidence is required. Lottery-based assignment systems, which randomly select children among applicants to oversubscribed programs, have the potential to generate important new evidence.
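For readers unfamiliar with the mechanism, here is a minimal sketch of the basic idea in Python: when applicants outnumber seats, a random draw determines who receives an offer. The function and applicant names are hypothetical illustrations, not details of any program discussed here.

```python
import random

def run_lottery(applicants, n_seats, seed=None):
    """Randomly offer seats when a program is oversubscribed.

    applicants: list of applicant IDs; n_seats: number of available seats.
    Returns (offered, waitlisted). A fixed seed makes the draw reproducible.
    """
    rng = random.Random(seed)
    pool = list(applicants)   # copy so the caller's list is untouched
    rng.shuffle(pool)         # the random "lottery draw"
    return pool[:n_seats], pool[n_seats:]

# Example: 8 applicants for 5 seats -> 5 random offers, 3 waitlisted
offered, waitlisted = run_lottery([f"child_{i}" for i in range(8)],
                                  n_seats=5, seed=42)
```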
A recent working paper, released through the National Bureau of Economic Research and the Annenberg EdWorking Papers series, outlines the parameters of research conducted on five early learning programs that use lottery-based assignment. The collaborative network of research teams was convened by the University of Michigan's Education Policy Initiative (EPI) in the autumn of 2021 and represents efforts to study programs in Boston, Washington, DC, New Orleans, New York City, and a national network of Montessori schools.
The potential for discovery is great. “Thousands of families now apply to public preschool programs that use lottery-based assignment systems, presenting opportunities to address new research needs with no or limited disruption to a locality’s standard operations,” the authors state.
While each of the five programs is unique, they share many essential characteristics. The teams identify six common challenges that future research of this kind must consider:
1) the available baseline student characteristics, used to check that randomization was successful and to understand effects on important student groups, may not be very rich;
2) data on the programs’ alternative(s), needed to identify what the program was being compared to, may be limited;
3) outcome data may be limited and inconsistent because young children and their families can be hard to follow after program participation;
4) internal validity, that is, the potential for bias in estimating the program’s effects, may be weakened by program attrition;
5) external validity, the generalizability of the results to a broader context, may be constrained by the unique characteristics of families who compete for oversubscribed programs; and
6) site-level questions can be difficult to answer when children, and not program features, are randomly assigned.
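To make the first challenge concrete, here is a minimal sketch of a randomization balance check, comparing baseline characteristics of lottery winners and losers; large, systematic differences would cast doubt on the randomization. The DataFrame structure and column names are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats

def balance_check(df, offer_col, covariates):
    """Compare baseline covariate means for lottery winners vs. losers.

    df: one row per applicant; offer_col: 1 if offered a seat, else 0.
    Large, statistically significant gaps suggest a randomization problem.
    """
    rows = []
    for cov in covariates:
        winners = df.loc[df[offer_col] == 1, cov].dropna()
        losers = df.loc[df[offer_col] == 0, cov].dropna()
        _, p = stats.ttest_ind(winners, losers, equal_var=False)
        rows.append({"covariate": cov,
                     "winner_mean": winners.mean(),
                     "loser_mean": losers.mean(),
                     "p_value": p})
    return pd.DataFrame(rows)

# Hypothetical usage with assumed column names:
# balance_check(applicants, "offered", ["age_at_entry", "family_income"])
```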
Their collective goal is “to improve the application of the lottery-based design in current and future education studies,” and “serve as a case study more broadly of how context can affect study design in applied education work.”
The paper’s lead author is Christina Weiland, EPI faculty co-director; co-authors include Brian Jacob, EPI founder and faculty affiliate; Anna Shapiro, a former EPI predoctoral researcher now at RAND; and former EPI co-director Sue Dynarski, now at Harvard. Others on the teams represent New York University, MDRC, the Urban Institute, the American Institutes for Research, the Brookings Institution, and the UMBC School of Public Policy.
To address these research challenges, the authors suggest a range of solutions, including building the collection of richer data into the preschool application process to address covariate and counterfactual questions, as well as improving the administrative data that track outcomes. Attrition problems can be especially acute in preschool studies because families tend to be more mobile during the early childhood years than when their children are older. The challenge is to create longitudinal datasets that span multiple school districts and states, which could be achieved by assigning each student a unique identifier early on, as the sketch below illustrates. Additional data collection could also help address the external validity and counterfactual concerns.
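As one hypothetical illustration of the unique-identifier idea, the sketch below derives a stable pseudonymous ID from fields already collected at application time, using a keyed hash so that any district sharing the key computes the same ID for the same child. This is a sketch under strong assumptions (clean, consistent input fields and a securely shared key); a real system would require rigorous privacy and data-governance review.

```python
import hashlib
import hmac

def student_id(first_name, last_name, birth_date, secret_key):
    """Derive a stable pseudonymous student ID from application fields.

    The same child yields the same ID in every district that shares
    secret_key, so longitudinal records can be linked without exchanging
    names directly. birth_date is a 'YYYY-MM-DD' string.
    """
    normalized = "|".join([first_name.strip().lower(),
                           last_name.strip().lower(),
                           birth_date])
    digest = hmac.new(secret_key, normalized.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]   # truncated for readability

# The same inputs always map to the same identifier:
key = b"shared-interagency-key"      # hypothetical shared secret
assert student_id("Ada", "Lovelace", "2019-12-10", key) == \
       student_id(" ada ", "LOVELACE", "2019-12-10", key)
```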
Moving forward, the paper makes eight recommendations for designing future preschool lottery studies:
1) Get started. When designing a lottery-based study, start with the program’s theory of change, a locality’s research questions, and gaps in the broader research evidence base.
2) Understand what data are available. Identify which covariates, outcomes, and counterfactual data are available in administrative records.
3) Address attrition. To limit attrition problems, consider opportunities to create robust longitudinal datasets that span multiple school districts and states, and set up systems for tracking students across localities using a common identifier from the time of preschool application.
4) Use prior data to inform research questions. Use past years’ data to anticipate the external validity of a lottery-based study and to determine a priori which research questions it can answer well and which require a different design.
5) Consider tradeoffs. There are tradeoffs to consider in choosing an analytic strategy for estimating impacts from a lottery-based design. In particular, one must choose among samples drawn from children’s first-choice lotteries, samples drawn from children’s first lotteries, and assignment score approaches (a minimal sketch of the basic impact estimands appears after this list).
6) Proceed carefully for site-level questions. For site-level questions, pinpoint where a setting-level intervention is implemented, the selection process into implementation, and site characteristics that may be correlated with a given intervention. Pivot to a different research design if child-level randomization cannot satisfactorily answer a site-level question.
7) Leverage the network. Find opportunities to connect with colleagues engaged in similar work. Collaboration among the five teams began organically, with researchers who were considering a lottery-based design connecting with those already using one. As teams leverage lotteries in other contexts and to address other questions, similar collaborative networks have a role to play in improving applied studies and, in turn, shaping future evidence-based policy and practice.
8) Funders have a role to play. Finally, the authors hope that funders will begin to recognize the potential contributions of the lottery-based design for building the next generation of evidence on early education programs. Funding is essential for the early-stage work of identifying which questions lottery-based early education studies can answer in a given context and how relevant those questions are to practice partners.
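To make the tradeoffs in recommendation 5 concrete, the sketch below computes two standard lottery-study quantities: the intent-to-treat effect (the effect of winning an offer) and a treatment-on-the-treated estimate that uses the offer as an instrument (a simple Wald estimator). The variable names and simulated data are assumptions for illustration; the paper’s own analytic choices go well beyond this, for example adding covariates and lottery fixed effects.

```python
import numpy as np

def lottery_effects(offer, enrolled, outcome):
    """Estimate ITT and TOT impacts from lottery data (numpy arrays).

    offer:    1 if the child won a seat in the lottery, else 0
    enrolled: 1 if the child actually enrolled (not all winners do)
    outcome:  a later outcome, e.g., a test score
    """
    itt = outcome[offer == 1].mean() - outcome[offer == 0].mean()
    # First stage: how much the offer shifts actual enrollment
    first_stage = enrolled[offer == 1].mean() - enrolled[offer == 0].mean()
    tot = itt / first_stage          # Wald / instrumental-variables estimate
    return itt, tot

# Hypothetical usage with simulated data:
rng = np.random.default_rng(0)
offer = rng.integers(0, 2, size=1000)
enrolled = np.where(offer == 1, rng.random(1000) < 0.8,
                    rng.random(1000) < 0.1).astype(int)
outcome = 0.3 * enrolled + rng.normal(size=1000)
print(lottery_effects(offer, enrolled, outcome))
```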
The site projects have been funded by the Institute of Education Sciences (Boston, DC, New Orleans, Montessori, and New York teams); Arnold Ventures (Boston); and the Heising-Simons Foundation (DC). The autumn 2021 conference was funded by the Spencer Foundation.
You can read the full paper, Lottery-Based Evaluations of Early Education Programs: Opportunities and Challenges for Building the Next Generation of Evidence (NBER Working Paper No. w30970, March 2023), via NBER or Annenberg EdWorking Papers.