These are interesting times for education research.
On one hand, the field has matured to the point where highly credible education research seems to come out on a weekly—sometimes daily—basis. Federal investment and infrastructure have contributed to a notable increase in the supply of rigorous studies that are widely available to educators and policymakers.
But now, some have suggested that the billions of dollars invested in education research by federal agencies, including the Institute of Education Sciences (IES) and the National Science Foundation (NSF), have failed to make an impact.[i]
This failure, so the argument goes, tracks back to a research production system that is run by and for academic researchers and principally serves their needs and interests. Studies take too long and cost too much money. State and local decisionmakers find the products of this research irrelevant to their needs. And there are few examples of where research has helped educators chart a better course. In short, the whole enterprise lacks a compelling theory of action that adequately accounts for the loosely coupled nature of the American education system.
Is this bleak diagnosis on target? Is it really true that education policymakers and practitioners disregard research? Is the federal education research and dissemination infrastructure disconnected from what is happening in classrooms and on campuses?
Actually, educators do use education research
Usage data do not square with the grim portrayal of education research that some have put forward—at least with respect to the kind of research that seeks to develop effective interventions with funding from federal agencies. While no one seriously argues that effective use of high-quality research is standard practice in the education sector, a good case can be made that educators’ interest in and familiarity with research has never been more robust. Also growing are opportunities for district and state education leaders to partner closely with researchers to establish research priorities, plan and conduct studies, and interpret results. This growth has been mostly driven by large federal investments in research-practice partnerships in education over the past five years.
If there’s a problem with the production of education research, it’s that the field lacks the funding and the highly capable researchers needed to meet current demand from practitioners and policymakers for more and better evidence of what works in education. Although the market for evidence has grown, improvements in the supply and communication of research are not sufficient to incent the kind of demand that would have every school district regularly checking the What Works Clearinghouse™ (WWC) for the latest intervention reports. The Every Student Succeeds Act (ESSA), the reauthorization of the Elementary and Secondary Education Act, may ratchet up demand through its requirements that states, districts, and schools assess the evidence when selecting interventions. At this point, however, it is unclear how that will unfold, particularly with the scaled-back federal role in accountability.
In this context, where powerful policy levers to increase demand are largely absent, IES’s investment approach focuses on providing a steady supply of new, generalizable scientific knowledge about the effectiveness of approaches in education. IES also prioritizes the promulgation of rigorous standards (which can guide state and local decisions about evidence under ESSA), wide dissemination of research findings, and targeted capacity-building activities with local and state education agency staff, including through research-practice partnerships.
IES’s interest in supporting the production of generalizable knowledge needs to be underscored. With respect to federal investment, even local science—that is, locally generated questions and studies—must contribute to a broader knowledge base that can inform the decisions of states and districts that were not part of the study. The findings from local science should not stay local; rather, they need to be captured in a system that can make them available to anyone else who wants to learn from them. An important part of the federal role is to provide the infrastructure to capture findings from impact studies, including those from locally driven efforts, and make those findings widely available.
Who uses education research, anyway?
A challenge for the education research field has been to understand whether and how research is being used in education decisionmaking. Studies of research use have been small in scale and hampered by a lack of good measures. But recent efforts to systematically map the dynamics of research use at local and state education agencies have provided new and perhaps surprising insights.
In a spring 2015 survey, the National Center for Research in Policy and Practice (NCRPP)—one of two IES-supported research and development centers focused on understanding how education research is used by practitioners and policymakers—found that self-reported research use was high, and skepticism of research low, among 271 central office staff from the nation’s 32 largest school districts.[ii]
Large shares of respondents to the NCRPP survey reported that they use research “frequently” or “all of the time” to make decisions about purchasing interventions (85 percent), adopting curricula (78 percent), designing professional development (78 percent), redesigning programs (77 percent), or scaling up programs (70 percent). Respondents overwhelmingly “agreed” or “strongly agreed” that education research helped identify solutions to problems facing schools (99 percent) and that education researchers provided a valuable service to practitioners (95 percent). Sixty-one percent could name a piece of research that was useful to their work in the previous year.
To be sure, the survey indicated room for improvement. Half of the respondents saw a disconnect between education research and education practice and policy. And it is certainly true that academic incentives align poorly with those of the education system. This misalignment makes some types of research (such as that conducted in research-practice partnerships) unattractive for some types of researchers (such as pre-tenured faculty). But in general, the requirements of scholarship and practice exist in creative tension, each with something important to contribute—an issue I discuss later in this piece.
We actually know and use things from education research
As the number of rigorously evaluated interventions grows, more school districts and states are implementing programs with evidence of impact. For example, the Building Blocks curriculum, which has shown positive effects on young children’s mathematics and literacy skills, was developed and tested with support from both NSF and IES. This curriculum was recently adopted citywide by Boston, New York City, and several California districts. Another example is the IES-funded evaluation of North Carolina’s Early College High Schools, which found positive effects on students’ completion of college preparatory courses and school attendance, among other outcomes. Based on these findings, North Carolina is expanding the program statewide with a federal Investing in Innovation (I3) grant. IES also supported the development of the Early Literacy Skills Builder program, which has been shown to improve reading outcomes for students with significant intellectual disabilities and currently is used in nearly 1,300 school districts.
Studies funded by IES and others have identified effective practices that can be implemented in classrooms without purchasing a specific program or curriculum. These practices are summarized in the WWC practice guides, of which there are now 18. The WWC has begun to update older guides because research over the past decade has provided more information about effective practices.
Rigorous studies also have provided evidence about popular programs that do not produce the expected results. IES was one of the first funders of rigorous impact evaluations in postsecondary education, supporting large, multi-site trials of learning communities and summer bridge programs targeting students in need of developmental education. These evaluations showed modest improvements in developmental education outcomes but no significant impacts on earning college-level credits or persistence. A new generation of developmental education interventions has emerged—and is now undergoing testing—in response to these findings.
Local science serves an important purpose
In the past five years, IES has placed large bets that research-practice partnerships (RPPs) will produce useful, high-quality research and build the commitment and skill of local and state education agency staff to consult research on a routine basis. This investment is not meant to imply that researchers working outside the context of RPPs are not good partners; indeed, research on classroom- or school-based interventions often would not be possible without the collaboration and professional insight of school personnel. But in contrast to IES’s traditional research grants, RPPs have an explicit, primary goal of increasing the capacity of education partners to generate and apply research effectively.
IES’s first major investment in RPPs was through the 10 Regional Educational Laboratory (REL) contracts that began in 2012. RELs are authorized to conduct applied research, provide technical support, and offer opportunities to learn about research findings. The 2012 RELs have worked with about 80 RPPs (known as research alliances), each organized around a specific topic and an education goal whose achievement requires education research.
The research alliances supported by the RELs vary in their composition (e.g., state personnel only; personnel from several districts) and in participants’ initial comfort level with research. Some research alliances have started with data inventories, moved on to descriptive research using administrative data, and examined impact research conducted elsewhere to identify interventions—including those with studies reviewed by the WWC—that might work in their context. Several alliances are conducting their own impact studies, including quick-turnaround randomized controlled trials. Working with their alliance members, RELs have developed new and useful tools, guides, research summaries, videos, and webinar series—all of which started from local interests but are intended to benefit educators across the country. Those who dismiss the RELs’ role in IES’s capacity-building and dissemination infrastructure often have an incomplete understanding of the new directions the program has taken in the past four years.
The two IES centers that award research grants—the National Center for Education Research and the National Center for Special Education Research—support many grants that require formal partnerships between researchers and agency personnel. These include grants under the researcher-practitioner partnership program, which has supported 28 partnerships across the country; 22 large-scale evaluations of state and local programs and policies; and a new program for low-cost, short-duration studies, which will announce its first awards this spring. These grants share two critical features: (1) the program or issue under investigation is one that education officials identified as a priority, and (2) a staff member from the education agency is named to co-lead the study with an independent researcher. The investment in grant-funded RPPs and state and local evaluations has grown steadily since the first grants were awarded in 2009 and now totals close to $100 million.
The pre-doctoral and post-doctoral training grants funded by these same two IES education research centers now explicitly emphasize preparing new researchers with the skills and experiences they will need to work productively with practitioners and policymakers. IES recommends that pre-doctoral fellows complete an internship at a local or state education agency.
At the National Center for Education Statistics, the 2015 State Longitudinal Data Systems (SLDS) grants seek to incent the use of these systems for research and evaluation. SLDS grants are made directly to states. A large share of the 16 states that received SLDS grants in 2015 will undertake evaluations using their data systems, and together they will receive approximately $100 million for this work. This will provide a test of whether the type and quality of research differs when states, rather than researchers, are the grant recipients.
It also must be said that, despite the current high enthusiasm for RPPs, there is a great deal we do not know: whether and under what circumstances these partnerships contribute to more thoughtful incorporation of research into decisionmaking, and whether the costs of this approach are commensurate with the capacity built and the amount and quality of research generated.
Local exceptionalism is a dead end
One of the arguments for RPPs is that education leaders find studies conducted with their own data more persuasive than studies conducted with someone else’s teachers or students. This is a compelling argument: the bright light of one’s own data can dispel myths about how serious a problem is (or isn’t), which students are affected, and whether trend lines are going in the right direction. Descriptive analysis of local data is a foundation for identifying whether and where intervention is needed and is a common undertaking of RPPs.
But should we encourage local education leaders to study whether an intervention works with their own data, even when rigorous studies of that intervention already have been conducted elsewhere? Given infinite resources and local capacity, the answer might be “yes,” as long as study results could be combined: more studies of an intervention provide a better picture of whether, when, and for whom an intervention works.
However, resources for education research are relatively scarce. The available financial and human capital is insufficient for every state and district to undertake its own impact studies.
Even beyond the resource issue, there are good reasons for not indulging local exceptionalism with regard to learning what works in education. Imagine if every hospital demanded to study for itself whether a particular procedure was effective, despite the firm conclusions of previous studies. Would we tolerate medical professionals who would be convinced of a procedure’s value only if a study were conducted in their own contexts and on their own patients? Have any of us as patients asked our doctors to provide local evidence that a prescribed treatment works?
In the new policy environment created by ESSA, where states and districts are required to examine the impact of the interventions they are considering, a key challenge will be helping these education leaders thoughtfully assess what they can learn from research conducted by other people in other places. All impact research—even that conducted in one’s own context—is about what worked in the past. This research makes no promises about future outcomes in any context. Its role is to help educators make better bets about what is likely to work in the future.
Scholarship isn’t the bad guy
In the quest to engage state and local education personnel more fully in research, there is a danger of casting researchers as the sinners and education agency personnel as the saints—or at least the sinned-against. Who could possibly have better questions about education practice or policy, one might wonder, than agency staff themselves? Further, given the education problems that we face, can the federal education research infrastructure afford to support researchers who—among their other activities—build theories, fine-tune their measures and models, develop and debate precise terminology, and write for scholarly journals?
In response, I ask whether we can afford not to.
Let me be clear. IES supports research that is intended to produce practical benefits for education, often through the development and testing of new instructional strategies, classroom practices, learning materials, incentives, organizational routines, and systems. The path to those practical benefits, however, often leads through the careful scholarly work of developing and expanding constructs, measures, frameworks, and theories. This work is conveyed via scholarly publications and conferences to other researchers who, in the routine of their scholarly work, make further refinements and extensions. These activities may not be of great immediate interest or benefit to most education agency staff and will probably not be carried out in the context of RPPs. But, ultimately, these scholarly activities help to organize insights and inquiry—including, no doubt, those of future RPPs—and suggest new approaches for improving practice.
Moreover, a healthy investment strategy for generating practical benefits in education has room for questions asked by education agency insiders and by outside researchers who may be in a better position to raise issues that might be unsettling and uncomfortable. Every system needs fresh ideas and questions that come from the outside in addition to those generated from within.
IES’s investment approach
As the complexity and challenge of bridging research, policy, and practice in education have become clearer, IES’s investment strategy has evolved. The current approach strikes a balance between supporting research endeavors that require significant local partnership in developing questions and implementing studies, including RPPs, and those that require less. IES’s research investments have yielded practices and interventions that are both usable and widely used. Given this track record, and given that the jury is still out on the extent to which RPPs will contribute to generalizable knowledge and capacity building, this balanced approach seems well warranted.
At the same time, IES continues to support a research and data infrastructure that includes the training of new researchers who are increasingly familiar with the world of education practice; expansion and use of state longitudinal data systems for evaluation; a central source of syntheses of findings from impact studies; and a networked system for distribution of and engagement with findings from high-quality research in every region of the country. This infrastructure leverages additional investments in evaluation through the Department of Education’s broader strategy of incorporating “evidence requirements” into its discretionary grant programs, of which I3 is the most notable example.
Impatience is understandable. Nevertheless, bringing about a more evidence-reliant education system is a long, long game. Not all of the pieces are yet in place. We will need continued creative thinking about how to bridge research, policy, and practice. But it’s worth reminding ourselves how far we have come since 2002, when the creation of IES heralded the development of a new federal infrastructure for education research. With that infrastructure in place, we have generated research with immediate relevance, as well as a strong foundation for making further improvements in how we connect research to policy and practice.
Ruth Curran Neild is the Deputy Director for Policy and Research, Delegated the Duties of the Director of the Institute of Education Sciences, U.S. Department of Education.