Policy Roundtable Seminar Focuses on Stimulating Innovation in Government

The National Academies’ Policy Roundtable of the Behavioral and Social Sciences held a seminar on October 30 focused on “Stimulating Effective Innovation in Government.” The Roundtable is chaired by David T. Ellwood of the Harvard Kennedy School and, beginning in 2015, will be directed by Arlene Lee, Director of the Committee on Law and Justice. For more on the Roundtable, see COSSA’s coverage of its last meeting.

Roundtable members are government users and producers of social and behavioral science research, as well as behavioral and social scientists who have spent time in government (the list of members is available on the Academies’ website). The Roundtable provides a forum to “explore ways in which the behavioral and social sciences can better inform and otherwise contribute to more effective and efficient policies and programs of government.”

The seminar highlighted several current and past government efforts to implement innovative policies and programs. Speakers from each program addressed three overarching questions:

  • How can innovations be stimulated?
  • How can we efficiently learn about whether innovations produce positive impacts?
  • How can we use lessons of innovations to manage, reshape, and expand programs?

Innovation in Education

Robert Slavin, Chairman of the Board of the Success for All Foundation, spoke about the Department of Education’s Investing in Innovation Fund (i3), a competitive grants program that aims to fund programs demonstrated to improve student achievement or growth, close achievement gaps, reduce dropout rates, raise high school graduation rates, or increase college enrollment and completion rates. Three tiers of funding were available, with the largest awards going to the projects with the most rigorous evidence behind them:

  1. Development grants ($5 million over five years), awarded to projects with a strong theory behind them, but not necessarily evidence;
  2. Validation grants ($30 million over five years), awarded to promising programs to more rigorously evaluate their impacts; and
  3. Scale-Up grants ($50 million over five years), awarded to expand the reach of programs proven to have positive impacts.

So far, the Department has awarded 117 i3 grants across the three levels; however, only five were for the large scale-up grants. The first grants were awarded in 2010, so the first round of programs has not yet finished the initial five-year period. However, many of the programs (particularly the scale-ups) seem to be having positive impacts. Slavin argued that the real value of the i3 program is that it is helping to build up the evidence base for education. As the program develops, policymakers will be able to use its findings to tie programs to evidence, hopefully eliminating the political push and pull of different administrations on education policy.

Meredith Farace of the Department of Education’s Office of State Support discussed the Race to the Top program, which, beginning in 2010, awarded $4 billion to 19 states to incentivize systemic education reform across four areas: (1) standards and assessment, (2) effective teachers and school leaders, (3) building data systems, and (4) turning around low-performing schools. Farace shared some of the lessons about fostering innovative approaches that she took away from the competition. The conditions for innovation had to already be in place (such as support for change, existing momentum, and a need for the funding to move forward). Successful applications also needed a high level of specificity and evidence of promise for their plans. Finally, the competition gave states flexibility to try out new ideas and learn from what was and was not working. Race to the Top incentivized states to move away from a compliance-driven model toward one that privileges collaboration, investment in comprehensive supports and tools, and the use of data to inform decision-making.

Another important aspect of the Race to the Top program was the relationship the Department developed with grantees, even creating a new office—the Implementation Support Unit—to enhance the partnership. The Department was very transparent about when and how progress and outcomes would be measured, according to Farace. They approached program review as a continuous dialogue, not something to be sprung on states at the end of the grant period. The Department also recently created the Office of State Support to help rethink how the Department of Education interacts with states and act as a single point of contact across programs.

Innovation in Social Welfare Programs

Howard Rolston, Abt Associates, discussed public assistance waivers between 1987 and 1996. During this period, the government gave states greater flexibility in how they distributed public assistance funds, particularly from Aid to Families with Dependent Children (AFDC or welfare, which later became TANF), so long as the states could prove that their policies cost the same as or less per person than the federal government’s policy (that is, that they were cost neutral). To determine cost neutrality, states were required to randomly assign participants to control and experimental policies, essentially creating a test lab for welfare policy. In all, there were 80 approved demonstrations in 45 states, which produced new data on policies and programs and familiarized states with using experimental designs to evaluate policies.
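
To make the mechanics concrete, here is a minimal sketch of how random assignment supports a cost-neutrality comparison. The function names and the `cost_under_policy` callback are hypothetical stand-ins for the cost records a state would draw from its administrative data; this illustrates the general design, not any state’s actual procedure.

```python
import random
import statistics

def compare_costs(case_ids, cost_under_policy, seed=2014):
    """Randomly assign cases to the experimental or control (federal)
    policy, then compare average per-person cost between the two arms.

    `cost_under_policy(case_id, arm)` is a hypothetical callback that
    returns the observed cost for a case under the given policy arm.
    """
    rng = random.Random(seed)  # fixed seed so assignment is reproducible
    costs = {"experimental": [], "control": []}
    for case_id in case_ids:
        arm = rng.choice(["experimental", "control"])
        costs[arm].append(cost_under_policy(case_id, arm))

    mean_exp = statistics.mean(costs["experimental"])
    mean_ctl = statistics.mean(costs["control"])
    # Cost neutrality: the experimental policy costs the same as or
    # less per person than the federal control policy.
    return mean_exp, mean_ctl, mean_exp <= mean_ctl
```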

However, many of these innovations were driven by politics, not evidence, and it was difficult to isolate key variables because the experiments were designed only to compare all treatments to a single control (the federal policy). Rolston concluded that while it is feasible to trade flexibility for rigorous evaluation, doing so requires the authority to approve experimental designs and the ability to convince states that rigorous evaluation is in their best interests. Even then, the approach is not sufficient on its own for innovating in and evaluating public assistance.

Mark Testa, University of North Carolina at Chapel Hill School of Social Work, explained how the Title IV-E child welfare waiver program (which expired in September 2014) allows states to test innovative approaches to improving child welfare. The program inherited its structure from the public assistance waivers discussed above, where states must prove their policy to be cost neutral. If the state policy proves to be cheaper than the federal policy, the state keeps the difference, providing a powerful incentive for innovation. He also observed that there are many low-cost ways to build evaluation directly into routine operations (such as using administrative data or easily programmable randomization routines).
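
One such routine might look like the following sketch: a deterministic, hash-based assignment that could run inside an ordinary intake system, so the same case always lands in the same arm with no separate assignment database. The function and parameter names are illustrative, not drawn from any actual Title IV-E system.

```python
import hashlib

def assign_arm(case_id: str, demo_name: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a case to 'treatment' or 'control'.

    Hashing the case identifier (salted with the demonstration name)
    yields a stable pseudo-random draw, so the assignment can be
    recomputed from administrative data at any time.
    """
    digest = hashlib.sha256(f"{demo_name}:{case_id}".encode()).hexdigest()
    draw = int(digest[:8], 16) / 16**8  # uniform value in [0, 1)
    return "treatment" if draw < treatment_share else "control"

# Example: route a new case at intake (hypothetical identifiers).
print(assign_arm("case-000123", "demo-2014"))
```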

In 2011, reflecting unease with the idea of random assignment in government programs, Congress stipulated that the Department of Health and Human Services could not consider randomization in its decision to approve a waiver. Most subsequent waivers assigned all individuals in the demonstration to the “treatment” policy and relied on projections to estimate cost neutrality, which is less responsive to sudden changes. However, Testa pointed out, not every promising intervention produces positive results.

Scott Cody, Vice President at Mathematica Policy Research, discussed potential applications of “rapid cycle evaluation” for the Department of Agriculture’s Supplemental Nutrition Assistance Program (SNAP). Rapid cycle evaluation is a technique often used in the private sector to quickly test the impact of small, incremental changes. It generates evidence rapidly to show whether a change was effective, allows the program to target strategies to different types of participants, and is embedded within a continuous improvement framework (especially important for a program like SNAP, which needs to adapt quickly to changes in the economy, labor market, and technology).

Rapid cycle evaluation requires high-quality data, a “testable” change, a short window of impact, and a sufficient flow of clients (to ensure statistical validity). In the case of SNAP procedures for intake and eligibility, it could be used to evaluate policy simplification (like waiving face-to-face interviews or lengthening the period for which someone is certified to receive benefits), administrative restructuring, new technology, and partnerships with community organizations to assist in outreach and intake. This type of evaluation can be used to determine whether a given change has increased program participation, increased the amount of benefits, improved labor force participation, improved program integrity, or increased administrative efficiency.
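
As a rough illustration of the statistics behind a rapid cycle, the sketch below compares an outcome rate (say, completed applications) between cases that received a tested change and cases that kept the status quo, using a standard two-proportion z-test. The names and numbers are hypothetical; an actual SNAP evaluation would involve more careful designs and adjustments.

```python
from math import sqrt
from statistics import NormalDist

def rapid_cycle_result(success_new, n_new, success_old, n_old, alpha=0.05):
    """Two-proportion z-test: did a tested change shift an outcome rate
    (e.g., completed applications) relative to the status quo?"""
    p_new, p_old = success_new / n_new, success_old / n_old
    pooled = (success_new + success_old) / (n_new + n_old)
    se = sqrt(pooled * (1 - pooled) * (1 / n_new + 1 / n_old))
    z = (p_new - p_old) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return {"lift": p_new - p_old, "z": z,
            "p_value": p_value, "significant": p_value < alpha}

# Example: 480 of 600 clients completed intake under a waived interview,
# versus 420 of 600 under the existing procedure (hypothetical numbers).
print(rapid_cycle_result(480, 600, 420, 600))
```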

Innovation in Public Health

Evelyn Kappeler, Director of the Department of Health and Human Services’ (HHS) Office of Adolescent Health, discussed HHS’ Teen Pregnancy Prevention initiative, a $105 million competitive grants program created in 2010. Like the i3 grants program discussed above, Teen Pregnancy Prevention awards are divided into tiers based on the amount of existing evidence behind the project, with the majority of funding used to implement approaches already known to work. These programs are targeted at a variety of settings, not just school sex-education classes. The smaller tier of grants is devoted to research and demonstration programs, which test new and adapted strategies (many of which come directly from those with experience in the field). Evaluation is embedded into both tiers of funding. The projects already underway will help build the evidence base for the next round of funding.

The subsequent discussion between the speakers and Roundtable members highlighted several of the main themes touched on during the presentations, including the importance of integrating evaluation into program design, the challenge of addressing discomfort with the idea of randomization, and the necessity of a continuous improvement framework for building on what works. Participants also raised the question of how to overcome the very strong incentives to emphasize positive impacts and to minimize or spin negative or null findings.

Those interested in the Roundtable may also find the recent National Research Council report Furthering America’s Research Enterprise of interest (as well as COSSA’s coverage of the report).
