IMPROVING IMPROVEMENT: TESTING THEORIES OF INSTITUTIONAL CAPACITY BUILDING

David Hersh | Proving Ground

Volume 3 Issue 3 (2021), pp. 16-18

This is the fifth installment of Improving Improvement, our quarterly series focused on leveraging the power of research-practice partnerships (RPPs) to build schools’, districts’, and states’ capacity to improve. In our first year or so of writing for NNERPP Extra, we have shared an overview of our improvement work, lessons learned from working with existing partners during the pandemic, lessons learned from creating and launching an improvement-focused RPP in response to the pandemic, and results of pandemic-year interventions. We also argued for using stimulus funding to invest in an improvement infrastructure with RPPs playing a central role.

In the next few installments, over the course of the 2021-2022 school year, we will again be writing about engaging partners in improvement work during a year filled with unknowns, but with a more explicit focus on how we are helping states and districts make continuous improvement an embedded practice rather than an add-on. We begin here by laying out the work ahead for our partnerships and the questions we hope to answer this year. As the school year progresses, we will update you with lessons we learn along the way and share any insights we hope you might find useful. We’ll close out the year with reflections on how it all went.

New Partnerships & New Networks

Our ultimate goal is for our partners to make evidence-based continuous improvement, including piloting and evaluating interventions, part of the ordinary course of business in their agencies. Toward that end, one lesson learned from three-plus years of working with districts and charter management organizations (CMOs) is that building individuals’ competency in the key elements of continuous improvement is a necessary but not sufficient condition for full adoption. Educators work within an institutional framework that often creates barriers to the kinds of practices we promote. For example, after guiding partners through a full improvement cycle last year, we asked whether they would continue using the practices we had worked on with them. One partner answered that they could but most likely wouldn’t. They cited several reasons, but a key one was that competing pressures and priorities get in the way of the kind of deep thinking our process embraces: the classic “urgent vs. important” dilemma.

To meet our goal of embedded, evidence-based continuous improvement, then, we need to begin by breaking down those barriers. Because many of those barriers derive from state-imposed requirements that either conflict with or come in addition to good improvement work, we spent much of 2020-2021 recruiting states to support intrastate networks designed to better align our work with state requirements. The key was that state support would include not just funding but also an endorsement of the work our partners would do as part of, rather than in addition to, state requirements. We had several states in mind based on prior interactions, and two new states signed on to begin work this year.

In Georgia, the Department of Education (GaDOE) is sponsoring a five-district network focusing on attendance and engagement. Georgia has its own continuous improvement cycle for districts to use in developing their improvement plans. Districts using Proving Ground’s improvement process will effectively be building organizational and individual capacity to execute the state’s model well. At the same time, GaDOE staff charged with supporting their districts will learn additional tools and strategies to help them more effectively engage in continuous improvement work as they collaborate with us.

The second state[1] plans to go a step further. The state hopes to adopt our model as the model for supporting its highest-need districts and to align it with its accountability and support system. As this goes to press, the state is finalizing the participant list, with anywhere from five to eight districts expected to participate. To further the incorporation into the broader strategic planning process, districts will have a choice of priority outcomes on which to focus, based on their review of their own data.

These new state networks join our existing Ohio attendance network, two partners focusing on math, and over 40 rural partners in the National Center for Rural Education Research Networks. All the networks will learn from each other; each is testing a different hypothesis about how best to make improvement methods part of the ordinary course of business in education agencies.

New Delivery Models

We’re also testing other hypotheses by trying new models of delivering support to our networks. Our existing models all involve district partners sharing data with us: Center for Education Policy Research (CEPR) analysts conduct impact evaluations on behalf of these districts, and we incorporate the results into the decision-making process. Now, two new models will assess the viability of building districts’ capacity to evaluate impact themselves. The theory behind this is that, to be truly sustainable and impactful, districts will ultimately have to do all parts of the process themselves. Both models will test this, but each will also test one more hypothesis about how best to build capacity and increase the odds of self-directed improvement taking hold.

The first, which we are calling Proving Ground Accelerator, launches this year with both new intrastate networks. Accelerator will test the hypothesis that district teams can be directly trained to institutionalize continuous improvement by assessing the degree to which we can help partners build not just individual but also institutional capacity. All content is delivered in a workshop series in which network members learn the competencies of continuous improvement alongside each other. Workshops will follow an explain-model-practice framework (read more about this approach here). Partners will also receive support from Proving Ground Improvement Coaches as they execute their improvement cycles on their own between workshops. Accelerator also includes a Superintendent’s Institute to help district leaders create and support the conditions for effective continuous improvement, as well as workshops focused explicitly on incorporating continuous improvement into strategic planning and on transitioning and sustaining the work.

The second new model, which we are calling Jumpstart, launches next year to test the hypothesis that districts will be more likely to adopt improvement methodologies if we make it easier for them to do so. To that end, we created a web application to guide districts asynchronously through continuous improvement cycles. While all partners will have access to the web app, Jumpstart districts will receive only targeted support in using it. We will start with five pilot districts in Summer/Fall 2022. For now, we are recruiting participants to provide feedback on this model before we launch the pilot.

New Outcomes

Our last hypothesis for the year is that full adoption of an improvement methodology requires demonstrating its utility for any outcome educators might be interested in improving. To that end, when we recruited partners this year, whether at the state or district level, we did not recruit around particular outcomes. Instead, we let partners choose their desired topics and then tried to include multiple districts for each one. The result was three groups: attendance, math, and a third in which districts will choose from a set of state priorities based on their data.

Looking Ahead

In future installments of Improving Improvement, we’ll share updates on the progress of each of our networks and lessons learned for practitioners hoping to bridge the research-practice divide.

We are also always open to additional suggestions for topics for future editions of Improving Improvement. Reach out to us with any questions you have about our networks, continuous improvement process, or ideas you’d like to see us tackle.

David Hersh (david_hersh@gse.harvard.edu) is Director of Proving Ground.
[1] We have a contract pending. Until it is signed, we will not be able to name the state publicly.

Suggested citation: Hersh, D. (2021). Improving Improvement: Testing Theories of Institutional Capacity Building. NNERPP Extra, 3(3), 16-18.

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships  |  nnerpp.rice.edu