IMPROVING IMPROVEMENT: HOW DISTRICTS PERCEIVE IMPROVEMENT SUPPORTS

David Hersh | Proving Ground

Volume 4 Issue 3 (2022), pp. 18-22

This is the eighth installment of Improving Improvement, our quarterly series focused on leveraging the power of research-practice partnerships (RPPs) to build schools’, districts’, and states’ capacity to improve. Last school year, we dove into the questions we hoped to answer, reflected on how to answer those questions and evaluate the success of improvement efforts, and shared one big reflection on the biggest barrier to improvement for our partners.

In this installment, we examine another big question: To what degree has our partners’ capacity to improve increased over the last 5 years? We recently surveyed all of our partners, past and present, to get at exactly this question. Here, we discuss the results –and limits– of this survey.

A Refresher on How We Define Success

Back in our December installment of this series, we shared how we determine whether an improvement effort we are supporting is succeeding. In defining success, we wrote that “Success for us… means our partners hav[e] institutionalized the practice of continuous improvement so thoroughly that they continue practicing the core competencies of improvement with fidelity for all strategically aligned problems of practice long after our engagement ends.” We identified 9 core competencies [1] that make up our definition of evidence-based continuous improvement, as well as leading indicators of success to look for in three phases:

Phase 1: Do our partners execute a high-quality improvement cycle while working with us? How well do they execute each competency, and to what degree is the decision outcome-optimizing?

Phase 2: Based on their work with us, how confident are we that they are able to do this without us – can they generalize from the model we worked on together to other problems of practice? Additionally, how confident are we that they are willing to do this without us – have we created the internal demand?

Phase 3: Are they doing this after our engagement ends? How well and for how many of their strategic priorities?

Most of our available data from the survey informs Phase 1 and Phase 2 questions. We can begin drawing some inferences about Phase 3 questions, but do not yet have data collection set up for a thorough analysis.

A Little Proving Ground History

As we consider the data we have available to answer the questions in each phase, we need to take into account that how we support our partners in selecting, piloting, testing, and scaling solutions to their problems has evolved substantially over time. This is true in terms of the delivery model partners experienced, the skills we have covered, and the tools we provided. Practicing what we preach, we have been iterating on our model as we’ve seen partners engage with it. For example, after our initial cohort of partners, our hunch was that they were not well positioned to continue using the process without us. We therefore revised both the content and our delivery to better support authentic capacity building and make the process easier to execute without us. We would, therefore, expect more recent partners to be trending better on our measures of improvement capacity than where our initial cohort was at a similar stage in their engagement with us.

Our first cohort of partners –all of whom were working on improving attendance– experienced intensive one-to-one support from a single coach who conducted three in-person workshops at each partner site each year, with bi-weekly phone check-ins in between, for at least three years. The coach facilitated all sessions live with paper and pen or sticky notes and sharpies. Proving Ground analysts conducted partners’ diagnostics on their behalf, and there was intensive support for root cause analysis, intervention prioritization, pilot design, implementation planning, impact analysis –which Proving Ground analysts conducted– and decision-making. Intervention design support was added in their second year. We provided limited support around the other core competencies, most of which were added to the process only later in our engagement with them. Finally, while partners were always members of a network, Proving Ground operated as the hub connecting them, with our annual convening the only built-in opportunity for partners to directly engage with each other.

By contrast, our current partners may experience one of two delivery models, both of which are primarily virtual and both of which are far more partner-led than what the first cohort experienced. For all partners, we provide at least one session for each competency. The 3 site visits a year have been replaced by 9 core Zoom sessions over two years. In each of the delivery models, a Proving Ground coach supports each partner, but rather than leading activities for the partner, our coaches model each activity and then provide support and feedback as each partner leads it themselves. The majority of the improvement tools we use now allow for synchronous and asynchronous digital collaboration, and we provide intensive support on all competencies. For our intrastate model, in which districts within a state move through the process as a cohort, partners participate in joint workshops with their colleagues from other districts. For our classic model, workshops remain one-to-one. Finally, while we continue hosting an annual convening and the intrastate cohort model naturally provides more opportunities for partners to connect with colleagues in other districts, all partners now also have access to every other partner in the network through a Proving Ground-hosted Slack Workspace and optional monthly meetings.

We have had several partners experience multiple versions of the service. For example, one partner participated in the initial attendance cohort and is now participating in our classic, one-to-one model to improve math outcomes for elementary and middle school students.

Our hypothesis is that, because current partners experience more ownership of the process and more frequent engagement with and detailed support for all competencies, they will be better positioned to do continuous improvement work by themselves. We also hypothesize that they will be more engaged and more likely to want to continue doing the work after our engagement ends, though there is a fair argument that the opposite might be the case (e.g., if they don’t enjoy the work, the more comprehensive approach might reduce the degree to which partners take ownership of the process).

Preliminary Findings from our Partners’ Self-Reports

This summer, we surveyed all of our current and past partners, primarily to get a sense of how they perceive the improvement supports we have provided so far. Questions ranged from (1) how much partners valued each activity or tool in the process (some competencies comprise multiple activities), to (2) whether the process helped them get better at the competencies, to (3) their likelihood of engaging in the activity –or using the corresponding tools– in the future. All questions were on a 5-point scale ranging from Strongly Disagree to Strongly Agree. For each question, we also invited open-ended feedback. The data thus largely inform Phase 1 and 2 questions, but we can also generate some hypotheses about Phase 3 questions. There are limits to the conclusions we can draw about agency-level learning and adoption of the practices we train them on, since respondents answered the survey as individuals and not as representatives of their agency.

Who took the survey?

Not surprisingly, we had a better response rate from current partners. Of 32 individual responses, 28 were from current partners while 4 were from past partners. The 32 respondents represent 20 partners, 16 current and 4 past. We thus have responses from half of our 8 past partners and from nearly 90% of current partners (16 of 18). Most partners had only one respondent, but 40% (8 partners) had multiple respondents.

Overall Findings

Our partners have generally found all of the tools and activities valuable. Of the 14 tools and activities covering the 7 competencies that nearly all our partners have already engaged with –many partners have not yet gotten to the decision-making and goal reflection competencies– the lowest rated had 75% of respondents agree that it was valuable. For most tools and activities, 80% of respondents found them valuable, with the highest rated generating 100% agreement. Several activities garnered strong agreement from over half of participants. The qualitative responses echoed this. For example, one respondent shared, “The total planning process has been a great experience and we have been introduced to a number of useful planning tools that have generated rich discussions within our group.” Another wrote, “We found all the tools to be extremely helpful. Additionally, the facilitation was exceptional.” Participants also seemed to appreciate the intrastate cohort model’s facilitation of cross-district collaboration: “I…most appreciate learning from other districts.”

With respect to the Phase 2 questions of whether partners have built capacity to use the activities and tools and are likely to continue using them in the future (and, for past partners, the Phase 3 question of whether they continue executing the process), the results are more nuanced. We have been generally successful at building partners’ perceptions of their own capacity. Nearly 80% of respondents agreed that “the PG process has helped [them] improve [their] team’s continuous improvement… efforts,” and nearly 85% feel they are better equipped to do so. At a competency level, the results are similar. For example, a large majority agreed that they were better equipped to understand their challenges (>90%) and to identify potential solutions (~84%). The qualitative responses suggest one of the greatest successes is in challenging existing problem-solving approaches. For example, one participant shared, “it is very systematic. It also makes you not jump straight to what you think a solution to the problem might be before you really dive into the why.” Another said, “the process forces us to look at issues more objectively and challenge assumptions about why things are a certain way.” At least half of participants –and in some cases over 80%– indicated they were likely to continue engaging in a given activity in the future.

On the other hand, fewer participants indicated they were likely to continue doing each activity than found that activity valuable. The gap was generally around ten percentage points for agreement and larger for strong agreement. The activity-level responses are consistent with the reflections on the overall Proving Ground process. While nearly 86% of respondents agreed that they felt better equipped to do this work, a bit fewer (80%) indicated that they were likely to use the process overall to address other challenges. Likewise, just over one in three stated that they had applied it to another problem to date, though some of this is a product of how early many partners are in the process. Nevertheless, the results suggest that we’ve had some success, but we also have more work to do before we can be confident that partners will continue executing this process after our engagements end. Our hypothesis is that institutional disincentives and the degree to which this process runs counter to some individuals’ instincts remain barriers to sustained adoption. The qualitative responses, however, suggest that time may be the biggest barrier to continuing this work, at least for the few not agreeing that they would continue many of the activities. Said one, “A district could never replicate this process due to the amount of time and manpower required.” Said another, “Too many steps from vision to execution.”

Past vs. Current Partners

The number of past respondents is small enough –and the likelihood of non-response bias high enough– that we hesitate to draw any conclusions from differences in responses between past and current partners. That said, there is little evidence to support our hypothesis that we were less successful with past partners than current ones. The former participants that responded were more likely to agree that the process helped improve their teams’ improvement efforts (100% vs. 75%), similarly likely to agree that they were better equipped to engage in continuous improvement work (75% vs. 85%), and similarly likely to agree that they will use the process for other problems and challenges (75% vs. 79%). As expected, former participants were more likely than current partners to say that they had already used the process on other problems and challenges (75% vs. 29%).

Takeaways

The data from our recent survey serves as one of many sets of data points we will use to learn whether we are achieving Proving Ground’s goals. Despite its limits, the survey informs our questions in several ways. First, our partners generally value the process overall and the tools and support we provide. This suggests that a key element of engagement is present. Similarly, though we cannot objectively say from this data whether our partners are actually building capacity to the degree we want them to, they agree that they are learning, with a large majority of participants agreeing that their teams are getting better at parts or all of the process. On the other hand, although the intent to continue the process is relatively high, agreement rates for continuing are lower than those for value and for self-reported capacity change. Thus, despite some successes, our efforts to ensure partners continue systematically solving problems after our engagements end likely need to be improved. The barriers our partners identified, most notably the time the process takes relative to how little they have, suggest a good place to start.

Looking Ahead

Reflecting on what we’ve done so far, the Proving Ground team took a step back to refine the Core Values that drive our work, and we continue iterating on the service we provide. In upcoming installments of Improving Improvement, we will discuss our Core Values, share early learnings from the launch of a new service delivery model called Proving Ground Jumpstart, and provide more detailed updates on the progress of our intrastate networks –the Georgia Improvement Network, the Rhode Island LEAP Support Network, and the new Ohio Improvement Network– including lessons learned from partnering with states to support districts on their improvement journeys.

We are also always open to suggestions for topics for future editions of Improving Improvement. Reach out to us with any questions you have about our networks or our continuous improvement process, or with ideas you’d like to see us tackle.

David Hersh (david_hersh@gse.harvard.edu) is Director of Proving Ground.

[1] The 9 core competencies are:

1. Clearly define the problem and set an improvement goal for it
2. Identify root causes
3. Identify a set of potential interventions aligned to the root causes
4. Prioritize a potential intervention from that set to try
5. Design the intervention using user-centered design principles
6. Plan for implementation and progress monitoring
7. Pilot to generate evidence of impact
8. Use evidence from the pilot to decide whether to stop, scale or adapt
9. Reflect on the results in light of your improvement goal

Suggested citation: Hersh, D. (2022). Improving Improvement: How Districts Perceive Improvement Supports. NNERPP Extra, 4(3), 18-22. https://doi.org/10.25613/3FK4-AJ83

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships | nnerpp.rice.edu