WHAT’S NEW WITH RPP EFFECTIVENESS?

Paula Arce-Trigatti (NNERPP)

Volume 3 Issue 2 (2021), pp. 9-14

INTRODUCTION

Although RPP effectiveness, including how to define and assess it, remains a critically important topic at NNERPP, it has been a while since we last shared a piece exclusively about this for NNERPP Extra (here is the most recent one and here is the one before that). Elsewhere, we have organized a number of learning opportunities for those interested in getting up to speed with RPP effectiveness, through:

  • Providing up-to-date curated content and resources on RPP effectiveness on the NNERPP RPP Knowledge Clearinghouse, an online repository of RPP-related artifacts covering all kinds of partnership topics;
  • Hosting conversations about RPP effectiveness at every NNERPP Annual Forum since 2016 (incidentally, we first workshopped the now well-read Henrick, et al. (2017) framework with attendees at the 2016 Annual Forum);
  • And holding several formal and informal conversations with NNERPP members and friends, through our monthly virtual brown bags, external conference presentations, and ad hoc exchanges.

Given all of these efforts, we thought it was time to check back in on where we are with RPP effectiveness, which will also help us see where we should go next. Here, we provide a recap of major efforts around defining and assessing the effectiveness of RPPs, highlight the outstanding questions that remain, and finally offer a couple of new ideas for the field to consider going forward (feedback welcome!).

RECAP

We begin this recap with the Henrick, et al. (2017) framework, as it represents a pivotal moment in our quest to understand RPP effectiveness. This white paper helped advance our conversations from one-off, singular contributions to the public sphere on understanding how and when RPPs work (e.g., Ralston, et al., 2016; Wentworth, Mazzeo, and Connolly, 2017) to a field-informed contribution. Thus, the Henrick, et al. framework can be seen as more than a large step forward in conceptions of effectiveness: it also synthesizes and marks where the RPP field stood at that moment.

Taking a step back to 2017, one of the major challenges at the time was to name and consider the different types of RPPs (e.g., design-based, research alliances, and networked improvement communities, from Coburn, Penuel, and Geil, 2013) and whether they could (or should) be assessed using similar metrics. One of the important features of the Henrick, et al. framework is that it represents a near-consensus, if you will, of the prioritized aims identified by RPPs of all three types – it documented five dimensions or outcomes that seemed to be important for “effective” RPPs, no matter the approach to partnership. 

Four years later, we have moved away from hard distinctions among the three types of RPPs, as the lines between each model have blurred considerably (Arce-Trigatti, Chukhray, and Lopez Turley, 2018), but the Henrick, et al. framework has endured. It has since anchored many conversations we’ve had within NNERPP and with others outside of our network, leading to a number of new developments that we highlight here (note that this is not an exhaustive literature review but rather key examples of artifacts that have emerged after Henrick, et al.):

>>Application of the Henrick, et al. framework to a large sample of RPPs by those studying RPPs 

Farrell, C., Davidson, K., Repko-Erwin, M., Penuel, W., Quantz, M., Wong, H., Riedy, R., and Brink, Z. (2018). A descriptive study of the IES researcher-practitioner partnerships in education research program. Boulder, CO: National Center for Research in Policy and Practice (NCRPP).

The Institute of Education Sciences (IES) asked a team of researchers at NCRPP to help them study IES’ (now defunct) Researcher-Practitioner Partnerships in Education funding program. In this descriptive effort, NCRPP developed interview questions and survey instruments based largely on the five dimensions from the Henrick, et al. framework. This technical report summarizes the instrument development process and key constructs, as well as findings across 27 RPPs that participated in the study.

>>Application of the Henrick, et al. framework to an individual RPP by those in an RPP

Goldstein, H., McKenna, M., Barker, R., and Brown, T. (2019). Research-practice partnership: Application to implementation of multitiered system of supports in early childhood education. Perspectives of the ASHA Special Interest Groups, pp. 1-13.

This paper is an example of an individual RPP working through the five dimensions of the Henrick, et al. framework, and examining how these dimensions apply to their own particular context. These reflective efforts are quite valuable to furthering our understanding of how (and whether) the five dimensions apply to a partnership’s context. 

>>Application of the Henrick, et al. framework to a community of RPPs from the same funding source

There are several ongoing efforts to apply the Henrick, et al. framework to the evaluation of Computer Science for All RPPs, spurred in large part by the National Science Foundation’s requirement that its funded projects include an evaluative component. The convener of this community of RPPs, RPPforCS, has also developed a “Health Assessment” tool based on the Henrick, et al. framework and asked five partnerships to reflect on their use of the tool.

>>Tools for using the Henrick, et al. framework to assess your RPP’s progress on its goals

Most recently, a small team at REL Southwest led by Carrie Scholz produced a tool grounded in the Henrick, et al. framework to assess the health of an RPP. This is a formative assessment meant to provide ongoing opportunities to check in with partners around the five dimensions identified in Henrick, et al. Importantly, the tool is meant to support improvement efforts around partnership health by asking members to “purposefully and honestly reflect on their collaborative work” and, moreover, “to make necessary adjustments over time”. Teams are invited to set quarterly goals around the five dimensions and are given guidance on how to monitor those goals via an Excel workbook containing prompts and interview protocols.

>>Tools for using the Henrick, et al. framework to assess your RPP’s developmental progression

In our own backyard, NNERPP has partnered with NCRPP to develop a set of research-based tools grounded in the Henrick, et al. framework, through a project funded by the William T. Grant Foundation. Using an evidence-centered design (ECD) approach, we are creating a survey and a set of interview protocols that all types of RPPs can use to get a sense for where they fall along a developmental progression. We have additionally incorporated an equity lens into each of the dimensions, an aspect that was not as explicit in the original framework. We are currently in Phase 2 of the grant, where we are testing a pilot survey and working with RPP stakeholders to develop an interview protocol.

OVERARCHING THEMES, OUTSTANDING QUESTIONS

The efforts we review above have brought us a long way in our conversations around RPP effectiveness and our ability to measure certain aspects of it. However, some prickly questions remain unresolved. Many of these ideas have been raised previously throughout our conversations with NNERPP members and friends, with no clear answers yet. We summarize a few of them here:

1. RPP goals are sometimes similar but oftentimes not. Through these advancements in understanding RPP effectiveness, we have affirmed our prior suspicion that it is still incredibly important to first understand what a given RPP’s goals are, and especially how those relate to the five dimensions of the Henrick, et al. framework, if that is the basis for the assessment. The extent to which any dimension from the Henrick, et al. framework will be applicable or a priority for a partnership will vary substantially across RPPs, even within the same “type”. At NNERPP, we took a deep dive into this idea for a previous NNERPP Extra article, asking members and friends to consider, for example, whether Dimension 4 of the framework, “Producing Generalizable Knowledge to Inform Efforts More Broadly”, is (or should be) applicable to all RPPs.

2. What do we mean by “RPP effectiveness”? There are a number of ways of framing questions around RPP effectiveness, including assessing for “success”, evaluating progress on goals, and checking the “health” of the partnership, just to name a few. These are all slightly different and will provide different information about the partnership. For example, Wentworth, et al. recently wrote about RPP “success”, where they focus very narrowly on the partnership’s role in “shaping district policies and practice” (i.e., assessing for impact and practice-side change). On the other hand, the notion of “effectiveness” may be somewhat broader, perhaps involving inquiry into whether you are meeting the goals you set out for your partnership (whatever those may be), many of which include process dimensions of the work rather than ultimate downstream impacts on practice. Finally, partnership health seems like something entirely different, invoking a sense of strength relative to weakness. In this case, we might be interested in asking: how strong is your partnership? As such, this may be less about goals and more about the connective relationships that support the work.

3. What is the intended use for the evaluation and who is using it? For example, an RPP’s evaluation could be intended as a formative assessment, like the REL Southwest tool introduced earlier, or it could be intended as a summative assessment, like what some of the evaluators are providing for the NSF CSforAll RPPs. One thing that remains unclear to us though is who is using this information, regardless of whether the assessment is summative or formative in nature. For example, do practice-side partners care about assessing the work or is this mainly a research-side endeavor? Does that matter? Are there mechanisms in place to help the partnership integrate findings from the evaluation? Whose responsibility is it to ensure that there are opportunities to improve based on the feedback? How will the partnership know whether it is indeed improving over time? 

4. What if you have limited resources to allocate to this work or are a small RPP? Apart from the NSF-funded CSforAll RPPs, which have to allocate some of their grant funding for evaluation, RPPs generally have a choice as to how much time, funding, and effort they will invest in assessing their effectiveness. Thus, the extent to which an RPP is able to assess its effectiveness may vary considerably depending on the availability of these resources. Moreover, this question also echoes the point made above around differences in RPP aims. How should we think about effectiveness for partnerships that are much smaller in scale, relative to those situated in large urban school districts, which tend to have more resources to draw upon? Additionally, does the potential to create partnerships that will have an impact on practice vary by the scale of operations? For example, is it “easier” to draw a line from RPP to impact for smaller partnerships and/or those that partner with smaller districts, since there may be fewer barriers on the way from decision to practice?

A NEW IDEA

As our understanding of RPPs continues to evolve and grow, we expect (and hope) that new ideas on how to measure their progress, impacts, and health also emerge. To that end, I share a new idea for how we might assess RPP effectiveness, based on a concept that has previously been introduced in the RPP literature to characterize partnership work but has not figured centrally in current discussions of RPP effectiveness. Our aim here is to propose a way of imagining effectiveness that is not yet fully developed, so that others may weigh in as well, hopefully leading to new thinking and discussion.

The notion of “joint work” is arguably the most universal aspect of partnership work that cuts across RPP type, approach, and stated aims. Of course, there may be instances of joint work happening outside of partnerships, but to truly be considered an RPP, there must be examples of joint work occurring between partners. In fact, we could go so far as to say that it is part of a partnership’s “DNA”. Therefore, one way to think about RPP effectiveness is to invite questions specifically about the joint work itself – e.g., how much joint work is indicative of an effective partnership? When do we know that there are too few instances of joint work happening? What does high quality joint work look like? …and so forth. This line of questioning may help us better understand how well we are partnering, and forms the basis for the new idea introduced here. 

But before we get there, what is “joint work”, really? And how do we know we are engaging in “joint work”? The Henrick, et al. framework first introduces it as an indicator of progress under Dimension 1: “Building trust and cultivating relationships”. In this case, the authors use it to characterize the investments partners must make towards the development of trust and relationships, i.e., that partners “routinely work together”, which results in “joint work” (page 5). This description is somewhat vague, as many things can count as routinely working together without being in the spirit of an RPP (e.g., researchers providing technical assistance to practitioners, which requires them to work together but is not necessarily what we would consider an RPP).

Somewhat fortuitously, I happened to recently re-read “Conceptualizing Research-Practice Partnerships as Joint Work at the Boundaries” by Penuel, Allen, Coburn, and Farrell (2015), in which, as the title implies, the authors argue for characterizing partnership work as more than a transactional endeavor whose main goal is to facilitate the translation of research to practice. Instead, they argue that the nature of partnership work is a kind of joint work that requires partners to engage in boundary crossing and boundary practices. Let’s first define these terms and then ponder what this has to do with RPP effectiveness.

Starting with boundary crossing, the authors define this as “an individual’s transitions and interactions across different sites of practice” (p. 188). In other words, these are the instances in RPPs where a “researcher” or a “practitioner” is called to contribute to partnership efforts in a manner outside of their primary job description or home organization. Examples can include a researcher providing thought partnership to their practitioner partners in a research area that goes beyond their expertise, or a practitioner helping prepare a conference presentation for an academic meeting. Partnership work thus calls on each “side” to take on activities or roles that they may not have previously, in service of the collaboration, which results in boundary crossing.  

Building on this idea, boundary practices are the “stabilized routines, established and sustained over time, that bring together participants from different domains for ongoing engagement” (p. 190). The authors call these activities “unfamiliar” to partners prior to working in partnership and “hybrid” in that they often emerge from a meshing of “R” and “P” voices, roles, or perspectives to create a new process or product. One of the examples provided in the paper describes the work of the MIST Project, where the partnership worked to co-create a regular theory of action report. This report called for the practitioners to provide their expertise in filling in the components of the theory of action (as opposed to researchers bringing in theory or prior research to fill in these components, as might happen in the absence of the RPP), while the researchers provided an external viewpoint for practitioners as they reflected on their process (as opposed to practitioners working through an internally driven theory of action with their organizational peers). The resulting theory of action report thus reflects a jointly constructed artifact that would not exist were it not for the partnership.

So, why should we care about boundary crossing and boundary practices? And what do they have to do with joint work and RPP effectiveness?

The authors suggest that “the joint work of partnerships requires participants to engage in boundary crossing, and that joint work is accomplished through boundary practices, which are routines that only partially resemble the professional practices of researchers and practitioners” (p. 187, emphasis mine). 

And so, here is the new idea: if all RPPs must contain examples of joint work, and joint work requires boundary crossing (which is enabled through boundary practices), what if we created measures to help us understand whether boundary crossing and boundary practices are indeed happening, as a way to measure joint work? And to connect this with RPP effectiveness: if we are able to measure how well partnerships are facilitating joint work, we may be able to get a sense for how well the partnership is doing – which ultimately tells us something about the effectiveness, success, or health of the RPP.

Elaborating further, encouraging partnerships to reflect on their efforts to create and support boundary crossing and boundary practices would give them a new set of tools (and a new perspective) for their assessment aims. For example, what if partnerships asked themselves: to what extent are we creating multiple opportunities (i.e., through boundary practices) for instances of joint work to occur? Or, what does it mean for partnerships to cultivate the conditions leading to joint work (i.e., leading to instances of boundary crossing)? There may be great potential to expand our understanding of the various elements related to RPP “effectiveness” simply by taking up these new terms and applying them to our thinking.

ONE FINAL THOUGHT

Since we already have you here, I thought we would call out one more idea that we have not yet seen taken up regularly in the discussion of RPP effectiveness. From what we have seen here at NNERPP, especially over the last year (and what is highlighted in “Research Insights” in this edition), an RPP’s potential or realized capacity to be responsive, adaptive, and nimble is critical in order to be considered “effective.” These three partnership features are not a part of the standard RPP effectiveness lexicon right now, and they probably should be. Despite the advances we’ve made in better understanding what makes an RPP productive, there is still no one “right” way to do partnership work…although if there is a “right” way, it may be that the partnerships which seem to grow, endure, and effect change are those that follow a responsive, adaptive, and nimble approach to partnering, all in service of their partners’ needs.

These attributes are not directly named in the Henrick, et al. framework, for example, and it’s not clear that they are showing up in our RPP effectiveness conversations either. And yet, what we know from our experiences with members and friends on the practice side is that “aims” are ever-changing, due to a variety of factors that are typically outside of the partnership’s purview. If we apply the idea that effective or productive partnerships are those that support their P-side partners in achieving their aims (Dimension 3 of the Henrick, et al. framework), then by default we are describing partnerships that must be responsive, adaptive, and nimble. (Again, these traits are exactly the ones we highlight in this very edition’s Research Insights contribution.)

While we could go on, this is a good place to pause and hear from you! What do you think about these new concepts of RPP effectiveness? Are you already measuring instances of boundary crossing and boundary practices? What do they tell you about your partnering efforts? Has your partnership had to pivot in the last year? What made that easier or harder? Let us know here!

Paula Arce-Trigatti is Director of the National Network of Education Research-Practice Partnerships (NNERPP).

Suggested citation: Arce-Trigatti, P. (2021). What’s New With RPP Effectiveness? NNERPP Extra, 3(2), 9-14.

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships  |  nnerpp.rice.edu