SHOULD RPPs BE RESPONSIBLE FOR PRODUCING KNOWLEDGE THAT INFORMS EDUCATION EFFORTS MORE BROADLY?

A MULTI-VIEW TAKE ON DIMENSION 4 FROM THE HENRICK, ET AL. RPP EFFECTIVENESS FRAMEWORK

Paula Arce-Trigatti (NNERPP)

Volume 2 Issue 1 (2020), pp. 8-17

We recently co-hosted a small meeting at NNERPP headquarters (Rice University) with our good friends at the National Center for Research in Policy & Practice (NCRPP) on research-practice partnership (RPP) effectiveness measures. Last year, NNERPP and NCRPP received a grant from the William T. Grant Foundation to develop a suite of measures that an RPP could use to make evidence-based claims about the effectiveness of their RPP. For this effort, we are building directly off of “Assessing Research-Practice Partnerships: 5 Dimensions of RPP Effectiveness,” a white paper written by Erin Henrick, Paul Cobb, William R. Penuel, Kara Jackson, and Tiffany Clark, and published by W. T. Grant in 2017. The Henrick, et al. Framework is currently the leading piece of literature in the RPP space for thinking about RPP effectiveness – it introduced, for the first time, five dimensions related to RPP effectiveness that partnerships should closely consider in structuring their work, sourced from a number of education RPPs in the field. While this Framework is a key starting point for those interested in assessing the effectiveness of their partnership, it stops short of providing a set of actual measures one can administer – this is the focus of the W. T. Grant funded work we are now collaborating on with our friends at NCRPP. 

One of our first tasks in this project was to examine the five dimensions outlined in the Henrick, et al. Framework in greater detail, which we took up with our participants at the January meeting. In this edition of Deep Dive, we share back several interesting insights that arose during our conversations with meeting attendees around Dimension 4 of the Henrick, et al. Framework: “Producing knowledge that can inform educational improvement efforts more broadly.” We invite you to join us as we explore a number of aspects of this dimension, including whether it should be universally applicable to all RPPs, how we might reconcile the fact that RPPs are necessarily hyper-focused on their local problems of practice while this dimension asks them to look beyond the partnership itself, and identifying some of the challenges related to fulfilling this dimension.

Dimension 4 – Universal Applicability?

As a first step during the January meeting, we took time to revise each dimension’s definition, dividing up our diverse group of participants into teams of 3-4, with each team tackling a dimension. In the course of updating the definitions, teams were asked to consider how that dimension might play out for different types of partnerships, depending on RPP model, age, or quality. As the authors note in the Henrick, et al. Framework, although the five dimensions (listed in Table 1 below) were collectively identified as critical to partnership health during their field-driven data collection, the degree to which any single dimension is a priority for an RPP varies widely. It thus is up to the interpreters or users of the Framework to determine how and to what extent each dimension applies to any given RPP.

Table 1. The 5 Dimensions of RPP Effectiveness from the Henrick, et al. Framework (2017)

Dimension 1: Building trust and cultivating partnership relationships

Dimension 2: Conducting rigorous research to inform action

Dimension 3: Supporting the partner practice organization in achieving its goals

Dimension 4: Producing knowledge that can inform educational improvement efforts more broadly

Dimension 5: Building the capacity of participating researchers, practitioners, practice organizations, and research organizations to engage in partnership work

For some of the dimensions, it's probably safe to assume that every partnership pays close attention to them – Dimensions 1 (trust building), 2 (conducting rigorous research), and 3 (supporting the practice-side partner) come to mind. Dimension 5 (building capacity to partner), on the other hand, is one where universal applicability is less clear. Although many partnerships most certainly dedicate time and resources to developing their teams’ capacity to partner, in some cases less value may be placed on this dimension simply because of the age of the partnership – younger partnerships might struggle with how to operationalize this. Other times, less priority may be placed on this dimension due to existing capacity: perhaps all partners and organizations involved in the work are already quite advanced in terms of RPP-related skills, so there is less need to focus on this aspect of the work. 

And this brings us to the focus of this particular article: Dimension 4, producing knowledge that can inform educational improvement efforts more broadly. In stark contrast to Dimension 2 (conducting rigorous research to inform action), Dimension 4 is not focused on local activity but rather the opposite – activity everywhere else. In particular, while Dimension 2 asks a partnership to be hyper-focused on its local partners and use their data to support their improvement efforts, Dimension 4 asks a partnership to additionally be thoughtful in sharing these local findings more broadly. Given the resource constraints that most RPPs face, including time, funding, or even capacity, can we reasonably expect them to prioritize the spread of knowledge beyond the borders of the RPP? Would we be ok with calling an RPP “ineffective” if they failed to create artifacts meant to engage stakeholders they did not know, in a different state, operating under different rules, and working towards different aims?

Curious about what fellow meeting attendees thought about these possible tensions in Dimension 4, I followed up with many of the folks at our January meeting, asking them to respond to the following prompt:

Should RPPs be responsible for producing knowledge that informs education efforts more broadly? Why / why not?

While the responses in the next section are not exhaustive by any means, they represent a variety of different partnership models and RPP actors. As you’ll discover, there does appear to be some agreement that this is an important endeavor for RPPs to consider. There are some disagreements about the value of this activity, though, and there are very practical challenges involved in carrying out this dimension. Let’s take a closer look at how folks responded (note that any bolded emphasis in the responses below was added by me).

Agreeing with Dimension 4

Several of the colleagues that we invited to participate in sharing their thoughts on the prompt listed above agreed that RPPs should in fact be responsible for producing knowledge that informs education efforts more broadly. 

Carla Stevens, former Assistant Superintendent for the Houston Independent School District (HISD) and past Associate Director of the Houston Education Research Consortium (HERC) said:

“While I don’t believe it should be “Goal 1” for an RPP, I do think that all research in general should inform education efforts more broadly if at all possible. From my perspective on the practice side representing the school district, the primary focus of our RPP should be on our jointly developed research agenda which serves to address challenges of the district in closing the achievement gap for all students. However, in doing the research that directly impacts the district, it makes sense to share the findings to a broader audience as the challenges faced by the district are most definitely not limited to this one district. Findings from studies, even ones that are very specific to a local context, can still be used to inform efforts in other contexts.”

Jessica Vasan, Manager at HISD and district liaison for HERC, agreed:

“In public education, we have scarce resources and yet probably reinvent the wheel more often than we should. So wherever feasible, yes, RPPs should produce knowledge to inform larger education efforts. The K-12 educators I have encountered and been fortunate enough to work with over the last twenty years have always been hungry to learn about the latest evidence, rigorously produced, that might inform their practice. We need to make it more accessible for them, and we also need to help them understand how generalizable (or not) the findings may be. They learn about a study and ask, “How would this apply to my students? To my classroom/school/district?” Context matters, and yet the science of reading, for example, is universal. RPPs hold great promise in producing relevant research using local data that builds on what we’ve already learned in the broader field.”

Yuri Kim, Program Officer at the Bill and Melinda Gates Foundation, additionally identified the following reasons for why we might place an expectation of broader reach on RPPs:

“I do believe RPPs should aim to produce knowledge that informs education efforts across the field. A review of studies of school and district leaders indicates that research is difficult to access. There is a clear need for rigorous, evidence-based practices in education and RPPs can fill this need by producing knowledge that informs education efforts across the field.

Here’s why RPPs are uniquely positioned to generate relevant and accessible findings that can be useful and usable in other communities:

  • RPPs are purposely designed to create usable and accessible research. The defined problems of practice are practitioner-focused and lead to findings that impact decision-making in education.
  • Sharing knowledge beyond an RPP’s own network can lead to the expansion of bodies of research in the field. A great example of this is the impact of the 9th Grade On-Track Indicators research led by the UChicago Consortium.
  • It is an issue of equity – sharing findings or tools with other communities that may not have the resources to conduct the same kind of research.
  • Finally, it may benefit the RPPs themselves by increasing their own capacity to communicate and disseminate their work within their network.”


Rafi Santo, a learning scientist focused on the intersection of digital culture, education, and institutional change, made an additional distinction regarding Dimension 4, RPPs, and consulting work:

“Yes, I think this [Dimension 4] is a defining feature of RPPs. Once you take this feature out, then you are essentially left with really good evaluation and design teams. This is not necessarily a problem, but they are not RPPs – they are purely focused on the local aspect of the work. RPPs are instead simultaneously about the local problem of practice, identified collaboratively with Rs and Ps, engaging in work to improve those local outcomes AND informing broader stakeholders. Part of the rationale for including an external audience component to the work is that in many good, local evaluations not done in RPPs, there is knowledge lost. How to translate local inquiry into societal knowledge is the shift we are trying to make with RPPs. And this type of research has high value, especially compared with how research is typically done: for example, actual institutional realities are taken into account with work done in RPPs, things that aren’t on the minds of traditionally-based researchers.”

As the four responses above make clear, there are a number of important reasons why, at least in theory, RPPs should fully embrace the responsibility of producing knowledge that can inform the work of others. This is likely to hold even when it might be slightly more challenging to do so, as Fabienne Doucet, Program Officer at the William T. Grant Foundation and Associate Professor of Early Childhood and Urban Education at New York University (on leave), pointed out:

“It’s not a yes or no answer. Based on the funding strategies of certain RPPs, different funders might feel differently about this dimension. William T. Grant, for example, is a national-level funder, so one of the things we consider for funding is the potential for learning beyond just this one project. In general, there is a hope that the work that funders support will have lessons that apply more broadly. That said, if RPPs are supported by local funders, they may be less concerned that findings from an RPP would be applicable to broader audiences. For example, a Texas-based funder might not prioritize the relevance of the work for Illinois. Generally speaking, though, I think it’s a good goal for an RPP to have. Obviously there will be specific local needs that need to be taken care of but I would hope that we are engaging in an effort to contribute to a larger body of knowledge. The endeavor of research and practice is such that we can learn things so that other people don’t have to learn them all over again.”

Questioning Dimension 4

We next turn to potential cautions participants raised when considering this dimension.

Yetunde Zannou, Program Manager for the Center on Research and Evaluation at Southern Methodist University, shared this observation:

“I puzzled over “should” and think I’d abandon that to say there’s value in RPPs thinking broadly, but acting locally. The purpose of the partnership is to collaboratively solve local problems of practice. That should still remain the top priority. Thinking “broadly” would mean considering how to document change efforts, routines, etc. so that others can consider those efforts and take them up as they see fit in other settings. I would not advocate that RPPs produce knowledge just for their immediate context because even descriptions of how an improvement effort was designed, implemented, refined, and sustained in a real-world setting provides valuable information about what it takes to make an innovation work.”

What Yetunde highlights is that there is a particular kind of knowledge that is worth sharing from RPP work: it’s not the findings from an evaluation per se that are interesting to another practice-side team. It’s the work that was done around the problem at hand that is most informative (i.e., “change efforts, routines” from Yetunde’s response). While some of this is echoed in “Indicator 2” for this dimension in the Framework (i.e., “the RPP develops and shares new tools and/or routines that can be adapted to support improvement work in other settings,” p. 15), the Framework does not explicitly name the documentation of “change efforts” that Yetunde identifies as key to being of broad interest.

Enrique (Henry) Suárez, Assistant Professor of Science Education at UMass Amherst, also an RPP participant / organizer, described this possibility in further detail:

“I tend to be optimistic yet cautious about broader generalizations made from educational research. To be clear, I do think that educational research, in general, and the work of RPPs, specifically, could make significant contributions to how we understand and organize activities around teaching, learning, development, and even schooling. But, to me, these contributions could come more in the form of expanding and/or refining current education theory and practice, rather than ready-made, plug-n-play, decontextualized ideas, designs, and/or strategies. 

For one, I worry that RPPs may organize their meaning-making around producing generalizable knowledge in ways that incentivize obviating or smoothing out the intricacies from their particular sociopolitical context. Moreover, I worry about trying to generalize the knowledge from one RPP in one particular context to another RPP in a different context, without first critically understanding the particularities upon which that knowledge is productive.

I think I approach this more from the perspective of Design-Based Research (DBR), where the goal is not necessarily to make sweeping statements about teaching and learning, but rather design interventions that change specific aspects of the learning environment and, from there, humbly contribute to theory and practice. The trick, I think, lies in navigating the tension between co-constructing that localized and contextualized knowledge, understanding its limitations, but also looking prospectively at what aspects of that knowledge could travel to other contexts (similar or different); maybe even anticipating how that knowledge may break when operationalized elsewhere (what some DBR folks refer to as “putting knowledge in harm’s way”). 

And I think that’s exactly where the responsibility of RPPs should/could lie: producing knowledge that addresses the jointly-identified opportunities for refining practice in their particular context, while also keeping an eye out for how their meaning-making could be helpful to others. And I can even imagine various concentric circles of generalization based on the knowledge produced by RPPs of certain scales: partnering with individual teachers could produce knowledge that could be taken to other classrooms; partnering with individual schools could produce knowledge that could be taken to other buildings; partnering with specific districts could produce knowledge that could be used by other systems.”

Finally, there could still be an ideological argument for not taking up the goals outlined in Dimension 4 from the Henrick, et al. Framework. 

Adam J. York, Research Associate at the National Education Policy Center and Research Hub for Youth Organizing and Education Policy, explained:

“In our recent study we interviewed folks working in RPPs and other types of partnerships that were similar to RPPs in some ways. We were focusing on partnerships that included students, parents, and community groups and many employed participatory methods (i.e. community-based participatory research & youth participatory action research). In our conversations, we heard skepticism over attempts to scale-up findings and apply them out of context. That is, people were cautious about taking solutions and interventions that were developed in one place and attempting to apply them in other places. This is especially true for research projects that are closely attending to, and building on, local histories of struggle and social movements for more equitable and just education systems. Part of these histories include a legacy of top-down interventions from outsiders, including examples of interventions that have harmed students in the long run. However, we also heard examples of powerful sharing across contexts when it came to relating lessons from methodological innovation and strategies for data utilization. For example, a project that has success in encouraging transformative dialogue between community organizers and school district administrators could benefit the broader field through sharing the types of data and approaches to analysis that were most productive in those conversations, even if the specific findings and conclusions were context specific. Similarly, another area where information across settings could help the field is approaches to designing for multiple stakeholders sharing power within projects. There are lessons emerging in research design that can be useful to those trying to get started building more equitable research partnerships.”

As is clear in these insights, the what and the who of Dimension 4 can matter a great deal for the applicability and importance of this dimension to any particular RPP. We next turn to some of the practical challenges related to carrying out aspects of Dimension 4.

Challenges to implementing Dimension 4

As we know, RPPs work very hard with their local partners to customize artifacts specific to their needs. Asking RPPs to take up the same exercise for a broader, ill-defined audience – one for whom the research may never be relevant – seems like a tall order. This may be especially burdensome for partnerships that have limited capacity, including resources or time; newly emerging partnerships may struggle with this especially, not because they don’t value sharing their work more broadly but because it’s simply impossible given the demands on their time in launching the RPP. 

To fully understand the scope of the challenges, it is helpful to borrow “supply-side” and “demand-side” framing from economics. That is, Dimension 4 is best thought of as a two-sided problem: on the supply-side, the burden of “producing knowledge” that can “inform educational improvement efforts more broadly” is placed on the RPPs themselves, the suppliers of that knowledge. However, this second phrase, “inform educational improvement efforts more broadly” is also a demand-side problem, where the success of “informing” is dependent on the users of that knowledge. To state it more precisely, one cannot simply assume they have informed someone else and call it a day – the person receiving the information has to confirm they have indeed been informed. Hence, the two-sided nature of this dimension.

This subtle distinction, that users of research form part of the measure of success on this dimension, is not raised in the Framework. As written, the indicators of progress described in the Framework focus exclusively on the supply-side – that is, if RPPs produce a variety of artifacts that can (i.e., have the potential to) inform a broad range of education stakeholders, that is sufficient to be considered “successful” on this dimension. To put it more succinctly, an RPP could do everything “right” in terms of what is described in the dimension and yet fail to inform “education efforts more broadly” since this is the part that depends on the user. Working on this dimension, then, can carry high costs with uncertain benefits, from a cost-benefit perspective.

A related issue that is not taken up in the Framework is what is meant by “more broadly.” This description is quite vague, perhaps to allow a greater number of conditions to meet this criterion. How broad does an RPP need to reach with their work, if they are attempting to be successful on this dimension? For example, for a local, place-based RPP with strong ties to the community they are situated in, would “more broadly” simply be the neighboring community or district? Or is there an expectation that they should try to produce work that would inform state-level or even national-level audiences? Moreover, are both practitioners and researchers the target audience? 

Stacey Sexton, an RPP evaluator, researcher of RPPs, and project manager for RPPforCS, additionally highlighted how these expectations might differ for a newly formed RPP versus a longstanding one:

“I’m not sure that it [Dimension 4] is a reasonable expectation for emerging partnerships. For long-standing, mature partnerships I think it is a more reasonable outcome to look for because they might have greater capacity to look outside of their own immediate context. I don’t think that I could support an RPP being penalized if they do not prioritize informing education efforts more broadly, but I do think that RPPs should want to do this and should include it as part of their maturation plans.”

We should point out there are existing channels of dissemination that might, in fact, lead to broader use. For example, academic journals are built for just that – to spread knowledge. While this is a narrow audience, it is perhaps a relatively low-cost option for many partnerships, especially for those based at universities, where publishing in peer-reviewed academic journals is a must. The drawback with this option, though, is that we might wonder to what extent academic journals are built to “inform educational improvement efforts” (emphasis mine). Last I checked, an overwhelming number of journals remained behind paywalls – inaccessible to those in the world of practice, i.e., those most likely to focus on improvement.

In terms of the demand-side, we have heard from our practice-side friends (i.e., those working in districts or state education agencies) that research produced elsewhere is typically not as useful as research produced using their data for many reasons, most of which can be collapsed into those relating to relevance or accessibility. For example:

  • The research question itself may not be of interest to the district or SEA or is not relevant given their current priorities.
  • The population of students included in studies produced elsewhere differ substantially from their students, making it difficult to extrapolate how those findings might apply to their context.
  • Similarly, the overall context in which the other study takes place might be too dissimilar. For example, we have heard LEA leaders note that other districts or SEAs are generally operating under a different set of rules. So, for example, the Houston Independent School District is governed by Texas state laws; they are perhaps less interested in what Chicago is doing because Illinois state laws differ. (Note that by this same logic, however, Houston ISD would be more interested in what Austin ISD is doing, since they are in the same state.)
  • A few additional potential barriers that practice-side folks may encounter when accessing externally produced research include the research sitting behind a paywall or the readers of the research lacking the training or time necessary to translate and interpret the piece. 

Although these demand-side challenges are applicable to any situation involving practice-side teams attempting to translate, interpret, and apply externally produced research to their contexts, they carry greater weight when the effectiveness of an RPP depends partly on whether their work is taken up by these teams. Further clarity around who is included in “more broadly” would be helpful, as would a recognition that demand-side conditions might still prevent external users from being informed by an RPP’s work.

Working towards Dimension 4

In this final section, we discuss a number of strategies partnerships may wish to consider as they work towards Dimension 4, recognizing some of the cautions and challenges raised above. Special to this section, we’ve asked Erin Henrick, President of Partner to Improve and lead author of the Henrick, et al. Framework, and Paul Cobb, Research Professor in Math Education and Professor Emeritus in the Department of Teaching and Learning at Vanderbilt University as well as co-author of the Henrick, et al. Framework, to share their thoughts on how partnerships might proceed with Dimension 4.  

We begin with timing: One additional aspect that is not discussed directly in the description of the dimension is when partnerships should take Dimension 4 into consideration. For example, should RPPs take broader impacts into account at the same time as they develop their projects – that is, on “Day 1”? Or is it perfectly acceptable to merely do the work one intends to do with local partners, and later, work on translating the research or simply disseminating the findings widely? The latter scenario is somewhat problematic in that research conducted in this vein was never intended to be applicable or relevant to anyone immediately beyond the project; consequently, we should not be too surprised if this research is never taken up by those outside of the partnership. 

In terms of addressing this potential issue, Erin and Paul suggest:

“In our view, the goals of supporting the partner practice organization and producing knowledge to inform education efforts are complementary and should be embedded in RPP study designs from the beginning.

One way for RPPs to produce knowledge to inform education efforts more generally is to: 1) explicitly frame the local problem as a case, and 2) identify the relevant aspects of the local context. The second step is critically important, so that others can take the contextual information into account and adjust the design to the context in which they are working.

Design-based research and design-based implementation research accomplish both goals at the same time. For example, the MIST Project, an RPP focused on understanding the conditions necessary to support ambitious and equitable math instruction at scale, studied four cases of large urban districts seeking to improve the quality of instruction for all students. We designed the study to include annual feedback and recommendations cycles to support the improvement efforts of our partner districts but our study design also included longitudinal analyses to develop a broader understanding of what it would take to improve instruction across a large urban school district. In this way, our study design made concurrently achieving these two goals possible. In our work with district leaders and schools, we supported our partners while also framing the agreed upon problem of practice as a case of a broader issue that is likely to be relevant to a significant number of other districts.”

Second, partnerships will need to take into account the what: As shared previously, thinking through which aspects of the RPP effort need to be documented in order for the knowledge to be taken up elsewhere is an important step. According to Erin and Paul:

“It is important to share with others what was learned about how improvements can happen. Describing the processes and mechanisms for how improvements happen can help others working on similar problems. It is equally important, when writing up research findings to share more broadly, that RPPs clarify the context, so others can adjust the improvement ideas for their own context. For example, when describing the context in a case of district leaders working to improve the quality of teacher collaborative meetings across a district (something that is very relevant to K-12 educators across the country), it is important to describe prior initiatives and professional development related to teacher collaborative meetings within the district. It is then critical to describe the processes and mechanisms to help others understand: What did it take to develop productive teacher collaborative meetings in this context? It’s this kind of sharing that will push forward improvement work on complex problems in challenging settings.”

Third, how might partnerships allocate their resources to supporting this dimension, given that funding rules and priorities might not support this particular effort? Two potential strategies emerge in this regard: On the one hand, the funders themselves might have a role to play here. Erin and Paul write:

“At present, the nature of the funder matters for whether informing the broader improvement community is a priority. But perhaps funders not emphasizing broader contributions need to reconsider. From our point of view, if you don’t approach this work with the mindset of contributing to broader understanding, a huge opportunity is being missed. We strongly believe that RPPs can learn from other RPPs working on similar issues. If RPPs working on similar issues can share and learn from one another, everyone benefits in the long run.”

On the other hand, RPPs might leverage the dissemination and engagement infrastructures developed at NNERPP – we’ve implemented multiple support strategies to help our members and the field more generally have greater access to the work being produced in the RPP space by reducing the costs associated with sharing work. These include actively promoting our members’ efforts on Twitter, sharing our members’ recently produced research in our twice-monthly newsletter, inviting members to discuss their work with the NNERPP community in our monthly virtual brown bags, updating the NNERPP Extra website every Monday with recent member headlines, adding a sortable repository of these headlines, and producing the Research Insights articles featured in each issue of NNERPP Extra where we synthesize related work from our members. 

And while all of these dissemination efforts relate directly to the research produced by our members, we also work hard to pull together RPP-related knowledge our members have in order to advance our knowledge of how RPPs work and how they can work better.

As Ruth N. López Turley, Professor of Sociology at Rice University and the Founder/Director of the Houston Education Research Consortium (HERC), shared:

“RPPs rely on each other to get started, to overcome the continuous stream of challenges, and to keep learning and improving by sharing both research findings and partnership practices with each other. This is so important that I believe that any RPP that attempts to do this very difficult work apart from a support network of other RPPs is in a very precarious situation. This is why NNERPP exists, to make sure that all RPPs, new or mature, have the support they need. Sharing knowledge with other RPPs and other stakeholders does not need to be difficult or time-consuming. NNERPP exists to facilitate this type of information sharing and can do so in a way that is not only helpful for those receiving the information but also for those providing it.”

In Closing

As we’ve seen, there are plenty of reasons why Dimension 4 from the Henrick, et al. Framework should be applicable to all RPPs. And at the same time, we’ve seen why some RPPs might choose not to work on this dimension and why that may be perfectly reasonable as well. In any case, we do encourage RPPs to reflect on their current capacity as well as partnership goals to gain a better understanding of the affordances and constraints influencing their efforts as they work towards fulfilling aspects of Dimension 4. 

As Erin and Paul share:

“As an RPP community, it is important to consider our collective responsibility to not just help the communities we are working with, but to share what we are learning to support other communities without access to expertise and resources available in their own RPP. If all RPPs decided to only focus on their own context, everyone would be reinventing the wheel and not learning from what other people have learned, and we believe the field would suffer. We contend this is what is needed to equitably support education improvement efforts across the country and believe RPPs can support and facilitate this work.”

What do you think? Before you go, we invite you to take a moment and share your own thoughts on the prompt “Should RPPs be responsible for producing knowledge that informs education efforts more broadly? Why / why not?”. If you’d like to share your insights on this with us, please do so here!


Paula Arce-Trigatti is Director of the National Network of Education Research-Practice Partnerships (NNERPP)​. She wishes to thank Manuelito Biag, Paul Cobb, Fabienne Doucet, Erin Henrick, Yuri Kim, Ruth López Turley, Rafi Santo, Stacey Sexton, Carla Stevens, Enrique (Henry) Suárez, Jessica Vasan, Adam J. York, and Yetunde Zannou for their important contributions to this piece.


Suggested citation: Arce-Trigatti, P. (2020). Should RPPs Be Responsible for Producing Knowledge that Informs Education Efforts More Broadly? A Multi-View Take on Dimension 4 from the Henrick, et al. RPP Effectiveness Framework. NNERPP Extra, 2(1), 8-17.

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships  |  nnerpp.rice.edu