DEVELOPING VALID MEASURES OF RPP EFFECTIVENESS: AN EVIDENCE-CENTERED DESIGN APPROACH

Caitlin C. Farrell | National Center for Research in Policy and Practice

Volume 3 Issue 4 (2021), pp. 2-5

Introduction

In the past decade, research-practice partnerships (RPPs) have moved from rare examples to widespread models in the field. Despite this growth, we do not have a good sense of how well RPPs are achieving their aims, nor do we have validated tools to help RPPs monitor their progress toward their goals. Thinking more deeply about the effectiveness of RPPs is necessary to understand whether they are a good long-term “bet” that can indeed contribute, as intended, to equitable improvements in educational systems. 

The field-driven RPP Effectiveness Framework developed by Henrick and colleagues in 2017 offers an important starting point for mapping how RPPs make progress toward the common outcomes that partnerships pursue, as well as how they attend to equity in doing so. The next step is to develop a set of valid measures that are tied to these outcomes. Without such measures, it is hard for research, practice, and community partners to adjust their work toward more productive ends. To fill this gap, the National Center for Research in Policy and Practice and NNERPP are collaborating on a project funded by the William T. Grant Foundation to develop measures that can help an RPP assess its progress toward its goals. 

In this edition of the Research Insights series, I describe our recent efforts and explain how we have used Evidence-Centered Design (ECD) to guide our work.

Our Design Principles

We aim to develop measures that can distinguish between RPPs that are just developing and those that have mature practices in key areas, in order to provide any RPP with useful feedback to help strengthen their work together. Measures also need to be valid and connected to use, that is, to how they will be interpreted to inform decision-making and action. Below are the design principles that have guided our work to date:

  • We recognize that participatory methods support the development of tools that reflect the perspectives and needs of the end-users. Hence, our approach should involve RPP members and their perspectives at all stages of the work. 
  • We recognize that RPPs can benefit from low-stakes, formative evaluation to help them progress. Therefore, our tools should support this low-stakes formative evaluation of individual RPPs.
  • We recognize that RPPs have different approaches to their work, including research alliances, design-based partnerships, community-engaged partnerships, networked improvement communities, and hybrids of these approaches. Hence, our tools should be relevant to and useful for any of these various types of RPPs.
  • We recognize that RPPs may have made differing degrees of progress on their goals based on how long they have been working together. Therefore, our tools should be relevant to and useful for RPPs that have been working together for different time frames.
  • We recognize that RPPs include partner members coming from different organizational types and professional roles. As such, our tools should be able to capture multiple perspectives from those centrally involved in the RPP, including research, practice, and community members.
  • We recognize that RPPs’ progress on their goals is not binary but developmental. Therefore, our tools should allow for RPPs to move between different stages on the developmental trajectory as conditions change.
  • We recognize that more equitable relationships are central to all aspects of RPP efforts. Therefore, we attend to equity within each dimension of the Henrick et al. (2017) framework.
  • We recognize that RPPs may have multiple projects and lines of work. Hence, our tools should invite RPP members to select a single project to focus on for their responses. Specifically, we invite RPP members to select a project that (1) reflects the core aims of the partnership and (2) about which the RPP would like more information.

Background: Evidence-Centered Design

In this project, we draw on a principled approach to assessment design called Evidence-Centered Design (ECD) to develop, refine, and test tools for measuring an RPP’s development. The ECD approach centers the goals and purposes of the measurement activity and works to ensure that evidence is gathered and interpreted in ways that are consistent with those goals (Mislevy & Haertel, 2006; Mislevy et al., 2003; Mislevy et al., 1999). ECD differs from older approaches that treat validity and reliability solely as properties of the instruments themselves. In contrast, ECD foregrounds matters of validity for particular purposes, in our case, for low-stakes, formative evaluation of individual RPPs. It is construct-centered: at all phases of measure development, a clear sense of the respective construct guides decisions about what is to be measured, when and where measurement will take place, what items will be used to bring forward information about constructs, and how scores generated from responses will support valid interpretation of data.
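This construct-centered stance is often expressed as an explicit chain linking the claims we want to make about a construct, the evidence that would support those claims, and the tasks or items that elicit that evidence. As a loose, hypothetical illustration of that chain (a sketch only, not a formal ECD specification and not an artifact from this project), the linkage could be recorded as a simple data structure:

```python
# Hypothetical sketch of the claim-evidence-task linkage that ECD-style design
# documentation keeps explicit. Names and example content are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ConstructSpec:
    construct: str                      # what we want to measure
    claims: list[str]                   # claims we would like to make about an RPP
    evidence: list[str]                 # observations that would support those claims
    tasks: list[str] = field(default_factory=list)  # items/tasks that elicit the evidence


trust = ConstructSpec(
    construct="Building trust and cultivating relationships",
    claims=["Partners follow through on commitments to one another."],
    evidence=["Self-reports that agreed-upon tasks are completed on time."],
    tasks=["Survey item asking which statement best describes the partnership."],
)

# Design documentation like this can be checked to confirm that every claim has
# at least one source of evidence and at least one task that elicits it.
assert trust.evidence and trust.tasks
```

Keeping this chain visible is what allows later decisions about items, scoring, and interpretation to be traced back to the construct they are meant to inform.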

Broadly, ECD involves three sets of activities:

(1) It begins by defining constructs to be measured, followed by identifying the range of situations and behaviors that might elicit evidence of these constructs. In our work, we drew on the five dimensions of the Henrick et al. (2017) RPP effectiveness framework, and we consulted research evidence on RPPs, past measures and tasks, and input from relevant stakeholder groups to describe our constructs.

(2) Next, the team selects or creates tasks or measures to gather observable evidence of desired attributes for those constructs, such as valued activities, dispositions, or skills. In our case, we developed a set of survey and interview measures through an iterative design, testing, and revision approach; our field test of those measures involved close to 300 partners from 65 different RPPs.

(3) Finally, the team analyzes how well the measures produce valid and consistent evidence for the constructs and determines what kind of guidance supports sense-making and interpretation of results for different purposes. Because ECD unfolds iteratively, the team can use the documentation produced along the way to adjust these components, refining measures as new information emerges from the performance of items or tasks. We are currently examining the validity of the measures, as well as producing guidance on when and how these measures can be used in practice; one small, illustrative piece of this kind of analysis is sketched below.
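To make that concrete, here is a minimal sketch of one familiar internal-consistency statistic, Cronbach's alpha, computed from hypothetical Likert-scale responses. The function, the item block, and the data are assumptions for illustration only; they are not our actual analytic procedures or field-test results.

```python
# Hypothetical example: internal-consistency check for a block of survey items.
import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of numeric scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)


# Illustrative data: 6 respondents answering 4 Likert-type items
# (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

Reliability evidence of this sort is only one strand; in ECD it sits alongside evidence about whether items actually elicit the intended constructs for the intended low-stakes, formative purpose.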

An Example: Measuring Trust and Cultivating Relationships

Below, we share a draft version of our hypothetical developmental progression for one of our key constructs, building trust and cultivating relationships (outcome 1 of the Henrick et al. framework; Table 1), and the progression for attending to equity when building trust and cultivating relationships (Table 2). In cultivating trust and relationships, partnerships can attend to equity by addressing historical imbalances of power among participating individuals, organizations, or communities. Imbalances of power can be related to historical relationships among research and practice organizations as well as to social identities related to race or ethnicity, language, gender, sexual orientation, ability, citizenship status, age, and more. Addressing these imbalances of power involves supporting authentic participation and voice among partnership members, and specifically elevating the perspectives of those with less power, as opposed to simply aiming for “everyone having an equal voice.”

Table 1. Description of the developmental progression for building trust and cultivating relationships in RPPs or RPP projects.

FORMING
  • Partners have opportunities to follow through on their commitments to one another.
  • Partners learn about each other’s contexts and establish routines of interaction and communication.
  • Partners may not yet feel comfortable raising difficult issues with one another.

MATURING
  • Partners generally follow through on their commitments to one another.
  • Partners have a general understanding of each other’s contexts, perspectives, and commitments through established routines of interaction and communication.
  • Partners engage in honest discussion of difficulties faced by the partnership.

SUSTAINING
  • Partners almost always follow through on their commitments to one another.
  • Partners have deep understandings of each other’s contexts, perspectives, and commitments through well-established routines of interaction and communication.
  • Partners have worked through both internal conflicts and challenges to achieving goals, with relationships strengthened.

Table 2. Description of the developmental progression for attending to equity when building trust and cultivating relationships in RPPs or RPP projects.

FORMING
  • Partners identify differences in historically limited social power of participating individuals or organizations in the work together.

MATURING
  • Partners elevate the voices of participating individuals or organizations with historically limited social power in their work together.

SUSTAINING
  • Partners regularly address the historically limited social power of participating individuals or organizations, removing barriers and creating structures that support authentic involvement, participation, and voice.

From these trajectories, we identified situations where we might observe or gather evidence related to an RPP’s placement on the trajectory. For instance, the notes from an RPP’s regular meetings could provide insight into the routines of interaction that build trust, or into the ways in which different voices are heard. As for key informants, we suspect leaders within each partner organization would be critical sources for evaluating trust, although the perspectives of a wide range of stakeholders would be appropriate for evaluating overall trust and shared commitment. 

Two sample survey items that might help elicit information about an RPP’s placement on this trajectory include:

1. What best describes your partnership? 

  • a) My partners are reliable in completing tasks within an agreed-upon time frame.
  • b) My partners sometimes complete tasks on time.
  • c) We have established some deadlines for our partners’ tasks, but they have not come up yet.
  • d) My partners do not reliably complete tasks on time.

2. Our partnership privileges the ideas of practitioners, community members, or other stakeholder groups in our meetings. [Strongly Agree/Agree/Neither Agree nor Disagree/Disagree/Strongly Disagree]
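To show how responses like these might eventually be connected back to the developmental progression in Table 1, here is a minimal sketch that maps the options of sample item 1 onto tentative stages. The mapping is a hypothetical assumption for illustration; it is not the project's actual scoring guidance.

```python
# Hypothetical sketch only: the mapping below is an assumption for illustration,
# not the project's scoring rubric for this item.
from typing import Optional

# Map each response option for sample item 1 to a tentative stage on the
# trust-building progression (Table 1). Option (c) is treated as providing
# no evidence, since the agreed-upon deadlines have not yet been tested.
ITEM_1_STAGE_MAP: dict[str, Optional[str]] = {
    "a": "Sustaining",  # partners reliably complete tasks in the agreed-upon time frame
    "b": "Maturing",    # partners sometimes complete tasks on time
    "c": None,          # deadlines set but not yet reached: no evidence either way
    "d": "Forming",     # partners do not yet reliably follow through
}


def stage_for_response(option: str) -> Optional[str]:
    """Return the tentative developmental stage suggested by one response."""
    return ITEM_1_STAGE_MAP[option.lower()]


if __name__ == "__main__":
    for option in ["a", "b", "c", "d"]:
        print(option, "->", stage_for_response(option))
```

In practice, no single item would place an RPP on the trajectory by itself; evidence from multiple items, informants, and artifacts such as meeting notes would be combined.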

In Conclusion

It is our goal to develop valid and reliable measures that RPP members can use to gather information that helps them evaluate their partnership and make progress on their goals. As next steps, we are currently working through our collected data to make sense of the information and testing our constructs and claims to establish the reliability of the measures. We hope this article has provided some insight into this work and the value of approaching it using Evidence-Centered Design. We welcome any feedback on these efforts and look forward to continuing to involve the NNERPP community in this work.

Acknowledgement

This work is funded through the generous support of the William T. Grant Foundation. Our team includes Caitlin Farrell, Bill Penuel, Robbin Riedy, Julia Daniels, Kristina Stamatis, Kristen Davidson, Sarah Wellberg, Paula Arce-Trigatti, and Jim Soland. We are so appreciative of all NNERPP members who have been involved in the project to date.

Caitlin Farrell is director of the National Center for Research in Policy and Practice (NCRPP).

 

Suggested citation: Farrell, C. (2021). Developing Valid Measures of RPP Effectiveness: An Evidence-Centered Design Approach. NNERPP Extra, 3(4), 2-5. https://doi.org/10.25613/S6FJ-7X88

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships  |  nnerpp.rice.edu