RPP EVALUATION: WHAT DOES AN EVALUATOR DO?

Stacey Sexton | SageFox Consulting; Erin Henrick | Partner to Improve; Madeline Noya | Aldea Analytics; and Danny Schmidt | Partner to Improve

Volume 3 Issue 4 (2021), pp. 6-10

INTRODUCTION

Since its inception in 2017, the National Science Foundation’s (NSF) Computer Science for All (CSforAll: Research and RPPs) program (hereafter CSforAll: RPP) has funded more than 125 unique projects, yielding many new RPPs in various stages of development. Given the emergent nature of our collective knowledge of RPPs and evaluation, we were motivated to better understand, and thus better support, external project evaluators associated with CSforAll: RPPs.

Specifically, we were interested in further exploring 1) characteristics of CSforAll: RPP evaluators and evaluation plans, 2) evaluation activities, 3) evaluation use and impact, and 4) RPP evaluator needs. To begin to accomplish this, we reviewed evaluation plans from 32 CSforAll: RPP proposals and conducted interviews with 7 external evaluators representing 14 RPPs. The participating projects, all of which are members of the RPPforCS community, come from the first three cohorts of the CSforAll: RPP program. The analysis includes 10 large grants (maximum $2M), 16 medium grants (maximum $1M), and 6 small grants (maximum $300K). Here we share initial findings from this exploratory study and conclude with reflections intended to further support the RPP evaluation community.

BACKGROUND

Evaluation is the “systematic investigation of the worth or merit of an object” and involves gathering and analyzing data to inform learning, decision-making, and action in order to contribute to the improvement of a program or policy (Weiss, 1998). NSF expects every CSforAll: RPP to incorporate external review and feedback on the project’s designs and activities, but project teams have the freedom to determine the role of the evaluative body, how (and what types of) feedback is provided to the team, the cadence of evaluation milestones, and how the team uses information from an evaluation. Currently, there are few models for RPP evaluation, so understanding how the CSforAll: RPPs are using evaluation to support their work is a useful topic to investigate.

FINDINGS
(1) Characteristics of CSforAll: RPP evaluators and evaluation plans

External review design. Given the exploratory nature of this study, we first wanted to understand how CSforAll: RPPs designed their external review and feedback plans. 38% of the projects in our sample had only an external evaluator, 31% had both an advisory board and an external evaluator, and 16% had only an advisory board (15% of the evaluation plans did not specify). 50% of the evaluators in the sample were described as affiliated with an independent consulting firm and 19% were affiliated with a university (the remaining projects either used an advisory board model or did not report evaluator affiliation).

Type and focus of evaluation. We then examined the evaluation plans to determine the type and focus of the evaluation. 34% of the plans included both formative and summative evaluations, 31% of the plans included formative evaluation only, and only one plan described only a summative evaluation (31% of the proposals did not specify the type of evaluation planned). In order to understand the foci of the evaluations, we categorized the evaluation questions into three categories: 1) 38% of the evaluation plans included questions to evaluate both the partnership and the overall CSforAll: RPP program, 2) 34% only included questions to evaluate the program, and 3) 16% only included questions to evaluate the partnership (13% did not specify evaluation questions). 

Six out of seven evaluators we interviewed reported investigating the health of the partnership, with a focus on the Five Dimensions of Effectiveness of RPPs (Henrick, Cobb, Jackson, Penuel, & Clark, 2017). While equity did not emerge as a central focus for the CSforAll: RPP evaluations (a surprising finding for us), two evaluators did specifically mention that they pay particular attention to whose voices are being heard, who is participating in RPP discussions, and power dynamics within the RPP, asking questions like “Are we practicing what we preach around equity?” and “Talk about who is at the table. Whose voices are you listening to?”

Evaluator background. In order to understand evaluator backgrounds, we examined the brief biographies included in the proposals. While 25% of the plans did not specify the evaluator’s expertise, 63% reported experience in evaluation, 47% reported having experience in STEM/CS, and 34% of the evaluators reported experience in RPPs. 

Interviewees were asked what characteristics they believed were most important to be an effective evaluator of RPPs. The most common themes related to building relationships with the partners. One interviewee said, “Engender trust first. It’s more important that people on both sides trust you. More important than being right. More important than showing how smart you are.” Strong communication skills, including the ability to report findings in an easy-to-understand and actionable format, emerged as another theme. Another evaluator also emphasized honest communication, saying, “Being a proactive communicator. It lends itself to that critical friend role, as well as just being honest about how things are going.”

Role of the evaluator. Six out of seven evaluators we interviewed described their role as a “critical friend.” One evaluator further clarified this term: “Unlike a traditional evaluation, you’re deeply embedded in the work routines. You have to be, to kind of understand them, that’s part of the process.” One evaluator described their role as a “thought partner” and two described their role as a “broker.” Some interviewees acknowledged that their role has changed over time. One evaluator stated: “Roles will always get renegotiated but that’s the primary work in the first year: figuring out the roles and how to collaborate. The focus of evaluation should be on process and relationships and workflow and maybe later it goes to outcome and impact. I wish there was more focus in RPPs on that evolution.” One interviewee indicated that they tend to play a more embedded role when evaluating an RPP than in traditional research grants. We conjecture this might be due to the complex nature of RPPs, the challenges of understanding how the partnership operates, and the need for RPP evaluators to be responsive.

(2) Evaluation activities

Evaluator activities. An analysis of the evaluation plans revealed the types of activities in which the evaluators planned to engage, as well as the planned data collection methods. 63% reported plans for the evaluator to collect data, 53% reported plans to analyze data, 44% described plans to report results, and 19% described plans to attend meetings (25% of the plans did not specify evaluator activities). Interviewees reported engaging in three main types of activities: collecting data, attending meetings, and providing feedback to partners and NSF. All evaluators interviewed reported collecting data and developing evaluator reports for NSF; half of the interviewees reported also attending team meetings.

Types of data collection. The most common data collection methods described in the evaluation plans were surveys and interviews, each appearing in 47% of the evaluation plans. 31% reported plans to conduct focus groups, while observations and document analysis each appeared in 25% of the plans. 13% of the evaluation plans did not specify data collection methods. Interview data indicated that evaluators typically use a variety of qualitative methods to determine the project’s success along the five dimensions identified in the Henrick et al. (2017) framework.

(3) Evaluation use and impact

Interviewees identified three areas where evaluation feedback made an impact on the RPP, offering specific examples from their own projects.

Improving the functioning of the RPP. Some evaluators reported that evaluation activities and feedback cycles improved the functioning of the RPP by helping to establish clearly defined roles and group norms among partnership participants. Another emphasized the effect on the group’s problem-solving cycle, saying, “From the first year we developed a problem solving cycle. It’s become ingrained in [the organization]. It’s how they orient all of their partnership work.” Another evaluator also mentioned smaller logistical changes that benefited their RPP, saying, “Nuts and bolts things they can change. Simple things like [using] action items.”

Improving research efforts. Some evaluators described how evaluation feedback helped improve the project’s research efforts. One evaluator described how they were able to use survey and interview responses to improve participant recruitment strategies. Another described using evaluation data to improve the parent consent process saying, “We had a suggestion early on on how to make it slightly less onerous.”

Improving program offerings. Finally, evaluators reported their impact on the intervention itself. Some said they influenced the overall design and delivery frequency of professional development workshops based on feedback from program participants. One evaluator described how the program offered “a third round of PD that they weren’t planning to do, and it was totally different from the first two rounds, because part of the evaluation data showed them that their pre-packaged curriculum wasn’t meeting the needs of the teachers.” Another evaluator stated, “They viewed our surveys and interviews as a way to check progress on how things are going. The feedback we provided would influence how the PD would go throughout the school year.”

It is important to note that, in addition to the examples above of ways evaluators felt they had meaningful impacts on their RPPs, some evaluators also expressed a lack of perceived impact on their projects. One evaluator stated, “We write formal reports that get appended to things. Not sure it’s being used to improve their practice.” Another said plainly, “I don’t think it’s changed the function of the RPP.” Yet another described the variability in how feedback is received by different members of the team, saying, “Feedback is not well received by the PI, but I get emails from other project team members explaining it was helpful, thanking me.”

(4) RPP evaluator needs

When asked what tools and supports they would like in order to be more successful, respondents reported being generally satisfied with the evaluation instruments currently available, but expressed a desire for a forum in which to have honest conversations with other evaluators. Five of the seven interviewees emphasized the importance of learning from their fellow RPP evaluators. Several mentioned wanting to learn from the results of this study.

LIMITATIONS

There are several limitations to this study. The proposals we were able to analyze represent a small sample of the funded projects to date. The evaluation plans in the proposals varied greatly in the amount of detail provided about specific methods and approaches, so some of the information we collected was incomplete. In our view, the biggest limitation to understanding the evaluators’ role and impact within the projects is that we were unable to interview other RPP members to understand their perspectives.

DISCUSSION

NSF has allowed significant flexibility in how the external evaluation role is fulfilled, and the findings from this study indicate great variation related to how the role is implemented across the CSforAll: RPP program. Findings suggest that it is important for RPP teams to discuss the role and responsibilities of external evaluation particularly regarding the concept of “critical friend.” From our perspective, one can only succeed as a critical friend if there is agreement across the RPP that 1) practitioners and researchers welcome critical feedback, 2) the evaluator will be included and have enough information to make a meaningful assessment, 3) critiques will be constructive and intended to improve processes and outcomes, and 4) feedback will be discussed openly. 

Additionally, we believe that CSforAll: RPP evaluations would benefit from a more explicit equity focus, particularly given fundamental values related to equity and inclusion ingrained in the CSforAll: RPP program. If evaluators are unclear on what it means to focus on equity in RPP processes and outcomes, then we risk reinforcing, rather than addressing, the disparities that the CSforAll: RPP program is targeting. We believe that RPP evaluators can play a key role in assessing equity in both RPP processes and outcomes. The Center for Evaluation Innovation (2017) outlines a framework and guiding principles for equitable evaluation that we believe are relevant to the CS for All community of RPPs, including ensuring that evaluative work “hold at its core a responsibility to advance progress towards equity” (p. 4).  

This pilot study is a first step towards understanding more about RPP evaluation, including the characteristics of CSforAll: RPP evaluation plans and evaluators, the types of activities RPP evaluators engage in, how RPP evaluation is supporting the goals of the project, and RPP evaluator needs. NSF has created a unique opportunity for RPPs by requiring a critical review process, but we need to learn more about how to support RPP evaluators and how RPP evaluations can be designed and implemented to best support the needs of RPP teams.

Team

Our study team consists of four RPP evaluators with experience across a range of RPP types and evaluation experiences. Stacey Sexton has been an evaluator on three RPPs funded through the NSF and is also the facilitator of the RPPforCS community. Erin Henrick is the lead author of the Five Dimensions of RPP Effectiveness framework and an evaluator of several RPPs within and outside of CS. Madeline Noya has served in the evaluation role in two CSforAll: RPPs, and Danny Schmidt is the research analyst for this study and several CSforAll: RPP evaluations conducted by Partner to Improve.

REFERENCES

Arce-Trigatti, P. (2021). What’s New With RPP Effectiveness? NNERPP Extra, 3(2), 9-14.

Center for Evaluation Innovation, Institute for Foundation and Donor Learning, Dorothy A. Johnson Center for Philanthropy, & Luminare Group. (2017). Equitable Evaluation Framing Paper. Equitable Evaluation Initiative. www.equitableeval.org

Henrick, E. C., Cobb, P., Jackson, K., Penuel, W. R., & Clark, T. (2017). Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness. New York, NY: William T. Grant Foundation.

Henrick, E., McGee, S., & Penuel, W. (2019). Attending to Issues of Equity in Evaluating Research-Practice Partnership Outcomes. NNERPP Extra, 1(3), 8-13.

National Science Foundation. (2020). Computer Science for All (CSforAll: Research and RPPs). https://nsf.gov/funding/pgm_summ.jsp?pims_id=505359

Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies. Upper Saddle River, NJ: Prentice Hall.

Suggested citation: Sexton, S., Henrick, E., Noya, M., & Schmidt, D. (2021). RPP Evaluation: What Does an Evaluator Do? NNERPP Extra, 3(4), 6-10.

NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships  |  nnerpp.rice.edu