IMPROVING THE USE OF RESEARCH EVIDENCE THROUGH COMMUNICATION SCIENCE PRINCIPLES: INSIGHTS FROM RPP COMMUNICATION LEADERS

Communication science offers an audience-centered approach to engaging education policymakers and practitioners with research evidence that can inform their decisions. In our previous issue’s “Deep Dive” article, Itzhak Yanovitzky, Professor of Communication at Rutgers University and an expert in behavior change communication, public policymaking, translational research, and program evaluation, and Cindy Blitz, Research Professor at the Rutgers University Graduate School of Education who facilitates the translation of scientific knowledge into educational practice in K–12 settings, shared four key principles of audience engagement from this communication science perspective.
Here, we continue the conversation from a different perspective, oriented to practical application: In this second and final part of our two-part series examining how principles of communication science can help improve the use of research evidence, we are joined by five communication leaders working in research-practice partnerships (RPPs) to consider how to implement the principles shared by Itzhak and Cindy in an RPP setting.
Before we dive in, meet the communication leads who generously shared their insights and experiences for this article: Megan Dillingham, Communications & Development Manager at the Houston Education Research Consortium; Chelsea Farley, Communications Director at the Research Alliance for NYC Schools; Lila Goldstein, Research Data Analyst Lead at the Northwestern Evanston Education Research Alliance; Jessica Holter, Research Manager at the Tennessee Education Research Alliance; and Sara Slaughter, Associate Director of Communications and Operations at ERA-New Orleans. A few things to note from this mini-introduction: First, “communications” is not necessarily the main or only aspect of Megan, Chelsea, Lila, Jessica, and Sara’s roles. Second, they represent a diverse range of RPPs in terms of location, size, and age. Third, they have been leading their partnerships’ communications efforts for anywhere from a few months to several years. This all makes for different experiences, as you’ll find out in the following paragraphs! These same aspects will likely also shape how and to what extent you might be able to implement communication science principles in your own partnership.
For each of the four principles introduced by Itzhak and Cindy in Part 1 of this series, we asked our five communications leads to address a number of questions about what it looks like (or could look like) to implement the principles “on the ground.” Below, we summarize what they shared with us.
PRINCIPLE #1: THINK “USE,” NOT “EVIDENCE”
>> In short: It is better and more informative to map out users’ evidence use routines than to promote an artificial “use” versus “non-use” dichotomization of evidence.
1. How should we measure “success” in terms of RPP artifacts getting “used,” especially given Itzhak and Cindy’s recommendation to report various levels of engagement from “little to no engagement” to higher levels that might include “frequent, deliberate, systematic, and critical”?
All five of our communications leads have either started to think about or are actively pursuing ways of measuring engagement that go beyond tracking the number of downloads of research briefs or the number of retweets on Twitter posts about the research. Chelsea points out that the “demand” for evidence among key audiences, or audiences’ “responses” to the RPP’s work, can serve as measures of engagement. For example, audiences reaching out with comments or follow-up questions about research publications, or even proposing new, related lines of research, are clear indications of active engagement with the evidence. Chelsea says that her RPP treats invitations from stakeholders in the community or in educational organizations, whether to present research or simply to contribute thoughts and lend a listening ear, as indications of engagement (and Lila plans to do the same). Jessica adds that “the frequency and depth of the conversations” RPPs are having with their partners are valuable indications of the level of engagement with the research.
Megan and Sara emphasize how important engagement with the broader community is for their respective RPPs, and both HERC and ERA-New Orleans have recently intensified their efforts to build closer relationships with community representatives. For example, Sara’s RPP in New Orleans has expanded its advisory board to include community members. Megan also describes HERC’s recent efforts and successes in engaging more closely with policymakers, including meeting with Texas House and Senate decision makers to discuss the RPP’s recent findings on the trajectory from high school and postsecondary education to the workplace. These efforts suggest that one way for RPPs to deepen engagement is to broaden it, connecting and building relationships with more audiences. Chelsea points out that the importance of relationships in and for RPPs is well established in publications from the field and that “stakeholders are more likely to engage with evidence when they know and trust the people producing it.” Sara observes that this also speaks to Itzhak and Cindy’s next point (see paragraph below) about what counts as use: the more the RPP connects with community stakeholders, the more insight it gains into how research findings can and cannot be useful to them.
2. How should we decide what counts as “use,” given the different goals, needs, capabilities, and circumstances of our users? (i.e., how do we carefully account for more than just the R-side’s perspective of what “use” looks like?)
Our communications leads agree there are various ways to think about this, and that RPPs need not limit their goals for, or definitions of, “use” to just one of them. At the same time, the specific purpose or mission of an RPP can give some direction about which kinds of use to prioritize. For example, Jessica’s RPP in Tennessee is built around the concrete mission of helping policymakers at the Tennessee Department of Education make decisions about policy change or implementation at the state level and in districts when the research is relevant. Her RPP, then, cares about whether its research is being considered in this way or is at least helpful for policymakers in thinking about issues based on the data. Chelsea’s definition of use is a bit broader: “I see any kind of engagement with research evidence as a positive.” Acknowledging that “often, there is substantial room for interpretation about what a particular research finding might suggest for policy or practice,” Chelsea suggests that even agreeing on a set of “facts” coming out of a research study can be a good foundation for evidence use in the longer term: Even if stakeholders disagree about their implications, an agreed-upon set of facts can accomplish important goals, including narrowing the set of options under consideration and pointing to new questions that additional research can answer, thereby providing clearer guidance for policy and practice decisions. Lila adds that getting people to “think about issues in a more nuanced way” based on research already counts as valuable “use” of research in her book.
3. How should we think about the artifacts produced by the RPP in relation to the various other forms of evidence our p-side partners might use in weighing a decision? What does “success” look like there?
As Jessica puts it: “As long as it is clear that our primary stakeholders are regularly engaging with our research and considering the research findings as part of the larger policy puzzle, that’s a win for us.” Lila elaborates on the same thought, saying that different pieces of evidence should not be seen as competing with one another; rather, they can complement each other, given the limits of different research studies, forms of evidence, and areas of expertise. Additionally, Jessica points to variables outside of their control (such as the current pandemic) that might affect whether p-side partners consider certain research findings. Even then, information provided by RPP research might be picked up again “down the line when a policy window opens again.”
4. Finally, how should we “map out” users’ evidence use routines, as suggested by Itzhak and Cindy?
In their answers to this question, all communications leads point to the need to talk directly with audiences and ask them about this in some shape or form; Chelsea, Jessica, and Sara point to concrete examples of efforts their RPPs have undertaken to this effect. Sara describes how a conversation with a college counselor revealed that the counselor’s school had run an orientation session for new teachers featuring a gallery walk of key figures from ERA-New Orleans’ research studies posted around the room, to ground teachers new to town in the context of Hurricane Katrina and the New Orleans school reforms. If not for that deliberate conversation, her RPP would never have known that its research was being used in this way, or that there was perhaps a need for additional, more targeted artifacts that could orient those new to the city to the specific local context. Jessica describes how TERA has worked with a communications firm to conduct interviews with key partners specifically for the purpose of learning more about where the RPP added value and where there were gaps, and Chelsea similarly describes the Research Alliance’s launch of a stakeholder survey to assess people’s perceptions of the work and the partnership’s role in New York City. She reflects on how this survey might also be helpful in gathering more specific information about people’s evidence use routines.
PRINCIPLE #2: IDENTIFY THE RIGHT PROBLEM
>> In short: Use of research evidence is enabled by the combination of users’ capacity, motivation, and opportunities to use research evidence. This capacity-motivation-opportunity framework is an effective tool for diagnosing the real problem you need to address in your interventions to improve use of research evidence in policy and practice.
1. From your perspective, which of the three conditions for use identified by Itzhak and Cindy (i.e., capacity, motivation, opportunity) are the greatest challenge for RPPs / your RPP?
Speaking from their experiences at their own RPPs, both Chelsea and Sara name motivation to seek out and use research evidence as their biggest challenge. This is particularly true once you go beyond those practitioners and policymakers directly involved in the development of research projects (who, by nature of their direct work with the project, do have the motivation to consider its findings), Sara points out. Chelsea adds that motivation also seems to be the hardest condition for RPPs to change or influence, especially in a highly politicized climate in which ideology rather than evidence often seems to have the greatest influence on people’s thinking and actions. In contrast, opportunity is the condition an RPP can probably influence the most, for example by making the research easily digestible and readily available in ways that consider the audience’s preferences and needs. However, Lila points to factors that complicate a partnership’s ability to influence or control users’ opportunity to engage with research, such as shifting policy windows, which can leave practitioners and policymakers with less opportunity (such as time) to consult research that has become less immediately relevant.
2. Any ideas or strategies for how to address this?
RPPs might be able to address some of the previously named challenges around the conditions for using research. For example, RPPs can ensure that evidence is part of the public conversation about important education policy issues in order to increase awareness of key findings and to build communities’, practitioners’, and policymakers’ motivation to act on that information, says Chelsea. As an example of how this can be effective, she points to journalist Emily Hanford’s writing about the “science of reading” in 2018 and 2019, which, though the underlying evidence was not new, appealed to key audiences, and this interest and attention in turn created “awareness and pressure to act on evidence that had existed for some time.”
PRINCIPLE #3: KNOW YOUR AUDIENCE
>> In short: Place the target audience and their perspectives (including their capacity, motivation, and barriers and facilitators to using research evidence) at the center of your communication strategy through an audience analysis to effectively promote a specific behavior or practice.
1. Generally speaking, does your RPP conduct any type of audience analysis (i.e., try to understand your different users’ needs, goals, interests, predispositions, and experiences) with respect to the development of RPP products?
Our communications leads agree that the very nature of RPP work prioritizes and facilitates an understanding of audiences’ needs, goals, and interests, at least when it comes to primary target audiences. Because partnership work by definition addresses actual problems practice-side partners are experiencing, and because research questions are developed jointly by researchers and the p-side, ongoing collaboration and communication about the usefulness of the research and the needs of p-side partners is built into the research process from the very inception of a project. Chelsea and Jessica also name other intentional processes and strategies their respective partnerships have developed to capture key stakeholders’ perspectives: a Steering Committee (at the Research Alliance for NYC Schools) and an Advisory Council (at TERA), both of which frequently provide feedback and input on the direction of the work and represent a range of key stakeholders beyond the groups directly involved in a given research project. As Jessica puts it, “our research very much happens within cycles of conversation and feedback with our key audiences.” Chelsea additionally points to the previously referenced stakeholder survey her partnership recently conducted, which offered valuable insights about and from core audiences. Finally, relatively simple measures such as website and social media traffic and analytics give additional insight into the audiences engaging with partnership work in these ways.
2. More specifically, does your RPP engage in audience segmentation? If so, what are the dimensions of your audience that you have found useful to segment on? Or, does your RPP try to tailor products for different audience segments? How so?
For the most part, our communications leads acknowledge that their partnerships don’t do as much intentional audience segmentation “as we probably should,” mostly due to time and capacity constraints. That is not to say that messages and artifacts are not tailored to specific audiences and communication/engagement platforms to a certain extent, often in terms of length, level of detail, and general language. But more detailed audience segmentation is a complicated endeavor. Sara’s partnership has perhaps taken the most strategic approach; she shares its three key audience groups:
- Policymakers, practitioners, and people who can use the partnership’s research to directly effect change
- Academics, media, and influential voices who can use the partnership’s findings to inform their own work, shape discussions, and share the work with others
- General public, including families, students, and community members who are most directly affected by the issues the partnership studies
Chelsea adds that her partnership often segments audiences by professional role and position in the education ecosystem, such as teacher, superintendent, researcher, advocate, nonprofit or community-based organization staff, and so on. Lila thinks about her partnership’s audiences “in terms of what they care about and what kinds of decisions they need to make,” which will differ across research studies. Jessica points out that different types of products may speak to different types of people, so a broad array of products and platforms, such as briefs, podcasts, newsletters, and social media, is likely to reach larger and more diverse audiences, even if audience segmentation is not happening more intentionally. For certain studies, the intended audience is also much clearer than for others, in which case tweets and newsletter language can be crafted in accordance with that audience’s interests. Interestingly, Lila observes that early-stage partnerships with less well-defined artifacts or products might in fact engage in audience segmentation more, saying “we are all audience segmentation all the time!” as a way to determine how research findings will be shared in the absence of pre-defined templates or communication plans.
PRINCIPLE #4: MATCH COMMUNICATION STRATEGY TO BOTH THE PROBLEM AND THE AUDIENCE
>> In short: Once you have determined the problem (capacity, motivation, or opportunity) and know your audience well, your actual communication comes into play. Here, the audience engagement continuum ranges from simple exposure to actual engagement – building your audience’s interest, motivation, perhaps even enthusiasm to use research evidence. The best engagement strategy by far is to partner with your target audience on the design of communication.
1. Do you have any suggestions for how to move away from an “exposure” strategy to support research use, where the RPP simply “exposes” its intended audience to its message, findings, or content, toward an “engagement” strategy, where the RPP actively builds the audience’s interest, motivation, and enthusiasm to engage with its artifacts?
In Chelsea’s words: “The more engagement we have from the beginning of a project, the more effective we are at communicating the results and why they matter.” To her, the best way to promote and sustain evidence use is through a combination of taking insights from the policy and practice world (through ongoing collaborative work) and then also communicating about the work to the broader public (for example, through a relationship with a great reporter). Similarly, Jessica shares how her partnership sees an effective policy brief as the “end product of all the engagement we’ve had throughout the research process.”
2. Do you partner with your intended audience / users on the design of the communication? If yes, how so?
Jessica addresses this final question directly by outlining how, in her partnership, a research brief is in fact developed through continued dialogue with the practice partners whose problems are addressed in the research (in fact, you can read more about it in this piece she wrote for an earlier issue of NNERPP Extra!). Megan adds that school district partners, often the main intended audience for her partnership’s research, can also provide valuable feedback for developing artifact templates, sharing how one-on-one meetings with such a district partner helped her design and format more effective one-page research overviews. Both Megan and Chelsea also describe partnership processes whereby drafts of reports, briefs, and presentations are frequently shared with and reviewed by practice-side partners and Steering Committee members ahead of publication to make them stronger.
An even deeper partnership as envisioned in this communication science principle, where a research product or tool is truly co-designed by researchers and intended users, could definitely be a goal for the future, Chelsea adds. Additionally, “hearing people wrestle with findings from different vantage points,” perhaps through convenings with researchers, policymakers, and practitioners about evidence, might also be informative for creating better communication.
As our conversation with the five communication leads demonstrates, the principles of communication science are quite relevant to RPPs’ efforts to get their evidence used. In many ways, RPPs are uniquely positioned to implement these principles, given the inherently close partnership with many of the end users of the research. In other respects, it can be quite challenging to put the principles into practice, not least because of time and capacity constraints, which suggests a need to directly fund partnerships’ communication and engagement efforts. These challenges will likely be greater still for smaller partnerships without dedicated communications leads. Nevertheless, our intention with the conversation shared here is to further the knowledge base around supporting the use of research evidence and, in particular, to offer guidance on how to apply the theoretical underpinnings of communication science to RPP practice.
Megan Dillingham is Communications & Development Manager at the Houston Education Research Consortium; Chelsea Farley is Communications Director at the Research Alliance for NYC Schools; Lila Goldstein is Research Data Analyst Lead at the Northwestern Evanston Education Research Alliance; Jessica Holter is Research Manager at the Tennessee Education Research Alliance; and Sara Slaughter is the Associate Director of Communications and Operations at ERA-New Orleans.
Suggested citation: Dillingham, M., Farley, C., Goldstein, L., Holter, J., & Slaughter, S. (2020). Improving the Use of Research Evidence through Communication Science Principles: Insights from RPP Communication Leaders. NNERPP Extra, 2(3), 6-11. https://doi.org/10.25613/J0HM-JA06