IMPROVING USE OF RESEARCH EVIDENCE: INSIGHTS FROM COMMUNICATION SCIENCE
Itzhak Yanovitzky and Cindy Blitz | Rutgers University
Volume 2 Issue 2 (2020), pp. 10-13
Evidence-informed policy and practice is often touted as a gold standard in both scientific and popular discourse. For many people, the idea that decisions made by policymakers and practitioners should be guided by the best available science, whether to improve STEM education in schools or fight a global pandemic, is immediately intuitive. It is therefore only natural for producers and funders of scientific research, as well as for research intermediaries such as NNERPP, to explore diverse avenues for connecting policymakers and practitioners with the most rigorous and relevant research evidence that is applicable to the problems they confront. Often, a primary focus of such efforts is the efficient translation and dissemination of useful research evidence. However, with the recognition that users themselves play an active role in acquiring and interpreting research evidence, there is growing interest in implementing effective user engagement strategies.
Communication science has much to contribute in this regard due to its audience-centered approach. Unlike an information-centered approach which focuses on improving the transmission of information between producers and users, an audience-centered approach is focused on enabling and/or motivating users to integrate research-based insights into decisionmaking processes.
In this first part of a two-part series examining how principles of communication science can help improve the use of research evidence, we briefly introduce readers to key principles of audience engagement from this perspective. Part 2, in the next edition, will offer the perspective of communication leaders in research-practice partnerships (RPPs) on how to implement these principles in an RPP setting.
Think “Use,” Not “Evidence”
Too often, programs and interventions that aim to improve use of research evidence in policy and practice measure their success by tracking the scope and nature of the research evidence used in decisionmaking processes. However, this is not the same as tracking use of research evidence. In fact, this common practice has a number of undesirable consequences.
First, it promotes an artificial dichotomization of use vs. non-use, whereas use in reality is far more diverse and complex, and therefore appropriately defined as a continuum ranging from little or no engagement with research evidence to higher levels of engagement (e.g., frequent, deliberate, systematic, and critical).
Second, it imposes researchers’ own normative conception of what counts as use, whereas use in practice is determined by the goals, needs, capabilities, and circumstances of users.
Third, it discounts the valid and important contributions of other forms of evidence to sound decisionmaking processes, most notably, practice- or experience-based evidence produced by users themselves.
Lastly, it does not fully track the cognitive and social processes underlying use of research evidence including seeking, acquiring, filtering, interpreting, sharing, and deliberating the implications of research evidence.
The bottom line is that defining and inferring use based on a priori assumptions or expectations is rarely useful. It is better and more informative to map out users’ evidence use routines – who they typically seek or receive research evidence from, how they evaluate and interpret research evidence, how and for what purpose they use research evidence, etc. – to be able to adequately define and assess use relative to the unique context and circumstances of users.
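To make this concrete, here is a minimal sketch, in Python, of what a mapped evidence-use routine might look like as a data record. The field names and engagement levels are illustrative assumptions, not a validated instrument; the point is that use is recorded as observed, on a continuum, rather than inferred a priori.

```python
from dataclasses import dataclass
from enum import IntEnum

# Illustrative levels only: the article treats engagement as a continuum,
# and any operationalization should come from audience data, not assumptions.
class Engagement(IntEnum):
    NONE = 0        # little or no engagement with research evidence
    OCCASIONAL = 1  # sporadic, opportunistic engagement
    FREQUENT = 2    # regular but largely uncritical engagement
    SYSTEMATIC = 3  # deliberate, routine engagement
    CRITICAL = 4    # frequent, deliberate, systematic, and critical engagement

@dataclass
class EvidenceUseRoutine:
    """One user's observed routine: who they get evidence from,
    how they evaluate it, and what they actually use it for."""
    user_id: str
    sources: list[str]              # who they seek or receive evidence from
    evaluation_criteria: list[str]  # how they evaluate and interpret evidence
    purposes: list[str]             # how and for what purpose they use it
    engagement: Engagement          # where they fall on the use continuum
```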
Identify the Right Problem
You can’t generate an effective solution to a problem if you misdiagnose it in the first place. Like any other behavior, use of research evidence is enabled by the combination of users’ capacity, motivation, and opportunity to use research evidence.
Capacity has to do with the degree to which intended users have the necessary skills, competencies, and/or tools for seeking, acquiring, interpreting, and making an informed inference from research evidence. It is important to recognize that users may vary in their capacity to use research evidence as a function of their training and preparation.
Motivation to use research evidence may be internal and/or external. Users are internally motivated to use research evidence if they perceive it to be valuable given their interests and goals compared to alternatives (i.e., relative benefits vs. costs of using research evidence) and/or if they believe that this practice is normative (i.e., that most other members of the group, particularly those who are important to them, use research themselves and expect them to do the same). External incentives or disincentives for using research evidence (for example, economic incentives or formal mandates) can also motivate use of research evidence, although they are generally a less effective and more temporary source of motivation than internal motivation.
Opportunity refers to any objective barriers to or facilitators of using research evidence. This includes ease of access to sources of research evidence (e.g., scientific journals or experts), availability of technical assistance and other resources to support use of research evidence, time constraints, etc. RPPs, for example, can be an effective mechanism for improving use of research evidence because they create structured opportunities for researchers and practitioners to establish research-based collaborations to identify and address problems of practice.
The capacity-motivation-opportunity framework is an effective tool for diagnosing the real problem you need to address. Many interventions that target improvements in use of research evidence in policy and practice tend to assume that the problem is one of capacity or opportunity. But if the problem is essentially one of users lacking motivation to use research evidence, investing in building capacity or expanding opportunities to acquire and use research evidence may not solve the problem (if you build it, they may not come). So it is crucial to get the problem right before moving on to consider possible solutions.
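As a rough illustration, the diagnostic logic of the framework can be sketched as a simple rule: score each dimension and flag the binding constraint. The scores and threshold below are hypothetical stand-ins for real audience data; the framework itself does not prescribe a measurement scale.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical 0-1 scores derived from audience research; the framework
    # does not prescribe how each dimension should be measured.
    capacity: float     # skills, competencies, and tools for using evidence
    motivation: float   # internal and/or external reasons to use evidence
    opportunity: float  # access, time, and support structures for use

def diagnose(profile: UserProfile, threshold: float = 0.5) -> list[str]:
    """Flag the dimension(s) that constrain use of research evidence.

    The article's key caution: if motivation is the deficit, building
    capacity or expanding opportunity alone will not solve the problem.
    """
    deficits = [
        name for name in ("capacity", "motivation", "opportunity")
        if getattr(profile, name) < threshold
    ]
    return deficits or ["none: use is already enabled"]
```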
Know Your Audience
The success of any strategy to improve use of research evidence crucially depends on the response from target audiences. For this reason, communication science places the audience at the center of any strategy for promoting a specific behavior or practice. This audience-orientation is formalized through audience analysis (sometimes referred to as audience insights). The goal of audience analysis is to develop a better understanding of target audiences’ needs, goals, interests, predispositions, and experiences as they relate to the behavior or practice you are promoting.
Approaching the problem from the perspective of audiences can inform your overall strategy in two important ways.
The first is targeting (or audience segmentation). We already know that a single fix will not solve the problem for everyone. Educators with relevant research training and experience require different types of support than educators who are new to research use. Targeting improves the efficiency and efficacy of behavioral interventions by identifying relatively homogeneous sub-audiences who are likely to benefit from the same intervention strategy. But this has to be done right. Segmenting target audiences based on demographic characteristics, for example, is customary but not very helpful from an intervention perspective. The fact that a user is male or female should make no difference regarding their use of research evidence, unless gender is a proxy for something else like differential access or training. Audience segmentation is more useful when it is based on dimensions that are directly relevant to the enactment of the behavior. For example, segmenting audiences based on differences in capacity, motivation, and/or opportunity to use research evidence is particularly useful because it gets you to consider how different sub-groups experience the problem and what they require to change.
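Continuing the hypothetical sketch above, segmentation on these behavioral dimensions might simply group users who share the same deficit pattern, so that each segment can receive one intervention strategy. This reuses the illustrative UserProfile and diagnose() from the previous sketch.

```python
from collections import defaultdict

def segment(profiles: dict[str, UserProfile]) -> dict[tuple, list[str]]:
    """Group users by their capacity-motivation-opportunity deficit pattern,
    reusing the illustrative UserProfile and diagnose() sketched earlier."""
    segments: dict[tuple, list[str]] = defaultdict(list)
    for user_id, profile in profiles.items():
        segments[tuple(diagnose(profile))].append(user_id)
    return dict(segments)

# Example: two users who need different things land in different segments.
users = {
    "teacher_a": UserProfile(capacity=0.2, motivation=0.8, opportunity=0.7),
    "teacher_b": UserProfile(capacity=0.9, motivation=0.3, opportunity=0.7),
}
print(segment(users))
# {('capacity',): ['teacher_a'], ('motivation',): ['teacher_b']}
```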
The second objective of audience analysis is to inform the tailoring (or customization) of the intervention to each target audience segment. For example, it may be that a large segment of your target audience simply lacks the motivation to use research evidence, but for different reasons. Perhaps there is one group of users that lacks the self-efficacy to use research evidence; they want to use research but don’t think they have the necessary skills to do this right or well. The other group of users has the self-efficacy but fails to see what’s in it for them; they can do it but are concerned that doing so means an additional burden on their time or work commitments. Each one of these groups will need a slightly different motivational intervention: the first group needs something to build self-efficacy (e.g., a training or a tool) whereas the second group needs to be persuaded or offered an incentive.
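Following the article’s own example, tailoring can then be thought of as a simple mapping from motivational sub-segments to matched interventions. The segment labels and intervention descriptions below are hypothetical, paraphrasing the two groups just described.

```python
# Hypothetical labels for the two low-motivation groups described above.
INTERVENTIONS = {
    "low_self_efficacy": "build skills via training or a supportive tool",
    "low_perceived_benefit": "persuade, or offer an incentive that offsets the burden",
}

def tailor(segment_label: str) -> str:
    """Match a motivational sub-segment to its intervention; when the
    segment is unrecognized, the honest default is more audience analysis."""
    return INTERVENTIONS.get(segment_label, "collect more audience data first")
```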
Keep in mind that both targeting and tailoring require you to collect information from your target audience regarding capacity, motivation, and barriers and facilitators to use of research evidence (based on how use is defined). It is always a good idea to ground data collection in behavior change theories such as the theory of planned behavior, social cognitive theory, or theories/instruments that are directly about research evidence use (see Gitomer, Crouse, and Allen, for example).
Match Communication Strategy to both the Problem and the Audience
A successful behavior change strategy is typically judged based on its ability to achieve its goals. You may have determined that the real problem is one of capacity, or motivation, or opportunity, and you even have a good sense of what you need to do to address the problem. You still need buy-in from your audience to make this happen, which is where communication enters. Now, we all have a natural tendency to believe that we are good communicators. Regardless of whether this is true or not (call it optimism bias), the issue is that we end up crafting a message and then expecting it to be well received by our target audience. It should not surprise you to learn that this strategy almost never works. The reason is that it merely exposes your audience to the message instead of seeking to engage them.
Exposure is at the lower end of the audience engagement continuum. It assumes that your audience is a passive consumer of the information you provide and is very likely to find your message persuasive (again, because you are a good communicator). This explains why providing information that is intended to educate potential users about the value of using research evidence, which is the default approach, is also the least effective – you are asking people to do something that is important to you, but not something that is important to them or that they believe they can do.
Engagement, in contrast, is about building your audience’s interest, motivation, perhaps even enthusiasm to use research evidence. In other words, it’s about making them care. One communication strategy that can be useful in this respect is to connect use of research evidence to something users already personally value and/or are familiar with – improve your own performance, better serve those who benefit from your knowledge/expertise/advice, etc. – this is an example of how knowing your audience can be helpful in tailoring the message to your audience. Another potentially effective strategy is to suggest or provide cues that use of research evidence is desirable (valued by peers) and normative (prevalent, expected). You can even use a bit of modeling – show an example of excellent use of research evidence and how it is rewarding (or rewarded). A more evolved form of audience engagement involves coaching – don’t just tell them what you want them to do and persuade them to care; also show them how they can do it in a way that is easy, rewarding, and likely to result in the desired outcomes. This is the same as your doctor telling you to lose weight but also offering some guidance or a plan that will take you there. Without the plan, all you offer is a prescription.
Still, the best strategy by far to engage your target audience with the change you are seeking is to partner with them on the design of communication. After all, they are the real experts on the problem since they are the ones experiencing it. This means that they have a wealth of insights to contribute regarding what should be communicated, how, when, and where – which is the essence of a basic communication plan.
One Last Cautionary Note
All interventions run the risk of unintended effects (positive or negative), regardless of how good your plan is. One important lesson from communication science is that you should always strive to pretest your communication strategy with an audience group before you go live so you can catch and correct any possible issues. The second important lesson is that it is your responsibility to anticipate any unintended effects not only on your intended audience but also on unintended audience groups. Keep in mind that research, objective as it may appear to be, may introduce some bias into the way users think and act. For example, if a valid research procedure such as a survey systematically underrepresents the thoughts and experiences of a minority group, the evidence it produces is necessarily biased and users ought to be conscious of this bias. So communicators also have a responsibility to communicate what inferences or conclusions can – and cannot – be drawn from research evidence to decrease the likelihood of misuse and disinformation. Research does not speak for itself; we must speak for research.
Itzhak Yanovitzky is Professor of Communication and Chair of the SC&I Health and Wellness Faculty Cluster at Rutgers University and is an expert in the areas of behavior change communication, public policymaking, translational research, and program evaluation. Cindy Blitz is the Executive Director of the Center for Effective School Practices (CESP) and a Research Professor at the Rutgers University Graduate School of Education (RU-GSE), where she works to facilitate the translation of scientific knowledge into educational practice in multiple domains of K–12 education.
Suggested citation: Yanovitzky, I., & Blitz, C. (2020). Improving Use of Research Evidence: Insights from Communication Science. NNERPP Extra, 2(2), 10-13. https://doi.org/10.25613/CGME-S465
NNERPP | EXTRA is a quarterly magazine produced by the National Network of Education Research-Practice Partnerships | nnerpp.rice.edu