In the world of research, evaluations are paramount. But what exactly is an evaluation? At REC, we think of evaluations as systematic methods for collecting, analyzing, and using information to understand the effectiveness and efficiency of projects, policies, and programs.
In an ideal research world, evaluations are carried out perfectly and the results are useful and valuable. Unfortunately, when it comes to performing evaluations in the real world of research, mistakes are made, details go awry, and challenges can seem insurmountable. So, in this two-part series, we will discuss six key challenges in evaluation and the solutions to those challenges (in Part Two), which we hope will guide you toward more effective evaluations.
Failing to plan for just about anything usually results in poor outcomes, and the same is true when you’re conducting an evaluation. Poor planning can lead to not having enough time to conduct your evaluation, a lack of direction in the outcomes you hope to achieve, and insufficient resources (i.e., funding, personnel, space, etc.) for your evaluation. Poor planning can also result in implementation fidelity issues (i.e., how well a program or intervention is being adhered to), which undermine the integrity of the evaluation and lead to unintended consequences.
If an evaluation isn’t seen as a priority, there can be a lack of buy-in from staff and stakeholders in the evaluation process, which can result in limited resources, uncooperative staff, and an absence of understanding of why the evaluation is even needed or valuable.
If you don’t use the right data collection methods, don’t understand how to properly identify data, lack a thorough understanding of outputs and outcomes, and/or don’t choose the right evaluator for your project, then guess what? You won’t have an effective or positive evaluation experience.
Deciding on the right questions to ask to get the results you’re looking for is a key element of the evaluation process. Asking the wrong questions can derail a project. So, just what are ‘bad’ questions? Questions that are unclear, use too much jargon, don’t take the audience into account, are biased in any way, or don’t give participants a clear and understandable way to respond will all upend the evaluation process.
If you ask bad questions, you’ll get bad responses – it’s as simple as that. In addition, if you don’t properly and cleanly input the data you do get, or if the data are missing, messy, or unorganized, then the results will also be messy and unorganized and, ultimately, not useful.
When it comes to collecting data, quality beats quantity in most instances. More data does not necessarily mean better data. In fact, the opposite is often true. If you have mountains of data, then you have mountains of data to manage and process, and that takes time and resources that many programs just don’t have. Additionally, collecting a surplus of data can lead to less consistent information and weaker support for the goal of the evaluation, which may defeat the whole purpose.
We’ve presented six key challenges to the evaluation process here. For solutions to these challenges, read Part Two. If you have any specific follow-up questions to this blog post, or any other research and evaluation needs, please contact Dr. Annette Shtivelband.