Section 3: Evaluation Design

Step 2: Identify the population or subpopulations, intervention goals and objectives, and types of evaluation for each evaluation question

Now that you and your evaluation partners have identified and prioritized the evaluation questions, it is time to identify the population or subpopulations, intervention goals and objectives, and types of evaluation corresponding to each question.

A simple table can be used to link all of these elements; see the accompanying resource table for an example.

Your partnership can draw on the initial populations or subpopulations and the types of evaluation identified in Section 1 (formative, process, impact, and outcome evaluation), as well as the intervention goals and objectives from Section 2.
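
For illustration only, the sketch below shows one way such a linking table might be organized, expressed as a small Python structure. The example questions, populations, and objectives are hypothetical placeholders, not entries from the resource table.

```python
# Hypothetical sketch of a table linking each evaluation question to its
# population, intervention goal/objective, and type(s) of evaluation.
# All entries below are illustrative placeholders, not content from the guide.

evaluation_plan = [
    {
        "question": "Did driver yielding to pedestrians at treated crosswalks increase?",
        "population": "Drivers at treated intersections",
        "goal_or_objective": "Increase driver yielding by the end of year 1",
        "evaluation_types": ["impact"],
    },
    {
        "question": "Was the educational campaign delivered as planned?",
        "population": "Older adult pedestrians in the target neighborhoods",
        "goal_or_objective": "Deliver safety messaging through the planned channels",
        "evaluation_types": ["process"],
    },
]

# Print a simple summary so each question's linked elements can be reviewed together.
for row in evaluation_plan:
    print(row["question"])
    print(f"  Population:       {row['population']}")
    print(f"  Goal/objective:   {row['goal_or_objective']}")
    print(f"  Evaluation types: {', '.join(row['evaluation_types'])}")
```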

You and your evaluation partners will need to identify whether each evaluation question is aimed at assessment (pre-intervention; formative evaluation), implementation (intervention delivery; process evaluation), outcomes (during or immediately following the intervention; impact or outcome evaluation), or some combination of these. Again, impact and outcome evaluation correspond to the outcomes identified in your intervention objectives and goals, respectively.

Crafting evaluation questions that bring together formative and impact or outcome types of evaluation sets the stage for evaluation designs with both baseline and follow-up measures. Combining these approaches will increase confidence that changes in the measured outcomes are, to some degree, due to the intervention strategies being evaluated.
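
As a rough illustration of why baseline and follow-up measures matter, the sketch below compares a hypothetical baseline and follow-up proportion (for example, the share of drivers yielding at a treated crosswalk) using a simple two-proportion z-test. The counts are invented for illustration and are not data from this guide.

```python
# Hypothetical baseline/follow-up comparison for a single outcome measure.
# Counts are invented for illustration; they are not data from the guide.
from math import sqrt

baseline_yield, baseline_obs = 62, 200    # drivers yielding / drivers observed, pre-intervention
followup_yield, followup_obs = 104, 210   # drivers yielding / drivers observed, post-intervention

p1 = baseline_yield / baseline_obs
p2 = followup_yield / followup_obs

# Two-proportion z-test (normal approximation) for the change from baseline.
pooled = (baseline_yield + followup_yield) / (baseline_obs + followup_obs)
se = sqrt(pooled * (1 - pooled) * (1 / baseline_obs + 1 / followup_obs))
z = (p2 - p1) / se

print(f"Baseline yielding:  {p1:.1%}")
print(f"Follow-up yielding: {p2:.1%}")
print(f"Change: {p2 - p1:+.1%}  (z = {z:.2f})")
```

A statistically detectable change on its own does not establish attribution; the distinction below between attribution and contribution matters for how such a result is interpreted.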

Attribution: the intervention caused the observed outcomes.

  • Are the outcomes of interest attributable to the intervention?
  • Are the outcomes of interest changing as a result of the intervention?
  • Did the intervention cause the outcomes of interest?

Contribution: the intervention helped to cause the observed outcomes.

  • Is the intervention contributing to the outcomes of interest?
  • Are the outcomes of interest changing?
  • Is there evidence that the intervention helped achieve (or was part of what caused) the outcomes of interest?

When you evaluate pedestrian safety interventions, such as PSAPs or educational and promotional campaigns, it can be challenging to determine whether the intervention components caused the change in the outcomes being measured. Identifying a true “cause” in a pedestrian safety evaluation is difficult for a number of reasons, including multiple complementary intervention strategies, longer intervention durations, and fluctuating environmental and social factors that can affect pedestrian safety. While you and your evaluation partners may design an evaluation plan intended to show that changes in outcomes are fully attributable to your pedestrian safety intervention, it is more likely that the intervention components contributed to the changes in outcomes. The comparison above highlights the differences between attribution and contribution in evaluation.1

Consider the example provided in the accompanying resource table.

Alignment of process and impact/outcome evaluation questions creates the opportunity to examine exposure to the intervention and/or the dose of the intervention delivered hand in hand with the changes observed, increasing the ability to make causal inferences about the intervention’s influence on the outcomes. In other words, the process evaluation data can provide useful descriptive information about how the intervention succeeded or failed to affect the outcomes observed.
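
As a hypothetical sketch of that alignment, the snippet below groups an outcome change by a process-evaluation exposure measure (for example, self-reported exposure to campaign messages) to look for a dose-response pattern. The field names and values are assumptions made for illustration only.

```python
# Hypothetical dose-response check: does the outcome change track exposure?
# Records, field names, and values are illustrative assumptions only.
from collections import defaultdict

records = [
    # exposure level from process evaluation, outcome change from impact evaluation
    {"exposure": "none", "change_in_safe_crossings": 0.01},
    {"exposure": "low",  "change_in_safe_crossings": 0.04},
    {"exposure": "high", "change_in_safe_crossings": 0.12},
    {"exposure": "high", "change_in_safe_crossings": 0.09},
    {"exposure": "low",  "change_in_safe_crossings": 0.03},
]

by_exposure = defaultdict(list)
for r in records:
    by_exposure[r["exposure"]].append(r["change_in_safe_crossings"])

# Larger average changes at higher exposure levels are consistent with
# (though not proof of) the intervention contributing to the outcomes.
for level in ("none", "low", "high"):
    values = by_exposure[level]
    print(f"{level:>4}: mean change = {sum(values) / len(values):+.2f}  (n = {len(values)})")
```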

Footnotes

  1. A. Almquist (2011). CDC Coffee Break: Attribution versus Contribution. National Center for Chronic Disease Prevention and Health Promotion, Division for Heart Disease and Stroke Prevention, Evaluation and Program Effectiveness Team.
