Resource Center: Glossary

A


Achievable Objective:

An achievable objective is feasible: it considers the availability of resources and the scope of the intervention, and it is attainable within a bounded timeframe.

Activities:

Found in a logic model, activities describe specific events that are planned to achieve the goals and objectives outlined in the intervention. Activities should be tightly linked to the resources that support the implementation of the proposed strategy, as well as to the intended outcome the activity will achieve.

Analysis Plan:

A roadmap for how you plan to organize and assess the data in order to make sense of the information and draw final conclusions that answer the evaluation question.

Asset mapping:

Techniques for identifying community resources, such as people, places, services, or other goods (Community Tool Box).

Assumption:

Hypotheses about factors or risks which could affect the progress or success of an intervention, or hypothesized conditions that bear on the validity of the evaluation; assumptions are made explicit in theory-based evaluation. (Development Assistance Committee. (2002). Glossary of key terms in evaluation and results-based management. Paris, France: OECD.)

Attribution:

Attribution involves drawing causal links and explanatory conclusions between observed changes and specific interventions. (Iverson, A 2003, Attribution and aid evaluation in international development: a literature review, prepared for CIDA Evaluation Unit, International Development Research Centre, May.)

Audience:

Refers to the person or organization intended to receive information (e.g., the intervention, evaluation report, or executive summary) from the evaluation.

B


Bias:

The extent to which a measurement, sampling, or analytic method systematically underestimates or overestimates the true value of an attribute. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

C

Capacity:

The ability of community members to make a difference over time and across different issues. (http://ctb.ku.edu/en/table-of-contents/overview/model-for-community-change-and-improvement/building-capacity/main)

Capacity-Building:

The intentional, coordinated and mission-driven efforts aimed at strengthening the management and governance of public health agencies to improve their performance and impact (Brownson, EBPH).

Case Studies:

An in-depth study of a documented event aimed at narrowing down a broad topic and offering a real-world application of a specific concept or theme.

Categorical data:

Measures that place data into a limited number of groups or categories. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

Causal inference (causality):

Judgment about the relationship of causes to the effects they produce; a cause is termed “necessary” when it always precedes an effect even if it is not the sole cause or the effect is not the sole result; a cause is termed “sufficient” when it inevitably initiates or produces an effect; any given causal factor may be necessary, sufficient, neither, or both (Brownson, EBPH).

Coalition:

Group of individuals and/or organizations that join together for a common purpose (Brownson, EBPH).

Confounding:

An error that distorts the estimated effect of an exposure on an outcome, caused by the presence of an extraneous factor associated with both the exposure and the outcome (Brownson, EBPH).

Contextual factors:

Indicators associated with the surroundings within which a health issue occurs, including assessment of the social, cultural, economic, political, and physical environment (Brownson, EBPH).

Continuous data:

Quantitative data with an infinite number of possible values. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

Continuous quality improvement:

An approach to quality management that emphasizes the importance of maintenance and sustainability of the quality improvement intervention.

Contribution (analysis):

An approach for assessing causal questions and inferring causality in real-life program evaluations. It offers a step-by-step approach designed to help arrive at conclusions about the contribution a program has made (or is currently making) to particular outcomes. This approach is designed to reduce uncertainty about the contribution the intervention is making to the observed results through an increased understanding of why the observed results have occurred (or not) and the roles played by the intervention and other internal and external factors.

Countermeasure:

A measure or action taken against an unwanted action or situation.

Cross-tabulation:

A type of table in a matrix format that displays the (multivariate) frequency distribution of the variables; the tables provide a basic picture of the interrelation between two variables and can help find interactions between them. (Gokhale, D. V.; Kullback, Solomon (1978). The Information in Contingency Tables. Marcel Dekker.)
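As a minimal sketch (using invented survey records, not data from the cited source), a cross-tabulation can be built in plain Python by counting joint occurrences of two categorical variables:

```python
from collections import Counter

# Hypothetical survey records (invented for illustration):
# each tuple pairs an age group with a reported walking frequency.
records = [
    ("youth", "daily"), ("youth", "weekly"), ("adult", "daily"),
    ("adult", "daily"), ("adult", "rarely"), ("senior", "weekly"),
]

# Each cross-tabulation cell is the joint frequency of one (row, column) pair.
table = Counter(records)

rows = sorted({r for r, _ in records})
cols = sorted({c for _, c in records})

# Print the matrix; row and column totals could be added the same way.
print(" " * 8 + "".join(f"{c:>8}" for c in cols))
for r in rows:
    print(f"{r:>8}" + "".join(f"{table[(r, c)]:>8}" for c in cols))
```

Reading across a row (or down a column) of the resulting matrix gives a basic picture of how the two variables interrelate.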

D

Data Collection Matrix:

A two-dimensional table that serves as a planning tool to map out anything related to how the data will be captured and analyzed. A basic data collection matrix contains seven elements: evaluation question, indicators, data source, data collection method, responsible party, timing, and analysis plan.

Data Collection Method:

The method by which the data will be captured, including, but not limited to, surveys, focus groups, interviews, and observations.

Data Source:

Identifies from whom or from what agency the data for the evaluation will come.

Data Visualization:

Data visualization is a technique that is used to enhance the quality of evaluation reports and communicates data/information using engaging images that resonate with the audience.

Deductive reasoning:

A reasoning process in which multiple, general premises are assumed to be true and are then combined to generate a specific conclusion.

Delivery:

Refers to the method by which results of the evaluation will be disseminated to an external audience.

Delphi method:

Originally developed at the RAND Corporation, an iterative circulation of questions and responses that are progressively refined in light of responses to each round of questions by a group of experts (preferably, participants’ identities should not be revealed to each other); the aim is to reduce the number of viable options or solutions, perhaps to arrive at a consensus judgment on an issue or problem, or a set of issues or problems, without allowing anyone to dominate the process (Brownson, EBPH).

Dependent variable:

The behavior or outcome you are trying to change as a result of the presence of the independent variable(s) (e.g., a program or intervention). If you are evaluating a number of different methods or conditions, each method is an independent variable. These variables are called "dependent" because changes in them depend on the action of the independent variable (or something else).

Dissemination:

Process of communicating either the procedures or the lessons learned from a study or program evaluation to relevant audiences in a timely, unbiased, and consistent fashion (Brownson, EBPH).

Dissemination Plan:

A roadmap of how you plan to share your evaluation findings internally and externally.

Distal outcome:

Changes associated with an intervention’s goals, often measured in terms of morbidity, mortality, quality of life, or related changes (Brownson, EBPH).

Distal Outcomes:

Outcomes that are expected to occur long after implementing the intervention. Related to Long-term outcomes.

Document Analysis:

A form of research that systematically reviews a document, policy brief, public record, etc., and then interprets the data to measure the impact of the document.

Downstream intervention strategies:

Interventions and strategies that focus on providing equitable access to care and services to mitigate the negative impacts of disadvantage on health.

E

Ecological framework:

Model relating individual, interpersonal, organizational, community (including social and economic factors), and health policy factors to individual behavior change and their direct effect on health (Brownson, EBPH).

Education:

Programs aimed at achieving changes in motorist and pedestrian behavior or attitude. Education efforts can also improve the ability of drivers and pedestrians to use and respond to the roadway environment safely and correctly.

Emergency Services:

Emergency response services and programs designed to manage pedestrian injuries after a crash has occurred.

Encouragement:

Programs and efforts aimed to promote walking and engage the public in pedestrian safety programs.

Enforcement:

Law enforcement agency efforts to promote compliance with laws, ordinances, and regulations related to pedestrian safety, and to teach motorists and pedestrians about safe driving and crossing practices.

Engineering:

Modifications to the roadway environment that improve the existing transportation infrastructure, together with factoring safety into the design of new transportation infrastructure. Traffic engineers use road safety audits, street redesign, and engineering countermeasures to improve pedestrian safety.

Environmental Factors:

External elements in the environment that may have an influence on risk and protective factors for your target population (i.e., the structure of a road, design of a crosswalk, etc.).

Evaluation:

Assessment of the effectiveness of a pedestrian program using procedures that are useful, feasible, ethical, and accurate.

Evaluation design:

The types and sequencing of data collection methods and intervention approaches used to evaluate a policy or program, including experimental, quasi-experimental, and non-experimental studies (Brownson, EBPH).

Evaluation Plan:

A plan and process that attempts to systematically and objectively determine the relevance, effectiveness, and impact of activities in the light of their objectives (Brownson, EBPH).

Evaluation Question:

A high-level question aimed to understand the value, impact, and significance of the intervention. A well-written evaluation question will serve as the guiding framework for your evaluation design and identify what you want to understand about the intervention.

Executive Summary:

An executive summary is an abbreviated version of a traditional report, meaning it summarizes the same sections found in the comprehensive report. Often, executive summaries are written after the full report is completed, with excerpts copied from the larger report and restructured to create the condensed document.

Experimental study design:

Evaluation in which the investigators have full control over the allocation and/or timing of intervention delivery and evaluation observations; the ability to allocate individuals or groups to intervention or control conditions randomly is a common requirement of an experimental study (Brownson, EBPH).

Exposure:

The state of being exposed to a specific agent or concept.

External environment:

Factors over which you have little or no control that may affect your program's outcomes. These external factors – such as political and economic circumstances or social influences – can help or hinder an intervention's success. In a logic model, elements of the external environment may also be referred to as "external factors" or "surrounding circumstances." (Innovation Network)

External validity:

Evaluation is externally valid, or generalizable, if it can produce unbiased inferences regarding a target population (beyond the subjects in the study); this aspect of validity is only meaningful with regard to a specified external target population (Brownson, EBPH).

F

Fact sheet:

Written document that states the issue at hand, outlines recommended action steps, and provides supplemental information to support findings and recommendations. Fact sheets are usually one to two pages and include a list of references and the contact information of the author.

Focus Group:

An interactive discussion between a homogenous sample of six to eight individuals. Focus groups are usually facilitated by a trained moderator who focuses on a specific set of topics to capture social trends and group perspectives.

Follow-up Activities:

A section on the dissemination plan that allows you to take notes and track the progress of each report being disseminated.

Formative evaluation:

Type of evaluation conducted in the early stages of an intervention to determine whether an element of a program or policy (e.g., materials, messages) is feasible, appropriate, and meaningful for the target population (Brownson, EBPH).

Frequency:

The count of cases corresponding to the attibutes of an observed variable. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)
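For example (a minimal sketch with invented observations), the frequency of each attribute of a categorical variable can be counted directly with Python's standard library:

```python
from collections import Counter

# Hypothetical observations of crossing behavior at one intersection
observations = ["crosswalk", "jaywalk", "crosswalk", "crosswalk", "jaywalk"]

# Count of cases corresponding to each attribute of the observed variable
frequencies = Counter(observations)
print(frequencies)  # Counter({'crosswalk': 3, 'jaywalk': 2})
```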

G

Goal:

Long-term outcomes partners hope to achieve.

Governance:

The structures, processes, rules and traditions through which decision-making power that determines actions is exercised, and so accountabilities are manifested and actualized.

H

Health disparities:

Differences in the incidence and prevalence of health conditions and health status between groups, based on race/ethnicity, socioeconomic status, sexual orientation, gender, disability status, geographic location, or some combination of these.

Health equity:

The opportunity for everyone to attain his or her full health potential; no one is disadvantaged from achieving this potential because of his or her social position or other socially determined circumstance (Whitehead M. et al).

Health inequities:

Systematic and unjust distribution of social, economic, and environmental conditions needed for health (Whitehead M. et al).

I

Impact evaluation:

Assessment of whether intermediate objectives of an intervention have been achieved. Indicators may include changes in knowledge, attitudes, behavior, or risk-factor prevalence (Brownson, EBPH).

Implementation fidelity:

The degree of fit between the developer-defined elements of an intervention and its actual implementation in a given organization or community setting. (Backer, T.E. (2001). Finding the balance: Program fidelity and adaptation in substance abuse prevention. Rockville, MD: SAMHSA.)

Independent variable:

The variable (e.g., program, methods, conditions) that the evaluator wants to evaluate. They are called variables because they can change. They are independent because their existence does not depend on whether something else occurs: you choose them, and they stay consistent throughout the evaluation period.

Indicator:

An indicator is a measure used to express the behavior of a system or part of a system, including the following characteristics: performance measurement, progress toward goals or objectives, evidence of results achieved, uniform measurement for comparison, and modifiable over time (Flowers J., 2005; Brizius & Campbell, 1991).

Indicators:

Specific qualitative and quantitative data points that are used to operationalize the outcomes or activities of the intervention. It is recommended to have more than one indicator for each outcome or activity being evaluated.

Inductive reasoning:

A logical process in which multiple, specific premises are assumed to be true and are then combined to generate a generalized conclusion.

Inferential statistics:

Statistical analysis using models to confirm relationships among variables of interest or to generalize findings to an overall population (www.cdc.gov/eval/guide/glossary/index.htm).

Inputs:

Resources and processes that support the intervention design, planning, and implementation efforts. Examples of inputs include staff, target audience, money, space, time, partnership meetings, and technology.

Intermediate:

An intermediate outcome is identified shortly after the intervention has ended. Evaluators will be able to observe some type of change in behavior or norms, which may take some time to see. Intermediate outcomes tend to measure change 3 to 6 months after a participant has completed the intervention.

Intermediate outcome:

Changes associated with an intervention’s objectives, often measured in terms of knowledge, attitude, or behavior changes (Brownson, EBPH).

Internal validity:

Degree to which the causal inference drawn from a study is warranted when account is taken of the study methods, the representativeness of the study sample, and the nature of the population from which it is drawn; index and comparison groups are selected and compared in such a manner that the observed differences between them on the dependent variables under study may, apart from sampling error, be attributed only to the hypothesized effect under investigation (Brownson, EBPH).

Issue brief:

Written document that states the issue at hand, outlines recommended action steps, and provides supplemental information to support findings and recommendations. Issue briefs are usually one to two pages and include a list of references and the contact information of the author.

J

K

Key Informant Interviews:

A semi-structured, one-on-one conversation designed to gain insight on a given topic. The interviewer will guide the interviewee through a discussion of their own life experience, perspectives, and opinions to further understand or create new knowledge about a specific subject.

Key Messages:

Major takeaway points from your evaluation findings that should be disseminated to those invested in the outcomes of the intervention.

L

Logic Model:

A framework used to depict how an intervention is supposed to function and the theory by which it will work. Logic models can include process and outcome elements, similar to creating SMART objectives. The process component of the model describes what is needed in the planning phase of the intervention (e.g., resources, program events or strategies, deliverables/products from the activities), whereas the outcome elements in a logic model demonstrate the intended effect or goal with respect to a given time period.

Long-Term:

Long-term outcomes will depict the ultimate goal of the intervention. These outcomes have been shown to be sustainable within the priority population and are often measured 6 months to a year after the intervention has been completed.

Long-term outcome:

Changes associated with an intervention’s goals, often measured in terms of morbidity, mortality, quality of life, or related changes (Brownson, EBPH).

M

Mean:

A measure of central tendency, the arithmetic average; a statistic used primarily with interval-ratio variables following symmetrical distributions. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

Measurable Objective:

A measurable objective requires a quantifiable activity that results in the desired change. It implies that baseline data are required so that results can reflect the positive and/or negative impact of the proposed intervention.

Median:

A measure of central tendency, the value of the case marking the midpoint of an ordered list of values of all cases; a statistic used primarily with ordinal variables and asymmetrically distributed interval-ratio variables. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

Mission:

A statement that captures the enduring focus of your partnership — why your partnership exists and what needs it fulfills in your community (National Institutes of Health, 2002).

Mixed method:

An evaluation approach in which researchers collect, analyze, and integrate both quantitative and qualitative data in a single study to address evaluation questions.

Mode:

A measure of central tendency, the value of a variable that occurs most frequently; a statistic used primarily with nominal variables. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)
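The three measures of central tendency defined above (mean, median, and mode) can be computed with Python's standard library; the values below are invented for illustration:

```python
import statistics

# Hypothetical daily pedestrian counts at one crossing
counts = [12, 15, 15, 18, 40]

print(statistics.mean(counts))    # arithmetic average: 20
print(statistics.median(counts))  # midpoint of the ordered values: 15
print(statistics.mode(counts))    # most frequent value: 15
```

Note how the single large value (40) pulls the mean above the median, which is why the median is preferred for asymmetrically distributed data.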

Morbidity:

Any departure, subjective or objective, from a state of physiological or psychological well-being. In practice, morbidity describes instances of disease, illness, injury, and disability. 

Mortality:

A measure of the occurrence of deaths or fatalities in a defined population. 

N

Needs assessment:

Systematic procedure that makes use of epidemiologic, sociodemographic, and qualitative methods to determine the nature and extent of health problems experienced by a specified population, and their environmental, social, economic, and behavioral determinants (Brownson, EBPH).

New Urbanist:

An approach to development that promotes the creation and restoration of diverse, walkable, compact, vibrant, mixed-use communities composed of the same components as conventional development, but assembled in a more integrated fashion, in the form of complete communities. These contain housing, work places, shops, entertainment, schools, parks, and civic facilities essential to the daily lives of the residents, all within easy walking distance of each other.

Nominal group technique:

Structured, small-group process designed to achieve consensus; individuals respond to questions and prioritize ideas as they are presented (Brownson, EBPH).

Non-parametric statistics:

Mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. (Corder, G. W.; Foreman, D. I. (2014). Nonparametric Statistics: A Step-by-Step Approach. Wiley.)

O

Objective:

Concise, time- and action-specific, measurable statements that describe how a goal will be reached.

Observations:

A research method aimed to systematically observe and record interactions, events, locations, etc. between individuals and their environment in a natural state. Observations allow evaluators to describe and understand people’s behavior in context.

Outcome evaluation:

Long-term measure of effects such as changes in morbidity, mortality, and/or quality of life (Brownson, EBPH).

Outcome Objective:

Measures the intended effect of the program on the target population or at the end of the intervention. With a major focus on the intended audience, outcome objectives will determine the success of the intervention. Outcome objectives can be divided into three periods: Short-, Intermediate-, and Long-term.

Outputs:

Found in a logic model, an output describes the tangible items or experiences an individual will encounter if they participate in the intervention at hand.

P

Parametric statistics:

Statistical procedures that assume data come from a population following a probability distribution based on a fixed set of parameters. (Geisser, S.; Johnson, W.M. (2006). Modes of Parametric Statistical Inference. John Wiley & Sons.)

Participatory approaches:

Collaborative, community-based research method, designed to actively involve community members in research and intervention projects (Brownson, EBPH).

Pedestrian:

Any person on foot, walking, running, jogging, hiking, standing, sitting, lying down, or in a manually or mechanically propelled wheelchair (but not riding in or on a motor vehicle, railway train, streetcar, pedalcycle, animal, animal-drawn vehicle, or other vehicle) on a public road, in the public right of way, or in a parking lot.

Pedestrian Counts:

A method used to collect pedestrian data in specific communities or local areas.

Pedestrian injury:

When a pedestrian sustains bodily harm in an unintentional motor vehicle traffic crash with one or more vehicles or pedalcycles.

Pedestrian safety:

An aspect of walkability that deals with the level of risk to pedestrians when attempting to walk along or across the network of roads in a community.

Pedestrian Safety Action Plans:

A plan developed by community stakeholders intended to improve pedestrian safety in the community.

Pedestrian Safety Education Campaign and Promotions:

Coordinated efforts designed to improve pedestrian safety by informing a defined population about a specific pedestrian safety issue(s) targeting knowledge, attitudes, awareness, beliefs, behaviors, and/or social norms related to pedestrian safety. These efforts can vary in complexity depending upon a variety of factors, such as duration, resources, and message.

Pedestrian safety education campaigns and promotions:

A pedestrian safety intervention aimed to improve awareness, education, and behaviors of drivers and pedestrians in the community.

Pedestrian Safety Intervention:

Activities outlined by local or state Pedestrian Safety Action Plans or Pedestrian Safety Education Campaigns and Promotions aimed at improving pedestrian safety.

Pedestrian volume:

The number of pedestrians that occupy a given space.

Personal factors:

Internal elements that have an influence on risk and protective factors for an individual (e.g., education status, awareness of laws, behaviors etc.)

Population of interest:

Group of people with diverse characteristics who are linked by geographical location or setting, social ties, common perspectives, and/ or joint actions (Brownson, EBPH).

Primary Data:

Refers to data that were collected for your own evaluation, meaning data is collected directly from the source.

Process evaluation:

Analysis of inputs and implementation experiences to track changes as a result of a program or policy. This occurs at the earliest stages of public health intervention and often is helpful in determining midcourse corrections (Brownson, EBPH).

Process Objective:

Measures activities that are necessary to deliver the program effectively and efficiently. Process objectives tend to be short-term outcomes by nature and generally evaluate the operational components of implementing an intervention.

Product:

Refers to the type of information that will be presented in the evaluation report and how it will be presented.

Protective Factors:

Any characteristic of an individual that decreases the likelihood of an adverse event or experience that may threaten the individual's morbidity, mortality, or quality of life.

Proximal outcomes:

Outcomes that are expected to occur soon after implementing the intervention. Related to short-term and intermediate outcomes.

Q

Qualitative data:

Descriptive, non-numerical data that approximates or characterizes – but does not measure – the attributes, characteristics, and properties of a thing or phenomenon. Qualitative data provides contextual information that can convey the "how" and "why" of a phenomenon or issue. Qualitative data describes, whereas quantitative data defines. (Business Dictionary)

Quantitative data:

Numerical data that can be counted (quantified), verified, and statistically analyzed. The process of collecting and analyzing quantitative data is intended to uncover numerical patterns and trends. Quantitative data defines whereas qualitative data describes. (Business Dictionary)

Quasi-experimental study design:

Evaluation in which the investigators lack full control over the allocation and/or timing of intervention delivery and evaluation observations, but conduct the study as if it were an experiment, allocating subjects to groups; the inability to allocate individuals or groups to intervention or control conditions randomly is a common situation that may be best studied as a quasi-experiment (Brownson, EBPH).

R

Range:

A measure of spread which gives the distance between the lowest and the highest values in a distribution; a statistic used primarily with interval-ratio variables. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)

Readiness:

An organization's or program’s ability to successfully implement an evaluation project or framework. Evaluation readiness has multiple components, including leadership support for evaluation, organizational culture in support of learning and improvement, evaluation skills and expertise, and resources.

RE-AIM framework:

Framework for consistent reporting of research results that takes account of Reach to the target population; Effectiveness or Efficacy; Adoption by target settings or institutions; Implementation, or consistency of delivery of the intervention; and Maintenance of intervention effects in individuals and settings over time (Brownson, EBPH).

Release Date:

Refers to the date the evaluation report will be sent out to a specified audience.

Relevant Objective:

A relevant objective relates to the goals and reflects program activities appropriately. The evaluation objective has an overall effect on the desired change.

Responsible Party:

The individual or entity responsible for collecting/ providing the needed data. For example, an external evaluator will be the responsible party that will conduct interviews and collect primary data for the evaluation.

Risk Factors:

Any characteristic of an individual that increases the likelihood of an adverse event or experience that may threaten the individual's morbidity, mortality, or quality of life.

S

Sample:

A selected subset of a larger group or population (the “universe” population).

Sampling Plan:

A sampling plan is a detailed outline of which measurements will be taken at what times, on which material, in what manner, and by whom. Sampling plans should be designed in such a way that the resulting data will contain a representative sample of the parameters of interest and allow for all questions, as stated in the goals, to be answered. The steps include: identify the parameters to be measured, the range of possible values, and the required resolution; design a sampling scheme that details how and when samples will be taken; select sample sizes; design data storage formats; and assign roles and responsibilities.

Secondary Data:

Refers to data that were collected by someone else, rather than the user.

Short-Term Outcome:

Short-term outcomes are those that demonstrate the immediate impact of an intervention on the target audience. For instance, short-term indicators can be related to changes in an individual’s knowledge, attitude, or skill level related to the intervention.

Smart Growth:

An approach to development that encourages a mix of building types and uses, diverse housing and transportation options, development within existing neighborhoods, and community engagement.

SMART Objective:

A mnemonic acronym that explains how to create an objective. According to the mnemonic, objectives should be specific, measurable, achievable, relevant, and time-bound.

Social determinants of health:

Life-enhancing resources, such as food supply, housing, economic and social relationships, transportation, education and health care, whose distribution across populations effectively determines length and quality of life (James S, 2002).

Specific Objective:

A specific objective identifies the setting and activity that will cause the desired change. Additionally, it indicates how the change will be implemented and clearly demonstrates what will be done to facilitate the impact.

Standard deviation:

A measure of the spread of a set of numerical measurements (on an “interval scale”). It indicates how closely individual measurements cluster around the mean (www.cdc.gov/eval/guide/glossary/index.htm).
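As a minimal sketch, the standard deviation can be computed with Python's standard library; the measurements below are made up for illustration:

```python
import statistics

measurements = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical interval-scale data

mean = statistics.mean(measurements)        # 5
pop_sd = statistics.pstdev(measurements)    # population SD: 2.0
sample_sd = statistics.stdev(measurements)  # sample SD (n-1 denominator), slightly larger
```

A small standard deviation relative to the mean means the individual measurements cluster tightly around it; here, most values fall within 2.0 of the mean of 5.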

Statistical power:

The likelihood that a study will detect an effect when there is a true effect to be detected (Brownson, EBPH).

Strategy:

A way of describing how you are going to get things done; a good strategy takes into account existing barriers and resources (people, money, power, materials, etc.) and fits with the overall vision, mission, and objectives of the intervention. (http://ctb.ku.edu/en/table-of-contents/structure/strategic-planning/develop-strategies/main)

Surveys:

A method of collecting information from a target population by asking a series of questions with pre-identified responses or space to answer an open-ended question.

T

Theme:

A recurring idea or pattern identified across qualitative data sources during analysis.

Theory of change:

A comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.

Time Bound Objective:

Identifies when the objective will be accomplished using a specific and reasonable timeframe.

Timing:

Refers to the frequency at which the data will be collected (e.g., daily, quarterly, or annually); serves as a timeline of data collection for your evaluation.

Traditional report:

Traditional reports tend to be formal and comprehensive in nature and should follow the standard format of reporting in your agency. The document provides a detailed summary of the evaluation goals and objectives, methodology, and findings and is illustrated with facts and figures to showcase the data.

Traffic Records System:

The traffic records system inventory includes reliable state-level data sources that can help decision-makers use data to develop and evaluate engineering, enforcement, education, and emergency medical services safety countermeasures.

Triangulation:

A technique used to analyze diverse sources of data (e.g., surveys, interviews, observation) in order to supplement missing contextual information and corroborate emerging themes.

U

Upstream intervention strategies:

Interventions and strategies that focus on improving fundamental social and economic structures in order to decrease barriers and improve supports, allowing people to achieve their full health potential.

Utility-Focused Approach:

An approach based on the principle that an evaluation should be judged on its usefulness to its intended users.

V

Variable:

A quantitative or qualitative representation of an attribute of a person, place, thing, or idea.

Variance:

A measure of the spread of the values in a distribution; the larger the variance, the larger the distance of the individual cases from the group mean. (US Environmental Protection Agency. (2007). Program Evaluation Glossary. Office of the Administrator/ Office of Policy/ Office of Strategic Environmental Management/ Evaluation Support Division.)
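For illustration (the values are made up), the population variance is the mean of the squared deviations from the group mean, which Python's statistics module computes directly:

```python
import statistics

values = [1, 2, 3, 4, 5]  # hypothetical measurements; the mean is 3

mean = statistics.mean(values)
deviations = [(v - mean) ** 2 for v in values]  # 4, 1, 0, 1, 4
pop_var = statistics.pvariance(values)          # sum(deviations) / n = 10 / 5 = 2
```

Note that the variance is in squared units of the original measurements; taking its square root gives the standard deviation, which is back on the original scale.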

Vision:

A statement that captures the desired end state of your partnership and describes future direction and long-term focus (National Institutes of Health, 2002).

W

X

Y

Z