What is evaluation?
Evaluation can be defined as the systematic appraisal of the success and quality of a project. Success refers to whether the project objectives have been achieved, and quality refers to whether the needs of the stakeholders have been met.

Types of evaluation
Depending on the purpose of the evaluation, a distinction can be made between formative (or process) and summative (or effect) evaluation.

  • Formative evaluation aims to assess initial and ongoing project activities, with a view to improving the work in progress and increasing the likelihood that the project will be successful. It is done at several points during the project implementation, and has several components:
    • needs assessment determines who needs the project, what needs they have, and what sort of activities can answer these needs;
    • evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness;
    • implementation evaluation aims to assess whether the project is being conducted as planned, starting from the idea that effects can only be evaluated if the project and its components are operating according to the proposed plan;
    • progress evaluation aims to assess the progress towards meeting the project objectives; it involves collecting information to see if milestones were met and to identify unexpected developments.
  • Summative evaluation aims to assess the quality and impact of a fully implemented project, and to verify if the project has reached its stated goals. Summative evaluation also has several components:
    • outcome evaluation investigates whether the project resulted in demonstrable effects on specifically defined outcomes.
    • impact evaluation assesses the overall effects (intended or unintended) of the project, including longer term effects.
    • cost-effectiveness and cost-benefit analysis address questions of efficiency by comparing outcomes to the costs of the project.

Steps in the evaluation process

Identify key evaluation points
The first step in an evaluation is to identify the key points that need to be considered for the evaluation. These points can be identified on the basis of the conceptual model underlying the project, and the stakeholders’ views on what kind of evaluation is necessary.

  • A good way to capture the conceptual model underlying the project is to use a logic model representing the process from inputs to long-term outcomes. Such a model creates a common understanding about the project’s structure, connections, and expected outcomes, and can help to focus the evaluation on the most critical elements. In developing a conceptual model, it may be useful to “work backwards,” starting from the desired outcomes and then determining critical conditions or events that will need to be established for these outcomes to occur.
  • Key evaluation points should incorporate the stakeholders’ views, and focus on what they want to know. This may be assessed directly by asking the stakeholders which questions they would like to see answered for the evaluation, as part of an "evaluability assessment".

Formulate evaluation questions, indicators and targets
The key evaluation points form the basis for formulating evaluation questions, relating to the quality of the implementation (process evaluation) and the success of the project (effect evaluation).

  • Process evaluation questions should be linked to the planning and organization of the project activities, and focus on whether the activities are implemented according to plan, how obstacles and difficulties will be identified and dealt with during the implementation, and how the quality of the project implementation will be assured.
  • Effect evaluation questions should be linked to the specific objectives, and verify if the stated objectives have been achieved.
  • Next, indicators need to be formulated to quantify the evaluation questions:
    • process indicators verify the accuracy and timeliness of the steps foreseen for the project implementation;
    • performance indicators relate to the level of participation in the project, user satisfaction, efficiency, take-up, etc.;
    • effect indicators relate to the achievement of the objectives. If the objectives have been formulated as SMART (specific, measurable, achievable, realistic, timed) objectives, one or more variables can be specified for each objective to measure the level of its achievement.

Evaluation indicators are variables that should be easy to measure, objective, valid, reliable and repeatable. If possible, target values should be specified (e.g., numbers expected, level of quality aimed for) to serve as a standard against which the process or results of the project can be compared.
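As a minimal illustration (the indicator names and target values below are hypothetical, not taken from the text), indicators and their targets can be recorded in a simple structure so that observed values can later be compared against the targets:

```python
# A minimal sketch of recording evaluation indicators with target values.
# The indicator names and figures are hypothetical examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    target: float                     # the standard to compare against
    observed: Optional[float] = None  # filled in once data have been collected

    def target_met(self) -> bool:
        return self.observed is not None and self.observed >= self.target

indicators = [
    Indicator("training sessions delivered", target=12),
    Indicator("participant satisfaction (1-5 scale)", target=4.0),
    Indicator("participants reached", target=200),
]

# After data collection, fill in observed values and report progress.
indicators[0].observed = 11
indicators[1].observed = 4.3
for ind in indicators:
    status = "target met" if ind.target_met() else "target not met (or no data yet)"
    print(f"{ind.name}: target {ind.target}, observed {ind.observed} -> {status}")
```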

Select an evaluation design
Once the evaluation questions have been formulated and indicators and target values defined, a design can be selected for the evaluation study. Issues to be considered are:

  • Longitudinal or cross-sectional design. In a longitudinal study, data are collected from the same individuals at different time intervals (e.g., before and after the intervention). In a cross-sectional study, new samples are drawn for each successive data collection. Longitudinal designs are preferred for methodological reasons, but often pose practical problems, such as linking individuals’ responses over time or the loss of respondents. Cross-sectional designs may therefore be a valuable alternative. Another methodological choice is whether to use comparison groups to ascertain if the outcomes can be attributed to the intervention.
  • Sampling. While a large sample will reduce sampling error (i.e., the probability that different results would be obtained with a different sample), the validity of evaluations is also threatened by sample bias (i.e., bias due to loss of sample units) and response bias (i.e., responses or observations that do not reflect “true” outputs). Evaluators should give priority to procedures that will reduce these sources of bias, rather than selecting larger samples; the sketch after this list illustrates how slowly sampling error decreases as the sample grows.
  • Methods for data collection. A choice must be made between quantitative methods (e.g., questionnaires and surveys, records, web logs, counting), qualitative methods (e.g., open interviews, focus groups, observations or expert opinions), or a combination of both. The adequacy of these methods does not depend on the methods themselves, but on whether they will answer the evaluation questions and match the context of the project implementation and the expectations of the target group and stakeholders.
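To make the sampling trade-off concrete, the sketch below uses the standard margin-of-error formula for a proportion under simple random sampling (the sample sizes shown are hypothetical). Sampling error shrinks only with the square root of the sample size, which is why reducing sample and response bias usually pays off more than simply adding respondents:

```python
# A rough sketch of sampling error for an estimated proportion under simple
# random sampling; real evaluation designs may need more careful treatment.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 400, 1000):
    print(f"n = {n:5d}: margin of error = +/- {margin_of_error(n):.3f}")

# Quadrupling the sample only halves the sampling error, whereas a biased
# sample stays biased no matter how large it gets.
```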

Collect data
Once the evaluation design has been determined, the information must be collected. Both technical and political issues need to be addressed.

  • Before data can be collected, the necessary clearances and permissions must be obtained. It is important to find out what the procedures are for data collection in the organization(s) involved, and to address them as soon as possible. Cooperation may be enhanced by offering to give information to the participants on the project outcomes.
  • Needs and sensitivities of the participants must be considered. Participants should be clearly informed how the results will be used. It is helpful to state explicitly that information will remain anonymous, and that no personal repercussions will result from information presented to the evaluator. If sensitive information needs to be disclosed, even in an anonymous way, informed consent should be obtained.
  • When several people are involved in collecting data, they must be trained to operate in an objective, unbiased manner. Ratings or categorizations of the same event by different assessors can be compared, and inter-rater reliability should be established; the sketch after this list shows one common measure, Cohen's kappa. Supervision may be required to ensure objectivity.
  • To reduce sample bias, efforts must be made to maximize the number of respondents. Non-respondents should be contacted and encouraged to participate, for instance by re-sending surveys, rescheduling interviews, or planning observations on multiple occasions. Reasons for non-response should be investigated, and systematic differences between responders and non-responders should be explored and their impact on the generalizability of findings noted.
  • Evaluation data should be collected in a way that causes as little disruption as possible. Attention should be paid to the schedules and sensitivities of the target group, and approaches may need to be adjusted if necessary.
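One common measure of inter-rater reliability is Cohen's kappa, which corrects the observed agreement between two assessors for the agreement expected by chance. The sketch below computes it for two assessors who categorized the same set of events (the ratings are hypothetical):

```python
# A minimal sketch of inter-rater reliability using Cohen's kappa for two
# assessors who categorized the same events; the ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability that both raters independently pick
    # the same category, summed over all categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # 0.50 here
```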

Analyze data
Once the data are collected, they must be analyzed and interpreted. The type of analysis to be performed will depend on the nature of the data (e.g., qualitative or quantitative), but regardless of the actual analysis the following steps should be followed:

  • Checking of raw data. Before the actual analysis takes place, data should be checked for responses that are out of range or unlikely (e.g., always giving the same answer), with a view to eliminating problematic responses or items from the data set.
  • Preparation for analysis. To prepare the data for analysis, they must be coded and entered (keyed or scanned) in a data set. Quality control of the data set should also be performed.
  • Conduct initial analysis. The next step is to perform the analyses foreseen in the evaluation plan. For the analysis of quantitative data, statistical programs are widely available. For qualitative data, computerized systems for analyzing narrative data are also becoming increasingly available. Most evaluations rely on fairly simple descriptive statistics (e.g., means, frequencies, differences; see the sketch after this list), but where more complex analyses or causal models are required, evaluators will need to use analysis of variance, regression analysis, or structural equation modeling.
  • Conduct additional analyses based on the initial results. The initial analyses will often raise as many questions as they answer. To address these questions, further analyses can be performed. Several iterations of re-analysis cycles may be required, as emerging patterns of data suggest other interesting avenues to explore.
  • Integrate and synthesize findings. The final task is to integrate the separate analyses into an overall framework, drawing conclusions from the data to answer the evaluation questions. As the different data sources may not always yield consistent findings, apparent contradictions should be explained.
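As a small illustration of the first and third steps, the sketch below uses the pandas library on entirely hypothetical survey data to flag out-of-range responses and produce simple descriptive statistics per group:

```python
# A brief sketch of checking raw data and producing descriptive statistics
# with pandas; the variable names and values are hypothetical.
import pandas as pd

# Hypothetical coded survey responses: satisfaction on a 1-5 scale.
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5, 6],
    "group": ["intervention", "intervention", "intervention",
              "comparison", "comparison", "comparison"],
    "satisfaction": [4, 5, 9, 3, 4, 2],   # 9 is out of range
})

# Check raw data: flag responses outside the allowed 1-5 range.
print("Out-of-range responses:\n", df[~df["satisfaction"].between(1, 5)])

# Drop (or recode) problematic records before the actual analysis.
clean = df[df["satisfaction"].between(1, 5)]

# Simple descriptive statistics per group (counts and means).
print(clean.groupby("group")["satisfaction"].agg(["count", "mean"]))
```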

Report evaluation findings
The final step of the evaluation process is to report what has been found. This requires pulling together the data collected, distilling the findings in light of the evaluation questions, and disseminating the results. An evaluation report typically includes sections on the background, evaluation questions, evaluation design and methods, data analysis, findings, and conclusions. This information needs to be provided in a manner and style that is appropriate, appealing, and compelling to the target audience. Different reports may have to be provided for the different audiences, and it may be necessary to add other methods of communicating findings, such as presentations or web-based documents, to complement the evaluation report.

Practical issues

Staff skills
Planning an evaluation, selecting an evaluation design, and collecting, analyzing, and interpreting data require specific knowledge and skills. When these are not available within the project organization, the evaluation can be outsourced to an external evaluator. Outsourcing has pros and cons: while it is likely to enhance the quality and objectivity of the evaluation, add to the project's status, and take away the practical burden of carrying out the evaluation, it also reduces ownership of the evaluation results, may give rise to conflicts over priorities, and reduces the opportunity to learn from the project. Whether or not the evaluation should be done by experts depends on its scope. Small-scale evaluations focusing on formative aspects of a project can mostly be undertaken by organizations themselves. Large-scale, complex evaluation designs, on the other hand, require more expertise to design the study, select the instruments, and manage the data collection and analysis.

Budget
Project evaluation can be costly, particularly if it aims to capture various aspects of both the process and outcomes of the project. Evaluation should therefore be incorporated in the project’s budget in a way that makes the evaluation study realistic, manageable, efficient, and productive.

Timing
It is a common mistake to assume that evaluation takes place only at the end of a project. Although the effects of a project are usually achieved at the end, evaluation must be planned from the outset and conducted throughout the project lifetime. The scope, complexity and quality of the evaluation design will affect the time needed for data collection and analysis. It is important to plan enough time for the evaluation, taking into consideration the requirements of the methods envisaged. A survey requires considerable time to create and pretest questions and to obtain high response rates. Qualitative methods may be time consuming because data collection and analysis overlap, and the analysis gives rise to new evaluation questions. If insufficient time is allowed for evaluation, it may be necessary to curtail the amount of data collected or to cut short the analytic process, thereby limiting the value of the findings. For evaluations that operate under severe time constraints (for example, where budgetary decisions depend on the findings), choosing the best method can present a serious dilemma.


Source: original news by the European Executive Agency for Health and Consumers

more info: 
- Project Management Infokit. Jisc Infonet. www.jiscinfonet.ac.uk
- Wholey JS, Hatry HP, Newcomer KE (Eds) (2004). Handbook of Practical Program Evaluation (2nd Ed). San Francisco: Jossey-Bass.
- Hughes J, Nieuwenhuis L (Eds) (2005). A Project Manager's Guide to Evaluation. Evaluate Europe Handbook Series. Bremen: ITB Institute Technology and Education.
- National Science Foundation (2002). The User Friendly Handbook for Project Evaluation. Washington: NSF, Directorate for Education and Human Resources.
- Zarinpoush, F. (2006). Project evaluation guide for non profit organisations. Fundamental methods and steps for conducting project evaluation. Ottawa: Imagine Canada.