Types of Evaluation
Process evaluation
Rossi, Lipsey, and Freeman (2004) define process evaluation as the type of evaluation that focuses on how a program was implemented and how it is operating. It identifies the procedures used and the decisions made in developing the program. Process evaluation describes the way a program operates, the services it offers, and the functions it fulfills. It uses empirical data to assess the delivery of the implemented program, verifying what the program is meant to do and whether it is implemented as intended. Process evaluation is important in a number of ways, listed below:
It is useful in determining the degree to which a given program is implemented according to plan.
It is useful in assessing and documenting the degree of fidelity and variability in a program's implementation, whether expected or unexpected, planned or unplanned.
It is used to compare fidelity across multiple sites.
It provides evidence of the relationship between the intervention used and its outcomes.
It provides information about which components of a given intervention can bring about the outcomes.
It helps one understand how program context (for instance, setting characteristics) and program processes (for instance, levels of implementation) relate to each other.
It provides managers with feedback on the quality of implementation.
It is used to improve the components used in delivery.
It provides program accountability to the public, sponsors, funders, and clients.
It improves the quality of the implemented program, since the act of assessing is itself an intervention.
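The fidelity-assessment idea above can be illustrated with a minimal sketch (the component names are hypothetical, not from the source): comparing the components a site actually delivered against the implementation plan yields a simple fidelity score, and differences across sites expose variability.

```python
# Hypothetical sketch: fidelity as the share of planned components delivered.
planned = {"intake screening", "weekly counseling", "follow-up call"}

def fidelity(delivered):
    """Proportion of planned program components actually delivered at a site."""
    return len(planned & set(delivered)) / len(planned)

site_a = ["intake screening", "weekly counseling", "follow-up call"]
site_b = ["intake screening", "extra workshop"]  # unplanned variation

print(fidelity(site_a))  # 1.0 -- full fidelity to the plan
print(fidelity(site_b))  # lower score flags a deviation worth documenting
```

A real process evaluation would of course weight components and track them over time; this only shows the comparison-to-plan idea in the smallest possible form.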
Once the program components have been identified, it is important to create a logic model that graphically depicts the relationship between the components of a program and their expected outcomes. The logic model can be defined as a set of statements that link the problems a program is trying to address, the manner in which it will address them, and the immediate and intermediate results (Rossi et al., 2004). The logic model is important in a number of ways.
It is useful in developing clarity about a program or project that is to be implemented.
It is useful in developing consensus among the people involved.
It is used to point out redundancies and gaps in a given plan.
It is used to identify the main hypothesis of the program to be implemented.
It is used to convey compactly what the program or project is all about.
The logic model can be used during any piece of work to clarify the task being done, why it is done, and the intended outcomes of that work; during program or project planning to ensure that the program or project is complete and logical; during evaluation planning to focus the evaluation; and during project or program implementation to serve as a template for comparison with the actual program and as a filter for deciding whether proposed changes are appropriate (Rossi et al., 2004).
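As an illustration only (the example program and all field names are hypothetical), the chain of linked statements that makes up a logic model can be sketched as a small data structure, and one of its stated uses, pointing out gaps in a plan, becomes a simple completeness check:

```python
# Minimal sketch of a logic model as a set of linked statements:
# problem -> inputs -> activities -> outputs -> immediate/intermediate results.
logic_model = {
    "problem": "high smoking rates among clients",
    "inputs": ["staff", "funding", "counseling space"],
    "activities": ["group counseling", "nicotine-replacement referrals"],
    "outputs": ["number of sessions held", "number of clients counseled"],
    "immediate_results": ["clients set quit dates"],
    "intermediate_results": ["clients abstain for 30 days"],
}

def gaps(model, required=("problem", "activities", "immediate_results")):
    """Return the links missing from the plan -- one use of a logic model."""
    return [key for key in required if not model.get(key)]

print(gaps(logic_model))  # [] -- the chain from problem to results is complete
```

An incomplete plan, say one that names a problem but no activities, would surface its missing links the same way, which is the "redundancies and gaps" use described above.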
Process evaluation is characterized by two formats of data collection: quantitative, archival, or recorded data, which can be managed by a management system or computerized tracking; and qualitative data, which can be gathered in a variety of formats, for example surveys or focus groups.
Outcome evaluation
Schalock (2001) defines outcome evaluation as the type of evaluation that determines whether, and to what extent, program services or activities have accomplished their intended outcomes among the target population. Outcome evaluation usually begins by looking at the basic components of the program. It views programs as systems with inputs, activities or processes, outputs, and outcomes. The inputs are the resources and materials that the program uses to carry out its processes or activities to serve clients, for instance staff, equipment, volunteers, money, and facilities. They are usually easy to identify, and many inputs are common to most programs and organizations. Outcome evaluation is important because it measures changes in the outcomes of implemented programs and establishes whether the intervention causes the observed changes. The most essential decisions in an evaluation that aims to demonstrate causation concern its design, that is, decisions about what will be measured and when the measurements will take place.
Outcome evaluations can be either experimental or quasi-experimental. Experimental evaluations are usually random-assignment studies used to evaluate the total impact of a program or its activities, which allows appropriate conclusions to be drawn about cause and effect. Quasi-experimental evaluations monitor outcomes for a single group over time, or compare the outcomes of individuals receiving services to those of a comparison group, national data, or a similar population (Schalock, 2001).
Activities are the processes that an implemented program carries out for clients in order to meet their needs, for instance counseling, teaching, feeding, clothing, and sheltering. It is important to note that when identifying the activities in a project or program, the focus is still on the program or organization, not yet on actual changes in the client. Outputs are the units of service delivered by the program, for instance the number of individuals sheltered, clothed, counseled, and fed. The number of clients served indicates only how many clients went through the program. Outcomes are the actual impacts on participants during or after the program; for instance, in a smoking cessation program, participants quitting smoking can be an outcome. Outcomes are often expressed in terms of behaviors, skills and knowledge, status, conditions, and values (Schalock, 2001).
Outcome evaluation uses a quantitative approach, typically a randomized controlled trial, a comparison-group design, or a pre-post comparison. A randomized controlled trial assigns clients randomly either to the treatment in question or to a plausible alternative; the members of all groups receive the same pre-treatment and post-treatment assessments. The comparison-group design is much like the randomized design, except that the groups of clients are deliberately rather than randomly chosen. The pre-post design is a more realistic choice for treatment systems or services with insufficient resources or experience; it is less rigorous scientifically, but it can give results useful for program improvement and program accountability.
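The arithmetic behind the pre-post and comparison-group designs described above can be sketched in a few lines (the numbers are invented for illustration, not real data): the pre-post design measures the change in the treated group alone, while adding a comparison group lets one subtract out change that would have happened anyway.

```python
from statistics import mean

# Pre-post design: the same treated group measured before and after the program.
pre  = [12, 15, 11, 14]   # e.g., cigarettes per day before the program
post = [ 6,  9,  5,  8]   # after the program

pre_post_change = mean(post) - mean(pre)
print(pre_post_change)    # -6.0 -- but some of this may not be due to the program

# Comparison-group design: change in a deliberately chosen, untreated group.
comp_pre  = [13, 14, 12, 15]
comp_post = [12, 13, 12, 14]
comparison_change = mean(comp_post) - mean(comp_pre)

# Difference of the two changes: one simple way to net out background change.
adjusted_effect = pre_post_change - comparison_change
print(adjusted_effect)    # -5.25 -- the change beyond what the comparison group shows
```

This difference-of-changes adjustment is only one simple way of using a comparison group; real designs also attend to how comparable the groups are, which is exactly why the randomized design is the more rigorous of the three.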
Outcome evaluations can be done at various points during the development of a program. It is advisable not to conduct outcome evaluations for start-up programs that have not yet attained a fully integrated service-delivery model, nor to conduct an outcome evaluation without first conducting the process evaluation that accompanies it. The findings obtained from outcome evaluations show whether or not the individuals involved are receiving the predicted benefits of the program (Schalock, 2001).
References
Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation. Thousand Oaks: Sage Publications.
Schalock, R. (2001). Outcome-Based Evaluation. New York: Kluwer Academic/Plenum Publishers.