
CCF/SCF Tools Evaluating Training and Technical Assistance

Published: September 12, 2012
Audience:
Strengthening Communities Fund (SCF), Compassion Capital Fund (CCF)
Category:
Guidance, Policies, Procedures, Tools

 Overview

Evaluation processes validate program outcomes.

An outcome is a change in individuals, groups, organizations, systems, or communities that occurs during or after program activities. An outcome answers the question “so what?” So what if you provide an organization with 10 hours of technical assistance on fundraising techniques? Is the organization better able to raise money? Do they actually raise more money now? So what if you train an organization on how to develop a strategic planning process? Can the organization effectively perform the steps involved? Do they actively engage in strategic planning now?

Quantitative and qualitative evaluation measures help to answer this “so what?” question by methodically linking an organization’s actions to client results. Proper evaluation processes and procedures help a training and technical assistance provider answer the questions: What has changed as a result of this program? How has this program made a difference? How are the lives of our clients better as a result of the program?

Keep in mind that logic models and evaluation processes can provide insight regarding your organization’s contribution to positive results. In order to prove direct causation, however, an organization will need to take part in experimental research and a controlled study to link training and technical assistance to results.

Kirkpatrick’s four levels of evaluation provide a framework.

Donald L. Kirkpatrick is a Professor Emeritus at the University of Wisconsin and a former President of the American Society for Training and Development. He is well known throughout the educational and training community for his work in creating a framework for training evaluation.
Kirkpatrick identifies four levels of evaluation.

  • Level 1 - Reaction: The first level of evaluation measures the audience’s opinion of the training or service delivered.
  • Level 2 - Learning: The second level of evaluation measures whether the training or service resulted in a knowledge gain for the recipients.
  • Level 3 - Behavior: The third level of evaluation asks whether individuals actually applied the knowledge they gained in a valuable way.
  • Level 4 - Results: The fourth and final level of evaluation explores return on investment by showing that changes in behavior led to consequent changes in program outcomes.

Each level of evaluation is discussed in more detail in Sections 2-5 of this lesson.

1. Logic Models and Outcome Measurement

Clearly defined outcomes become organizational goals and hypotheses.

Organizations may find it helpful to analyze their activities and outputs through the “if/then” lens. When developing outcomes, an organization should ask itself, “If we provide these activities and outputs, what do we hope will then happen?”

The answer to this question should provide an organization with short-term, intermediate, and long-term outcomes.

Short-term outcomes are those outcomes that will occur while clients are receiving your services, including things like knowledge gain or changes in attitude in the organizations that you work with. Achievement of short-term outcomes can generally be measured using Kirkpatrick’s second level of evaluation.

Intermediate outcomes are those that occur within the client organization itself, including changes in behavior or skill-gain that you expect to result from the training and technical assistance you provided. Achievement of intermediate outcomes is usually measured through tests for learning and observations of changes in behavior, Kirkpatrick’s second and third levels of evaluation.

Long-term or end outcomes refer to the resulting ability of a client organization to operate more efficiently and effectively by serving more people, or becoming more sustainable in accomplishing its larger purpose. Achievement of long-term outcomes can be measured through Kirkpatrick’s fourth level of evaluation.

Logic models document relationships.

While not all logic models look the same, they all serve the same purpose: to graphically capture the assumptions and cause-and-effect relationships that drive your organization’s work on a project.
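As an illustrative sketch only (not an official template), the same if/then chain can also be written down as a simple data structure; the program, inputs, and outcomes below are hypothetical:

    # Python sketch: the elements of a logic model as plain data (all example content is hypothetical)
    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        inputs: list = field(default_factory=list)                  # resources invested
        activities: list = field(default_factory=list)              # what the program does
        outputs: list = field(default_factory=list)                 # direct products of activities
        short_term_outcomes: list = field(default_factory=list)     # knowledge and attitude changes
        intermediate_outcomes: list = field(default_factory=list)   # behavior and skill changes
        long_term_outcomes: list = field(default_factory=list)      # organizational results

    fundraising_tta = LogicModel(
        inputs=["two trainers", "curriculum", "grant funds"],
        activities=["deliver 10 hours of fundraising technical assistance"],
        outputs=["10 contact hours", "15 organizations served"],
        short_term_outcomes=["staff can describe three new fundraising techniques"],
        intermediate_outcomes=["organization adopts an annual fundraising plan"],
        long_term_outcomes=["organization raises more unrestricted revenue"],
    )

Reading the fields from top to bottom traces the same if/then logic: if we invest these inputs and carry out these activities, then we expect these outputs and, ultimately, these outcomes.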


Build from a foundation of data.

Experienced training and technical assistance providers know that in order to prove the effectiveness of their services, they must incorporate evaluation into all that they do and build off a foundation of data collection. Organizations may decide to collect this information through in-person or online surveys, or through site visits to client organizations.

Conducting regular surveys and needs assessments with your client population can help you to determine client demographics, experience, training and technical assistance needs, motivations, job satisfaction levels, and baseline performance.

While these surveys are incredibly helpful in identifying which training and technical assistance opportunities would most benefit the client, they also offer long-term value, providing points of comparison that your organization can reference throughout the evaluation process.

Site visits can also present training and technical assistance providers with important insight into how client organizations are performing and operating. Site visits can be an excellent source of qualitative information, most of which is not easily conveyed through surveys.

2. Evaluating Reaction

Make the most out of your surveys.

The length and type of level 1 survey will often depend on the length and type of training or technical assistance delivered. Regardless of the format, organizations should try to ensure that 100% of participants respond, that participants remain anonymous, and that results are quantifiable yet still allow for comments and written feedback.

There are a number of web-based survey applications, including Zoomerang, SurveyMonkey, and SurveyGizmo, that organizations can use to create and distribute electronic surveys. Each application offers several editions, allowing you to compare functionality and choose a plan and price point that works for your organization. If you are unable to invest financially in a survey tool, check out the free versions of Zoomerang and SurveyMonkey.

Both in-person and electronic surveys can also be used to evaluate technical assistance offerings. Whether technical assistance takes place over the phone, via email, or in person, organizations should be prepared to deploy a survey asking whether the individual providing the technical assistance was helpful and whether the client’s questions were answered.

Develop performance measures and keep high standards.

Performance measures are the data points that support the achievement of a larger outcome or goal. At initial stages of evaluation, performance measures are usually easy to identify, as they relate directly to organizational outputs. When formulating performance measures, an organization should ask, “How do we know we’ve been successful?”

For example, suppose your organization identifies fifty hours of training as one of its outputs. To assess whether you’ve successfully delivered this output, you might collect a series of performance measures, including attendance rates, contact hours, and participant level 1 surveys that capture opinions regarding the usefulness of the training.

Acceptable quality levels (AQLs) are the quantifiable standards that your organization has set for its own performance measures. For instance, your organization might say that, in order to be considered a successful training event, 100% of all registered participants must attend the training and 90% of training participants must agree that they would recommend the training to a coworker.

The development of AQLs should be a collaborative process, involving all those that play a role in implementing training or technical assistance events. After you have developed a level 1 survey tool and AQLs, you can begin to tabulate results and measure them against your organization’s standards of performance.
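As a minimal sketch only, assuming hypothetical AQLs of 100% attendance and a 90% recommendation rate, tabulating level 1 results against those standards might look like this in Python:

    # Python sketch: compare level 1 survey results against hypothetical AQLs
    def check_aqls(registered, attended, responses):
        """responses: list of dicts such as {"would_recommend": True}"""
        attendance_rate = attended / registered
        recommend_rate = sum(1 for r in responses if r["would_recommend"]) / len(responses)
        return {
            "attendance_rate": attendance_rate,
            "recommend_rate": recommend_rate,
            "attendance_aql_met": attendance_rate >= 1.00,  # AQL: 100% of registrants attend
            "recommend_aql_met": recommend_rate >= 0.90,    # AQL: 90% would recommend the training
        }

    # Hypothetical event: 20 registered, 19 attended, 18 of 19 would recommend
    responses = [{"would_recommend": True}] * 18 + [{"would_recommend": False}]
    print(check_aqls(registered=20, attended=19, responses=responses))

In this hypothetical case the recommendation AQL is met (about 95%) but the attendance AQL is not, which tells you exactly where to focus improvement efforts.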

3. Evaluating for Learning

Document level 2 gains with pre- and post-tests.

In order to prove that your clients have gained new knowledge, skills, or attitudes as a result of your training or technical assistance, your organization will need to be able to quantify those gains using performance measures. Pre-tests or pre-event surveys can help to capture your clients’ baseline understanding or knowledge of the training and technical assistance subject area.

Just like with level 1 surveys, level 2 pre- and post-tests should be developed in a consistent manner, so that you can easily compare the two and identify the impact of your training or technical assistance.

Pre-tests or surveys can also be very informative, as they help identify a client’s strengths and weaknesses and highlight areas that your staff should focus on in more detail, or spend less time on, depending on the client’s skill level and knowledge.
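As a purely hypothetical illustration (the client names and 10-point scores below are invented), the arithmetic behind a level 2 comparison is simply the difference between each participant’s pre- and post-test scores:

    # Python sketch: quantifying knowledge gain from hypothetical pre- and post-test scores
    pre_scores = {"org_a": 4, "org_b": 6, "org_c": 5}    # scores out of 10 before the training
    post_scores = {"org_a": 8, "org_b": 7, "org_c": 9}   # scores out of 10 after the training

    gains = {org: post_scores[org] - pre_scores[org] for org in pre_scores}
    average_gain = sum(gains.values()) / len(gains)

    print(gains)                                          # per-client knowledge gain
    print(f"Average gain: {average_gain:.1f} points out of 10")

Reporting both the per-client gains and the average keeps the result tied to individual organizations rather than a single aggregate number.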

Develop level 2 evaluations that are relevant to the learning content.

Depending on the objectives of your training or technical assistance (TTA) event, you may find it useful to use a variety of methods to evaluate clients’ learning. Learning evaluations will vary depending on whether the TTA event is designed to increase participants’ knowledge, improve their skills, or change their attitudes.

Level 2 evaluations can include written or electronic tests or surveys, presentations, essays, or small projects. For longer or more dynamic training events, a combination of these elements might be more appropriate.

4. Evaluating Behavior

Client interviews reveal behavioral changes.

In order to effectively evaluate for changes in behavior, you will need to reconnect with training and technical assistance participants. Whether you reach out via electronic survey, email, telephone, or in-person interview, you will be looking to answer the same set of questions:

  • What did you learn that you were excited to try to implement at your organization?
  • How eager were you to implement these new changes?
  • Were you able to successfully implement these changes? Why or why not?
  • How do you plan to do things differently in the future?

Your organization may also find it beneficial to interview client staff members who regularly interact with the individual who took part in the TTA event. These interviewees may include colleagues, supervisors, or subordinates – anyone who might be able to provide insight into the individual’s behavior. When interviewing client staff members, TTA providers should ask whether the individual left the training or technical assistance event energized and excited to make positive changes, whether the individual actually made a change, and whether this change was well received and sustainable within the client’s overall organizational climate.

Gain perspective through pre- and post-tests of behavior.

Just as with level 2 evaluations, level 3 evaluations are often more informative when organizations evaluate behavior both before and after a training or technical assistance event. These pre- and post-tests or surveys provide insight into how your clients have historically performed certain processes and procedures, and how new knowledge, skills, or attitudes have impacted or changed how those processes and procedures are performed.

Level 3 evaluations require patience.

It takes time to observe how learning impacts behavior. Because of this, your organization will need to review the content and objectives of your training and technical assistance efforts and decide on a reasonable length of time that gives your clients an opportunity to put their new knowledge or skills to work. You will also want to give your clients sufficient time to consider these behavioral changes and form an opinion as to whether the changes were positive and sustainable.

5. Evaluating for Results

Evaluate long-term outcomes and identify results.

Outcomes are the desired measurable changes in efficiency or effectiveness that are meaningful to the client. In the early stages of developing a training or technical assistance program, outcomes become goals or hypotheses as to the impact you hope to have on your client. When evaluating for results, an organization should revisit the long-term outcomes identified in their logic model, and consider ways to evaluate these outcomes.

Performance measures are the data points that support the achievement of a larger outcome. While an outcome generally represents a larger goal or aim for the organization, performance measures are the concrete factors that are assumed to quantitatively measure the established outcome.

Level 4 evaluations do not exist in a vacuum.

Level 4 evaluations are compelling. Evaluating for results helps to affirm that your organization’s efforts were well spent, that your clients came away with meaningful knowledge that motivated them to change their behavior, and that this behavior change led to improvements in the way they do business.

Because results are so compelling, it is important that your organization be able to show the link between your services and each level of client evaluation. If your organization has not taken the time to make level 2 and level 3 evaluations a priority, it will be hard to make the case that your client’s successes can be attributed to the training and technical assistance opportunities you provided.

While all evaluation processes require you to make assumptions, effective evaluation of all four levels will make it far easier to correlate the activities and outputs your organization provides with the positive results of your clients. Keep in mind that correlation does not imply direct causation. In order to prove direct causation, an organization will need to take part in experimental research and a controlled study to link training and technical assistance to results.
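If your organization does collect paired data across clients, one simple way to examine such a correlation is a Pearson coefficient. The Python sketch below uses invented figures for TTA contact hours and a client outcome measure (percent change in funds raised), and, as noted above, it can only suggest an association:

    # Python sketch: correlating hypothetical TTA hours with a client outcome measure
    from statistics import correlation  # available in Python 3.10+

    tta_hours = [5, 10, 15, 20, 25]             # hours of TTA each client received (hypothetical)
    revenue_change = [2.0, 4.5, 5.0, 9.0, 8.5]  # percent change in funds raised (hypothetical)

    r = correlation(tta_hours, revenue_change)  # Pearson's r; does not establish causation
    print(f"Correlation between TTA hours and revenue change: {r:.2f}")

A high coefficient strengthens the story your level 1 through level 4 evaluations tell, but only a controlled study can demonstrate that the training itself caused the change.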

 Summary

Let improvement drive your evaluation process.

Effective training and technical assistance organizations develop cultures of continuous improvement and are always striving to make their offerings more convenient and relevant. Kirkpatrick’s four levels of evaluation can help your organization to identify both small and large changes that, when implemented, can significantly impact the quality of the services you provide.

Whether evaluation results are positive or negative, they can help your organization to fine-tune training and technical assistance processes. To keep this drive for improvement at the forefront, end all evaluations with some variation of the question, “How can we make this program more helpful?”

Consider cost versus benefits.

When crafting an evaluation plan, an organization should always consider costs versus benefits.  Consider the who, what, when, and how of your evaluation plan.

  • Who -- Who from your staff will conduct evaluations? Will you hire an outside consultant or use internal staff? How much does this person’s time cost?
  • What -- What are you evaluating? What sort of level of effort is required to evaluate for reaction, learning, behavior, and results? Can you incorporate these evaluations into your organization’s pre-existing training and technical assistance offerings?
  • When -- Does the project have a set timeline and budget? How can you work within these parameters, but still collect relevant data?
  • How -- How will you collect evaluation data? Will you need to purchase new survey software? Will you encounter travel costs in order to conduct on-site observations with clients?

Consider the size and projected impact of the training and technical assistance you are providing, and develop a complementary evaluation plan that works within your available resources.