
CCF/SCF Tools Creating and Implementing a Data Collection Plan

Published: September 6, 2012
Audience:
Strengthening Communities Fund (SCF), Compassion Capital Fund (CCF)
Category:
Guidance, Policies, Procedures, Tools

 Overview

 

 

Data collection happens before analysis and reporting.

Valid and reliable data are the backbone of program analysis.  Collecting those data, however, is just one step in the larger process of measuring outcomes.  The five steps are:
  1. Identify outcomes and develop performance measures.
  2. Create and implement a data collection plan (discussed in this lesson).
  3. Analyze the data.
  4. Communicate the results.
  5. Reflect, learn, and do it again.

This lesson will illustrate effective options and techniques for data collection.

At the end of this lesson, you will be able to plan for and implement data collection for a specific program; identify the most appropriate and useful data collection methods for your purposes; and manage and ensure the integrity of the data you collect.

 

 1. Data Collection Methods

 

Surveys are standardized written instruments that can be administered by mail, email, or in person.

The primary advantage of surveys is their cost in relation to the amount of data you can collect. Surveying generally is considered efficient because you can include large numbers of people at a relatively low cost. There are two key disadvantages. First, if the survey is conducted by mail, response rates can be very low, jeopardizing the validity of the data collected. There are mechanisms to increase response rates, but they add to the cost of the survey; we will discuss tips for boosting response rates later in this lesson. Second, written surveys give respondents no opportunity to ask for clarification of a confusing question. Thorough pre-testing of the survey can reduce the likelihood that such problems will arise.

Here are some examples of ways to use surveys:          

  • Track grassroots organizations’ use of and satisfaction with technical assistance services you provide.
  • Survey all organizations receiving technical assistance to learn about changes in their fundraising tactics and the results of their efforts to raise more money.

You can download the “Technical Assistance Survey Template” and adapt it for use in your program evaluation.

Interviews are more in-depth, but can be cost-prohibitive.

 

Interviews use standardized instruments but are conducted either in person or over the telephone.  In fact, an interview may use the same instrument created for a written survey, although interviewing generally offers the chance to explore questions more deeply.  You can ask more complex questions in an interview since you have the opportunity to clarify any confusion.  You also can ask the respondents to elaborate on their answers, eliciting more in-depth information than a survey provides.  The primary disadvantage of interviews is their cost.  It takes considerably more time (and therefore costs more money) to conduct telephone and in-person interviews.  Often, this means you collect information from fewer people.  Interview reliability also can be problematic if interviewers are not well-trained.  They may ask questions in different ways or otherwise bias the responses.

Here are some examples of ways to use interviews:

  • Talk to different grassroots organizations to learn about the way in which they are applying new knowledge of partnership development.
  • Interview individuals within an organization to explore their perceptions of changes in capacity and ability to deliver services.

Focus groups are small-group discussions based on a defined area of interest.

While interviews with individuals are meant to solicit data without any influence or bias from the interviewer or other individuals, focus groups are designed to allow participants to discuss the questions and share their opinions.  This means people can influence one another in the process, stimulating memory or debate on an issue.  The advantage of focus groups lies in the richness of the information generated.  The disadvantage is that you can rarely generalize or apply the findings to your entire population of participants or clients.  Focus groups often are used prior to creating a survey to test concepts and wording of questions.  Following a written survey, they are used to explore specific questions or issues more thoroughly.

 

Here are some examples of ways to use focus groups: 

  • Hold a structured meeting with staff in a community-based organization to learn more about their grants management practices, what worked during the year, and what did not.
  • Conduct a discussion with staff from several organizations to explore their use of computer technology for tracking financial data.

Observations can capture behaviors, interactions, events, or physical site conditions.

Observations require well-trained observers who follow detailed guidelines about whom or what to observe, when and for how long, and by what method of recording.  The primary advantage of observation is its validity.  When done well, observation is considered a strong data collection method because it generates firsthand, unbiased information by individuals who have been trained on what to look for and how to record it.  Observation does require time (for development of the observation tool, training of the observers, and data collection), making it one of the costlier methods.

Here are some examples of ways to use observations:

  • Observe individuals participating in training to track the development of their skill in the topic.
  • Observe community meetings sponsored by grassroots organizations to learn about their partnership-building techniques and collaborative behavior.

Record or document review involves systematic data collection from existing records.

 

Internal records available to a capacity builder might include financial documents, monthly reports, activity logs, purchase orders, etc.  The advantage of using records from your organization is the ease of data collection.  The data already exists and no additional effort needs to be made to collect it (assuming the specific data you need is actually available and up-to-date).

If the data is available and timely, record review is a very economical and efficient data collection method.  If not, it is likely well worth the time to make improvements to your data management system so you can rely on internal record review for future outcome measurement work.  Just a few changes to an existing form can turn it into a useful data collection tool.  A small amount of staff training can increase the validity and reliability of internally generated data.

Here are some examples of documents or records from which you can gather data:      

  • Sign-in logs from a series of workshops to track attendance in training, measuring consistency of attendance as an indicator of organizational commitment to learning.
  • Feedback forms completed by workshop participants to learn about satisfaction with training provided.

Official records can include Federal, state, or local government sources such as the U.S. Census Bureau, health departments, law enforcement, school records, assessor data, etc.  If the data is relevant and accessible, then official record review is very low-cost.

 

 2. Validity and Reliability

 

 

Validity is the accuracy of the information generated.

A data collection instrument is valid to the extent that it measures what it is intended to measure and therefore generates accurate information. Confusing or poorly worded questions are a common threat to validity; as with surveys, thorough pre-testing of your instruments reduces the likelihood of such problems.

Reliability refers to consistency.

Reliability can also be thought of as the extent to which data are reproducible. Do items or questions on a survey, for example, repeatedly produce the same response regardless of when the survey is administered or whether the respondents are men or women? Bias in the data collection instrument is a primary threat to reliability and can be reduced by repeated testing and revision of the instrument.

You cannot have a valid instrument if it is not reliable. However, you can have a reliable instrument that is not valid. Think of shooting arrows at a target. Reliability is getting the arrows to land in about the same place each time you shoot. You can do this without hitting the bull’s-eye. Validity is getting the arrow to land on the bull’s-eye. Lots of arrows landing in the bull’s-eye means you have both reliability and validity.
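
To make the idea of consistency concrete, here is a small Python sketch of a test-retest check: the same survey administered to the same respondents at two points in time should produce strongly agreeing scores. The respondent scores are illustrative assumptions, not data from the lesson.

```python
# Hypothetical scores from the same ten respondents on two administrations
# of the same survey. Strong agreement between the passes suggests reliability.
first_pass  = [4, 3, 5, 2, 4, 3, 5, 1, 2, 4]
second_pass = [4, 3, 4, 2, 5, 3, 5, 1, 2, 4]

def pearson(x, y):
    """Plain-Python Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(first_pass, second_pass)
print(f"Test-retest correlation: {r:.2f}")
# A correlation near 1.0 indicates consistent (reliable) responses. As the
# target analogy notes, reliability alone does not prove validity: the
# arrows may cluster tightly but still land off the bull's-eye.
```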

 

 3. Deciding When and How to Collect Data



Consider the most appropriate data collection design for your program.

Here are descriptions for five approaches or designs you are likely to use for your data collection.  You may want to employ more than one type of design.

Design 1:  Post-only Measures

Data are collected once: at the end of the program, service, or activity
Example: Level of participant knowledge on a survey after a training workshop

Design 2: Pre/Post Measures   

Data are collected twice: at the beginning to establish a baseline and at the end of the program
Example: Comparison of an organization’s documented fundraising success before and after receiving technical assistance

Design 3: Time Series   

Data are collected a number of times: during an ongoing program and in follow-up
Example: Monthly observations of an organization’s collaboration meetings to track changes in partnership development and communication

Design 4: Measures with a Comparison Group

Data are collected from two groups: one group that receives the intervention and one that doesn’t      
Example: Comparison of data on skill development from individuals who participated in training and those who have not yet taken your workshop
Note: Comparison groups can be very useful in demonstrating the success of your intervention.  The main question is, can you find a group of people or organizations that is just like the group with whom you are working?  In order to provide a valid comparison, the two groups must have the same general characteristics.  A similar group may be difficult to find.  However, if you are working with different groups at different times, and the groups are similar, this approach may work for you.

Design 5: Measures with a Comparative Standard

Data are collected once: at the end of the program, service, or activity, and are compared with a standard
Example: Comparison of this year’s data on organizations’ success in fundraising versus last year’s data
Note: Comparative standards are standards against which you can measure yourself. There are standards of success in some fields (e.g., health mortality and morbidity rates, student achievement scores, teen birth rates). For intermediaries, however, there are unlikely to be many regarding your program outcomes or indicators. You can, however, compare your results for one time period to an earlier one, as shown in the example above. You collect data for the first time period as your baseline and use it as your standard in the future.
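
To make these designs concrete, here is a minimal Python sketch in the spirit of Design 2: it computes each organization's change from its pre-intervention baseline. All organization names and scores are illustrative assumptions, not data from the lesson.

```python
# Hypothetical pre/post scores (Design 2): a fundraising-capacity assessment
# collected before and after technical assistance.
pre_scores = {"Org A": 2.5, "Org B": 3.0, "Org C": 1.8}
post_scores = {"Org A": 3.4, "Org B": 3.1, "Org C": 2.9}

for org, pre in pre_scores.items():
    post = post_scores[org]
    print(f"{org}: pre={pre:.1f}, post={post:.1f}, change={post - pre:+.1f}")

# The average change across organizations gives a simple summary measure.
avg = sum(post_scores[o] - pre_scores[o] for o in pre_scores) / len(pre_scores)
print(f"Average change: {avg:+.2f}")
```

The same pattern extends to Design 5 by treating an earlier period's figures as the comparative standard.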

Implement data collection procedures.

It will be vital to find reliable and trustworthy people to collect and manage the data. Consider who will collect the data and how you will recruit these people. What steps will they need to take to collect the data? How will you train them? Finally, who will be responsible for monitoring the data collection process to ensure you are getting what you need? It’s important to answer each of these questions during your planning. You don’t want to be surprised halfway through the process to discover your three-month follow-up surveys were not mailed out because you didn’t identify who would do so!
Prepare your clients (FBOs and CBOs) for data collection.

Communicate with the organizations you serve or the program’s staff to inform them of this step in the evaluation process. Make sure they know that you will be collecting data, either at the time of service or in follow-up. Clarify why it is important to you and how you intend to use the data. Organizations often have outcome reporting requirements themselves, so they usually are responsive if they have been alerted to your needs ahead of time. Advising them in advance about your data collection plans will help increase their willingness to participate during implementation.


Protect individuals’ confidentiality and get informed consent.

Anonymous and confidential do not mean the same thing. “Anonymous” means you do not know who provided the responses. “Confidential” means you know or can find out who provided the responses, but you are committed to keeping the information to yourself.

You must ensure that you protect the confidentiality of any individual’s data or comment. It is easy to make your surveys anonymous, but if you want to track people over time, you’ll likely need to attach ID numbers to each person, keeping a list of the names and numbers in a locked file.
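
One way to implement this is to assign each participant a random ID, store responses only under that ID, and keep the name-to-ID linkage in a separate, secured file. Here is a minimal sketch; the file names, fields, and participant list are illustrative assumptions.

```python
import csv
import random

# Hypothetical participant list; in practice this comes from your records.
participants = ["Jane Doe", "John Smith", "Maria Garcia"]

# Assign each person a random, non-identifying ID.
ids = random.sample(range(1000, 10000), len(participants))
linkage = dict(zip(participants, ids))

# The linkage file belongs in a locked/secured location, apart from the data.
with open("linkage_SECURED.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "id"])
    writer.writerows(linkage.items())

# Survey responses are stored by ID only, keeping the data file confidential.
with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "response"])
    for name, pid in linkage.items():
        writer.writerow([pid, ""])  # responses filled in at collection time
```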

It is important to inform people that you are measuring your program’s outcomes and may use data they provide in some way. You must let them know that their participation is voluntary and explain how you will maintain the confidentiality of their data.

 

 4. Strategies for Quality Assurance and Boosting Response Rates



Use these strategies to assure the quality of your data.

Double entry.  This entails setting up a system in which the same data are entered twice and the two entries are then compared for discrepancies.  This can be costly and time-consuming, but it is the most thorough method of quality control.
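
A minimal sketch of the comparison step, assuming both entry passes are saved as CSV files keyed by a record ID (the file and column names are hypothetical):

```python
import csv

def load(path):
    """Load a CSV of records keyed by the 'record_id' column."""
    with open(path, newline="") as f:
        return {row["record_id"]: row for row in csv.DictReader(f)}

entry_one = load("entry_pass_1.csv")
entry_two = load("entry_pass_2.csv")

# Flag any record where the two passes disagree on any field.
for record_id, row_one in entry_one.items():
    row_two = entry_two.get(record_id)
    if row_two is None:
        print(f"{record_id}: missing from second pass")
        continue
    for field, value in row_one.items():
        if row_two.get(field) != value:
            print(f"{record_id}.{field}: '{value}' vs '{row_two.get(field)}'")
```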


Spot checking.  This entails reviewing a random sample of data and comparing it to the source documents for discrepancies or other anomalies.  If discrepancies are found, the first step is to look for patterns: data entered in a particular time period or by a specific staff person; data associated with a particular beneficiary organization; or a specific type of data that is incorrect across many records (for example, all data for additional persons served at an organization formatted as a percentage instead of as a whole number).  The capacity builder may need to review all the data entered, especially if there is no discernible pattern to the errors.
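
A small sketch of drawing the random sample to check, assuming the entered records live in a CSV file (the file and field names are illustrative); the selected records are then compared by hand against the source documents.

```python
import csv
import random

with open("entered_data.csv", newline="") as f:  # hypothetical file name
    records = list(csv.DictReader(f))

# Check roughly 10% of records, with a floor of 5 so small files get reviewed.
sample_size = min(max(5, len(records) // 10), len(records))

for record in random.sample(records, sample_size):
    # Print enough identifying detail to locate the source document.
    print(record.get("record_id"), record)
```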


Sort data to find missing, high, or low values.  If you are using a database or spreadsheet, identifying outliers (data points at either extreme) and missing values is easy, whether through formulas or sorting functions.
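
For example, a short pandas sketch (the column names and values are assumptions) that surfaces missing values and extreme highs and lows in one pass:

```python
import pandas as pd

# Hypothetical reporting data from beneficiary organizations.
df = pd.DataFrame({
    "org": ["A", "B", "C", "D", "E"],
    "persons_served": [120, 95, None, 4500, 110],  # 4500 is a likely outlier
})

print(df["persons_served"].isna().sum(), "missing value(s)")
print(df.sort_values("persons_served").head(2))                    # lowest values
print(df.sort_values("persons_served", ascending=False).head(2))   # highest values
```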


Use automation, such as drop-down menus.  Automating data collection provides a uniform way to report information and makes sorting and analyzing data much easier.  For example, organizations reporting the number of additional persons served will all use the same language to report the outcome, whereas without such automation the language could vary significantly from report to report.  Additionally, more sophisticated forms can pre-populate performance goals from an existing database, which reduces data entry errors made by those filling out the forms.
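
If your organizations report through Excel forms, a drop-down menu can be added with the openpyxl library; here is a minimal sketch, where the sheet layout and the menu choices are illustrative assumptions.

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws["A1"] = "Service category"

# Restrict column A entries to a fixed menu so every report uses the same language.
menu = DataValidation(type="list", formula1='"Training,Coaching,Grants"', allow_blank=True)
ws.add_data_validation(menu)
menu.add("A2:A100")

wb.save("report_form.xlsx")
```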


Format a database to accept only numbers.  Whether organizations are filling out forms directly or your staff is entering data from a handwritten form, formatting your data fields to accept only numbers reduces errors related to typos.
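
The same rule can be enforced when data arrive as text. A tiny sketch (the sample values are hypothetical) that flags non-numeric entries instead of silently accepting them:

```python
raw_values = ["120", "95", "12O", "110"]  # "12O" contains a letter O typo

clean, errors = [], []
for position, value in enumerate(raw_values):
    try:
        clean.append(int(value))
    except ValueError:
        errors.append((position, value))

print("accepted:", clean)
print("rejected:", errors)  # follow up on these before analysis
```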


Review data for anomalies.  This strategy requires a staff person who is familiar with the organization’s capacity building interventions and has a good eye for detail to review the data collected and identify anomalies.  Some anomalies may not surface through general sorting.


Discuss data discrepancies with the organization.  If, after implementing any of these quality assurance mechanisms, discrepancies remain unexplained, take the data back to the organization for discussion and clarification.

Boosting your response rates can help ensure sufficient and timely data.

It’s important to have response rates that are as high as possible considering your circumstances.  If response rates are low, you may be excluding valuable opinions, feedback, and responses that can help you shape future training and technical assistance programs.  You may also get an inaccurate picture of how your current program is proceeding.  The following strategies can help you increase your response rates.

Tie data collection to project milestones.  Throughout the course of the capacity building relationship, it is relatively simple to require organizations to report desired data.  For example, an evaluation could be due as a requirement for moving on to the next phase of the project, such as releasing funds for a capacity building project or approving a consultant to begin work.   However, once the organization exits the capacity building program, the capacity builder loses this leverage.


Conduct an exit interview once the engagement is complete.  Participation in this interview can be mandated in a memorandum of understanding.  An exit interview is close enough to the intervention that the organization may still be invested in maintaining its relationship with the capacity builder and follow through on the commitment.  However, the organization may not yet have realized all its possible outcomes, so the data may miss ripple effects: outcomes that emerge only after the data have been collected.


 Stay in touch.  By holding monthly meetings or conference calls with organizations after they exit the program, the capacity builder can maintain more informal connections and provide reminders.  The organizations have access to advice and support and may be more likely to participate in a follow-up data collection effort.  Establishing a community of practice among organizations so they have even more reason to be in touch with each other (and you) is one way to implement this strategy.


Provide the outcome data to the organization.  Offer organizations a short, summary report card of the data you collect from them and demonstrate how it can be used as a marketing tool.  This summary can be invaluable to a program and may increase the number of responses you get to your data surveys.  If you can use the merging functions available in software like Microsoft Word and Outlook, generating report cards for tens or even hundreds of organizations may take just a few hours.
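
The same merge idea works in a few lines of Python. Here is a hedged sketch that fills a simple text template for each organization; the template, organization names, and figures are all illustrative.

```python
# Hypothetical outcome data collected from each organization.
results = [
    {"org": "Neighborhood Arts Center", "funds_raised": 15000, "goal": 12000},
    {"org": "Riverside Food Pantry", "funds_raised": 8000, "goal": 10000},
]

template = (
    "Outcome Report Card: {org}\n"
    "Funds raised this year: ${funds_raised:,}\n"
    "Goal: ${goal:,} ({pct:.0f}% of goal)\n"
)

for row in results:
    pct = 100 * row["funds_raised"] / row["goal"]
    report = template.format(pct=pct, **row)
    with open(f"report_card_{row['org'].replace(' ', '_')}.txt", "w") as f:
        f.write(report)
```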


Offer multiple collection methods.  Be available to complete the survey over the phone or in person at the organization’s headquarters.  Be prepared to offer language translation if necessary, administer the survey electronically, or mail it with a stamped return envelope.  The easier it is for an individual to complete the survey, the higher your response rates will be.


Be culturally competent.  Capacity builders often take great care to ensure that training and technical assistance are culturally appropriate; that same care should extend to data collection efforts.  Moreover, if you are engaging a third party to collect data (a consultant or a team of interns, for example), remember that a third party has not had the benefit of getting to know an organization and its staff over the course of the capacity building engagement.  Language barriers, cultural differences, and individual preferences can all influence whether you get a response.


Introduce your external data collectors.  If you are working with third parties, introduce them to the organizations you serve early on.  An established relationship improves response rates; the lack of one hurts them.  As a caveat, be sure to maintain confidentiality about the results, especially if a third party is collecting direct feedback about your services.

 

 Summary



Thank you for taking the time to learn about data collection plans.

You should now have a better understanding of how to plan for and implement data collection for a specific program. Identifying the most appropriate and useful data collection method for your program will help you ensure the integrity of the data you collect.
These resources can offer additional guidance in creating your plan.

Data Collection Plan Worksheet:
This worksheet will help you lay out your data collection plan step by step, from the outcomes to be measured to who will collect and manage data.
Download it here: Data Collection Plan Worksheet

The Outcome Measurement Resource Network (United Way of America):
The Resource Network offers information, downloadable documents, and links to resources related to the identification and measurement of program- and community-level outcomes.
http://www.liveunited.org/outcomes/

Outcome Indicators Project (The Urban Institute):
The Outcome Indicators Project provides a framework for tracking nonprofit performance.  It suggests candidate outcomes and outcome indicators to assist nonprofit organizations that seek to develop new outcome monitoring processes or improve their existing systems.
http://www.urban.org/center/cnp/projects/outcomeindicators.cfm