Building ACF Evidence Capacity with Evaluation and Monitoring 101

May 17, 2021
By Clare DiSalvo and Ciara Bridges

Last month, OPRE held our annual “Evaluation and Monitoring 101” training for our colleagues in the Administration for Children and Families (ACF).

ACF strives to be a learning organization with a culture of continuous improvement. The goal of this training is to strengthen that capacity by helping agency staff better understand how to design, conduct, and use findings from program evaluation and performance monitoring. This mission is in line with our charge under the Foundations for Evidence-Based Policymaking Act of 2018 (the Evidence Act) to improve the use of evidence and data to inform federal policies and programs and to build the capacity of agency staff and program offices to use evaluation research and data analysis to improve agency operations.

The two-day, seven-session training provides an in-depth, hands-on introduction to program evaluation and performance monitoring for ACF staff. Over a hundred employees from across the agency take part each year, including program office leadership, staff who directly oversee ACF grantees, and those who play a behind-the-scenes role in our agency’s work.

Over a dozen OPRE staff design and lead the training sessions, which combine lecture presentations, group activities, and real-world case studies, and cover the following topics:

  • Introduction to OPRE, evaluation, and monitoring
  • How the ACF Evaluation Policy, which states that ACF seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations, guides our work
  • Logic models and research questions
  • Performance measurement
  • Continuous quality improvement
  • Process and implementation evaluations
  • Outcome and impact evaluations
  • Planning and implementing program evaluation and monitoring approaches

Every year, we ask participants to name some of their key takeaways from the training. Here are a few of their responses:

  • “Know what questions you want to answer before deciding what monitoring or evaluation method to use.”
  • “The difference between outputs and outcomes.”
  • “How to develop a logic model and implement performance measures.”
  • “Difference between performance monitoring and program evaluation.”
  • “The importance of having a control group in an impact evaluation.”
  • “We want to know that programs work. That’s why this is important.”

We also ask participants to name one way that they are going to use what they learned in their own work, and we are always gratified to hear their responses, which have included the following:

  • “Incorporate evaluation efforts into strategic planning.”
  • “Plan ahead and think hard about program budgets to conduct monitoring and evaluation.”
  • “I’m going to frame the discussion about data reporting as something that enhances grantee programs, not punitive.”
  • “I’m hoping that we can incorporate these concepts into compliance monitoring and slowly shift the question from ‘are they following policy?’ to ‘how effective are these programs at serving this population and how can we help improve operations?’”

We invite our colleagues in ACF to follow up with us to learn more about the “Evaluation and Monitoring 101” training, or to participate in future trainings! Please don’t hesitate to reach out to us at clare.disalvo@acf.hhs.gov or ciara.bridges@acf.hhs.gov.
