Methods Inquiries, 2013-2021

Project Overview

OPRE plays a central role in advancing and disseminating knowledge about research and evaluation methods and tools that are, or could be, used to assess program and policy effectiveness. The purpose of the Methods Inquiries project is to organize meetings that bring together experts from varied disciplines and policy fields and from academia, government, and the private sector to explore innovations in research design, analytic techniques, and measurement that could advance the government’s use of rigorous research methods. These meetings help ensure that OPRE-supported research continues to reflect the most scientifically advanced approaches to determining the effectiveness and efficiency of ACF programs.

The contractor from 2013 to 2018 was RTI International. The contractor from 2016 to 2021 was Insight Policy Research, in partnership with Applied Engineering Management Corporation.

Visit the OPRE Innovative Methods Meetings website (www.opremethodsmeeting.org) to access the resources and materials from the meetings.

The point of contact is Anna Solmeyer.

  • Understanding Rapid Learning Methods: Frequently Asked Questions and Recommended Resources

    Published: November 19, 2019

    Social service program stakeholders need timely evidence to inform ongoing program decisions. Rapid learning methods, defined here as a set of approaches designed to quickly and/or iteratively test program improvements and evaluate program implementation or impact, can help inform such decisions. However, stakeholders may be unsure which rapid learning methods are most appropriate for a program’s specific challenges and how to best apply the methods...

  • Rapid Learning: Methods to Examine and Improve Social Programs

    Published: October 22, 2019

    Rapid learning methods aim to expedite program improvement and enhance program effectiveness. They use data to test implementation and improvement efforts in as close to real time as possible. Many rapid learning methods leverage iterative cycles of learning, in which evaluators and implementers (and sometimes funders/policymakers) discuss findings, interpret them, and make adaptations to practice and measurement together. These methods can support data-driven decision-making in practice, in the spirit of ongoing improvement.

    On October 25 and 26, 2018, OPRE brought together a diverse group of participants from federal agencies, research firms, academia, and other organizations for a meeting titled Rapid Learning Methods for Testing and Evaluating Change in Social Programs. This brief is based on a presentation at the meeting.
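
    To make the cycle concrete, the sketch below is purely illustrative Python; the outcome metric, thresholds, and decision rule are assumptions for the example, not drawn from the meeting. It shows one way a rapid-cycle test might proceed: measure a baseline, try a program change, and keep or revert it based on the observed gain.

        import random

        # Stylized rapid-cycle learning loop; the outcome metric and the
        # min_gain threshold are hypothetical, for illustration only.
        def run_rapid_cycles(collect_metric, n_cycles=4, min_gain=0.02):
            """collect_metric(variant) returns an outcome rate for 'baseline' or 'change'."""
            baseline = collect_metric("baseline")
            for cycle in range(1, n_cycles + 1):
                observed = collect_metric("change")
                gain = observed - baseline
                if gain >= min_gain:
                    baseline = observed  # adopt the change; later cycles build on it
                    decision = "keep and iterate"
                else:
                    decision = "revert and try a different adaptation"
                print(f"Cycle {cycle}: gain={gain:+.3f} -> {decision}")

        # Simulated data collection standing in for real program measurement:
        run_rapid_cycles(lambda v: 0.50 + (0.03 if v == "change" else 0.0)
                                   + random.uniform(-0.01, 0.01))
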
  • Rapid Learning: Methods for Testing and Evaluating Change in Social Service Programs

    Published: September 19, 2019

    Social service program stakeholders need timely evidence to inform ongoing program decisions. Rapid learning methods, defined here as a set of approaches designed to quickly and/or iteratively test program improvements and evaluate program implementation or impact, can help inform such decisions. However, stakeholders may be unsure which rapid learning methods are most appropriate for a program’s specific challenges and how to best apply the methods. Additionally, they may be unsure how to cultivate a culture of continuous, iterative learning.

  • What are Bayesian Methods? - OPRE in 60 Seconds

    Published: February 11, 2019

Bayesian methods allow researchers to describe findings in probabilistic terms. They also allow researchers to incorporate prior knowledge and to account for uncertainty in parameter estimates.

    Bayesian methods may also help with transparency...
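
    To make this concrete, here is a minimal Python sketch of a Bayesian update for a program success rate; the prior and the outcome counts are hypothetical, not taken from any OPRE study. A Beta prior encodes prior knowledge, and the resulting posterior supports direct probabilistic statements about the parameter.

        from scipy import stats

        # Beta prior on the success probability; all counts are hypothetical.
        prior_a, prior_b = 3, 7        # prior belief centered near a 0.3 success rate
        successes, failures = 18, 22   # observed program outcomes (illustrative)

        # Beta-binomial conjugacy: the posterior is also a Beta distribution.
        posterior = stats.beta(prior_a + successes, prior_b + failures)

        # Probabilistic statements about the parameter, rather than a p-value:
        lo, hi = posterior.ppf([0.025, 0.975])
        print(f"Posterior mean success rate: {posterior.mean():.3f}")
        print(f"P(success rate > 0.4):       {1 - posterior.cdf(0.4):.3f}")
        print(f"95% credible interval:       ({lo:.3f}, {hi:.3f})")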

  • Causal Validity Considerations for Including High Quality Non-Experimental Evidence in Systematic Reviews

    Published: October 25, 2018

Federally funded systematic reviews of research evidence play a central role in efforts to base policy decisions on evidence. Historically, evidence reviews have reserved the highest ratings of quality for studies that employ experimental designs, namely randomized controlled trials (RCTs). However, RCTs are not appropriate for evaluating all intervention programs. To develop an evidence base for those programs, evaluators may need to use non-experimental study designs.
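
    As one illustration of such a design (the choice of inverse-propensity weighting here is an assumption for the example, not a method endorsed by the brief, and the data are simulated), a comparison-group study can reweight cases so that comparison members resemble enrollees:

        import numpy as np

        # Simulated observational data: a covariate x drives both program
        # enrollment and the outcome, so a naive comparison is biased.
        rng = np.random.default_rng(1)
        n = 2000
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-x))                              # enrollment propensity
        treat = rng.binomial(1, p)
        outcome = 1.0 * treat + 0.5 * x + rng.normal(size=n)  # true effect = 1.0

        naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

        # Inverse-propensity weights (propensities are known here; in practice
        # they would be estimated from observed covariates).
        w = np.where(treat == 1, 1 / p, 1 / (1 - p))
        ipw = (np.average(outcome[treat == 1], weights=w[treat == 1])
               - np.average(outcome[treat == 0], weights=w[treat == 0]))

        print(f"Naive difference: {naive:.2f}  |  IPW estimate: {ipw:.2f}")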

  • Bayesian Methods for Social Policy Research and Evaluation

    Published: July 3, 2018

    Probability (p) values are widely used in social science research and evaluation to guide decisions on program and policy changes. However, they have some inherent limitations, sometimes leading to misuse, misinterpretation, or misinformed decisions. Bayesian methods...
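
    As a stylized numerical contrast (the effect estimate and standard error are hypothetical, and the Bayesian side assumes a flat prior with a normal approximation), the same estimate can be summarized either as a p-value or as a posterior probability:

        from scipy import stats

        effect, se = 1.8, 1.0  # hypothetical impact estimate and standard error
        z = effect / se

        # p-value: how surprising the data would be if the true effect were zero.
        p_value = 2 * (1 - stats.norm.cdf(abs(z)))

        # Posterior P(effect > 0 | data) under a flat prior and normal likelihood.
        posterior_prob = stats.norm.cdf(z)

        print(f"Two-sided p-value:       {p_value:.3f}")
        print(f"Posterior P(effect > 0): {posterior_prob:.3f}")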

  • Understanding Bayesian Statistics: Frequently Asked Questions and Recommended Resources

    Published: July 3, 2018

There is growing recognition that p-values have inherent limitations as a guide to decisions about programs and policies. Bayesian methods are emerging as the primary alternative to p-values and offer a number of advantages...

  • Building Strong Evidence in Challenging Contexts: Alternatives to Traditional Randomized Controlled Trials

    Published: February 20, 2018

    In the fall of 2016, OPRE brought together a diverse group of participants from federal agencies, research firms, foundations, and academia to discuss alternatives to randomized controlled trials and their assumptions, trade-offs, benefits, and challenges.

  • Evidence and Equity: Challenges for Research Design

    Published: February 20, 2018

There is growing emphasis on evidence-based interventions, and opportunities to make programmatic decisions based on evidence reflect progress in promoting positive outcomes. However, some populations (e.g., ethnic and cultural minority communities, marginalized groups) may be left behind in efforts to build evidence if they are more difficult to study. Over time, as evidence builds for the populations...

  • Unpacking the “Black Box” of Programs and Policies: A Conceptual Overview of Mediation Analysis

    Published: March 27, 2017

    Policymakers and practitioners have a growing interest in answering questions beyond simply “does a program work?” They are also interested in learning how programs work. Mediation analysis is one tool that researchers can use to identify elements of an intervention that do, or do not, lead to improved participant outcomes. Researchers can use the results of a mediation analysis to build knowledge to improve programs...
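
    To sketch the core computation on simulated data (the variables and effect sizes are hypothetical, not from any specific OPRE analysis), a simple single-mediator analysis fits two regressions and multiplies the treatment-to-mediator path by the mediator-to-outcome path to estimate the indirect effect:

        import numpy as np

        # Simulated data: treatment affects a mediator, which affects the outcome.
        rng = np.random.default_rng(0)
        n = 1000
        treat = rng.integers(0, 2, n)                    # program assignment (0/1)
        mediator = 0.5 * treat + rng.normal(size=n)      # e.g., a targeted skill
        outcome = 0.8 * mediator + 0.2 * treat + rng.normal(size=n)

        # Path a: effect of treatment on the mediator.
        Xa = np.column_stack([np.ones(n), treat])
        a = np.linalg.lstsq(Xa, mediator, rcond=None)[0][1]

        # Path b (mediator -> outcome) and c' (direct effect), jointly estimated.
        Xb = np.column_stack([np.ones(n), treat, mediator])
        coef = np.linalg.lstsq(Xb, outcome, rcond=None)[0]
        c_direct, b = coef[1], coef[2]

        print(f"Indirect effect (a*b): {a * b:.3f}")  # flows through the mediator
        print(f"Direct effect (c'):    {c_direct:.3f}")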

More Reports on this Project