Resource Library

On October 28–29, 2020, the Administration for Children and Families’ Office of Planning, Research, and Evaluation (OPRE) convened a virtual meeting for participants from Federal agencies, research firms, academia, and other organizations to discuss core components approaches.

This summary document highlights key themes and presentations from the virtual meeting.

This brief has two main goals:

  • Describe the features of a well-designed and implemented subgroup analysis that uses a multiple regression framework (see the illustrative sketch after this list).
  • Provide an overview of recent methodological developments and alternative approaches to conducting subgroup analyses.
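
As a hypothetical illustration of the first goal, the Python sketch below fits an ordinary least squares model with a treatment-by-subgroup interaction term, the standard way to test within a multiple regression framework whether a program's impact differs across a pre-specified subgroup. All variable names and simulated values are assumptions for illustration, not drawn from the brief.

    # Hypothetical sketch: subgroup analysis via a treatment-by-subgroup
    # interaction in OLS. Variable names and data are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "treat": rng.integers(0, 2, n),    # 1 = assigned to the program
        "female": rng.integers(0, 2, n),   # pre-specified subgroup indicator
    })
    # Simulate an outcome whose program effect is larger for the subgroup.
    df["outcome"] = (0.2 * df["treat"] + 0.3 * df["treat"] * df["female"]
                     + rng.normal(size=n))

    # "treat * female" expands to treat + female + treat:female; the
    # interaction coefficient estimates how the program's impact differs
    # between the two subgroups.
    model = smf.ols("outcome ~ treat * female", data=df).fit()
    print(model.summary())

The interaction coefficient, rather than the within-subgroup effects alone, is the quantity of interest for a subgroup comparison of this kind.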

The brief builds on a 2009 meeting of experts convened by the Administration for Children and Families’ Office of Planning, Research, and Evaluation and a corresponding 2012 publication in a special issue of Prevention Science (MacKinnon, Supplee, Kelly, & Barofsky, 2012).

“Open science” represents a broad movement to make all phases of research—from design to dissemination—more transparent and accessible. The scientific community and Federal agencies that support research have a growing interest in open science methods. In part this interest stems from highly publicized news stories and journal articles that cast doubt on research credibility...

Rapid learning methods aim to expedite program improvement and enhance program effectiveness. They use data to test implementation and improvement efforts in as close to real-time as possible. Many rapid learning methods leverage iterative cycles of learning, in which evaluators and implementers (and sometimes funders/policymakers) discuss findings, interpret them, and make adaptations to practice and measurement together. These methods can support data-driven decision-making in practice, in the spirit of ongoing improvement.
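
To make these iterative cycles concrete, here is a minimal, hypothetical Python sketch of a decision rule a team might apply at the end of each cycle: compare a small batch of outcomes under a program tweak against current practice, then adopt, abandon, or keep adapting. The data, threshold, and labels are assumptions for illustration, not a procedure prescribed in these materials.

    # Hypothetical rapid learning cycle: test a program tweak on a small
    # batch of outcome data each cycle and make a provisional decision.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def run_cycle(current, tweaked, alpha=0.10):
        """Compare outcomes under the tweak against current practice."""
        t, p = stats.ttest_ind(tweaked, current)
        if p < alpha and t > 0:
            return "adopt"    # evidence the tweak helps
        if p < alpha and t < 0:
            return "abandon"  # evidence the tweak hurts
        return "adapt"        # inconclusive: refine the tweak and re-test

    for cycle in range(3):
        current = rng.normal(loc=0.50, scale=0.15, size=40)  # e.g., engagement
        tweaked = rng.normal(loc=0.58, scale=0.15, size=40)  # after the change
        print(f"cycle {cycle + 1}: {run_cycle(current, tweaked)}")

In practice, each cycle would also include the joint sense-making step described above, with evaluators and implementers interpreting the result together before adapting.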

On October 25 and 26, 2018, OPRE brought together a diverse group of participants from Federal agencies, research firms, academia, and other organizations for a meeting titled Rapid Learning Methods for Testing and Evaluating Change in Social Programs. This brief is based on a presentation at the meeting.

Social service program stakeholders need timely evidence to inform ongoing program decisions. Rapid learning methods, defined here as a set of approaches designed to quickly and/or iteratively test program improvements and evaluate program implementation or impact, can help inform such decisions. However, stakeholders may be unsure which rapid learning methods are most appropriate for a program’s specific challenges and how to best apply the methods. Additionally, they may be unsure how to cultivate a culture of continuous, iterative learning.

For nearly 100 years, the null hypothesis significance testing (NHST) framework has been used to determine which findings are meaningful (Fisher 1925; Neyman and Pearson 1933). Under this framework, findings deemed meaningful are called “statistically significant.” But the meaning of statistical significance is often...
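
As a minimal worked example of the framework, the Python sketch below runs a two-sample t-test on simulated program and comparison outcomes and applies the conventional 0.05 cutoff; all numbers are hypothetical.

    # Hypothetical NHST example: is the program/comparison difference
    # "statistically significant" at the conventional 0.05 level?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    program = rng.normal(loc=10.5, scale=2.0, size=100)     # program group
    comparison = rng.normal(loc=10.0, scale=2.0, size=100)  # comparison group

    t_stat, p_value = stats.ttest_ind(program, comparison)
    # Under NHST, the null hypothesis of no difference is rejected, and the
    # finding labeled "statistically significant," when p falls below the
    # pre-set significance level.
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, "
          f"significant at 0.05: {p_value < 0.05}")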

Federally funded systematic reviews of research evidence play a central role in efforts to base policy decisions on evidence. Historically, evidence reviews have reserved the highest ratings of quality for studies that employ experimental designs, namely randomized controlled trials (RCTs). However, RCTs are not appropriate for evaluating all intervention programs. To develop an evidence base for those programs, evaluators may need to use non-experimental study designs.
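
As one concrete example of such a design, the hypothetical Python sketch below estimates a program effect from simulated observational data using inverse propensity weighting, a common non-experimental approach; the covariates, estimator choice, and true effect size are assumptions for illustration only.

    # Hypothetical non-experimental estimate: inverse propensity weighting
    # (IPW) when participants self-select into a program.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 2000
    x = rng.normal(size=(n, 2))  # observed confounders
    # Participation depends on the confounders (self-selection, no lottery).
    p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
    treated = rng.binomial(1, p_true)
    # The outcome depends on the same confounders plus a true effect of 1.0.
    y = 1.0 * treated + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

    # Model the propensity to participate from observed covariates, then
    # weight each group so it resembles the full population.
    propensity = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
    w = np.where(treated == 1, 1 / propensity, 1 / (1 - propensity))
    effect = (np.average(y[treated == 1], weights=w[treated == 1])
              - np.average(y[treated == 0], weights=w[treated == 0]))
    print(f"weighted effect estimate: {effect:.2f} (true effect: 1.00)")

Unlike an RCT, this estimate is unbiased only if all confounders are observed and the propensity model is adequate, which is precisely why evidence reviews weigh such designs differently.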

Probability (p) values are widely used in social science research and evaluation to guide decisions on program and policy changes. However, they have some inherent limitations, sometimes leading to misuse, misinterpretation, or misinformed decisions. Bayesian methods...
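
To illustrate the contrast, the hypothetical Python sketch below computes both a p-value and a Bayesian posterior probability for the same simulated comparison of completion rates; the counts and the flat Beta(1, 1) priors are assumptions for illustration.

    # Hypothetical contrast: a p-value versus a Bayesian posterior summary
    # for the same two-group comparison of completion rates.
    import numpy as np
    from scipy import stats

    old_success, old_n = 52, 100  # completions under current outreach
    new_success, new_n = 63, 100  # completions under a new outreach approach

    # Frequentist view: p-value from a two-proportion chi-square test.
    table = [[new_success, new_n - new_success],
             [old_success, old_n - old_success]]
    _, p_value, _, _ = stats.chi2_contingency(table)

    # Bayesian view: with independent Beta(1, 1) priors, each posterior is a
    # Beta distribution; simulate to get P(new rate > old rate | data).
    rng = np.random.default_rng(3)
    post_new = rng.beta(1 + new_success, 1 + new_n - new_success, 100_000)
    post_old = rng.beta(1 + old_success, 1 + old_n - old_success, 100_000)
    print(f"p-value: {p_value:.3f}")
    print(f"P(new rate > old rate | data): {(post_new > post_old).mean():.2f}")

The Bayesian summary answers the question decision-makers usually ask (how likely is it that the new approach is better?), which a p-value does not directly address.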