Randomized experiments—in which study participants are randomly assigned to treatment and control groups within sites—give researchers a powerful method for understanding a program’s effectiveness. Once they know the direction (favorable or unfavorable) and magnitude (small or large) of a program’s impact, the next question is why the program produced its effect. Multi-site evaluations offer a chance to “get inside the black box” and explore that question.
This paper considers a new method, called Cross-Site Attributional Model Improved by Calibration to Within-Site Individual Randomization Findings (CAMIC), which seeks to reduce bias in the analyses researchers use to understand which features of a program’s structure and implementation lead its impact to vary.
In a typical multi-site evaluation, researchers first estimate the overall impact of the program free of selection bias or other sources of bias, and then use cross-site analyses to connect program structure (what is offered) and implementation (how it is offered) to the magnitude of the impacts. These cross-site estimates, however, are non-experimental and may be biased.
The CAMIC method takes advantage of randomization of a program component in only some sites to improve estimates of the effects of other program components and implementation features that are not, or cannot be, randomized. The paper describes the method for potential use in the Health Profession Opportunity Grants (HPOG) program evaluation.
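The calibration logic can be illustrated with a small simulation. The sketch below is not the paper’s actual model; it is a hypothetical stylization in which sites self-select into two components, an unobserved site characteristic `u` biases naive cross-site contrasts, and a within-site experimental benchmark for one component is used to estimate and subtract that bias from the other component’s estimate. All variable names and the specific data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of sites

# Hypothetical setup: sites adopt two binary program components, x1 and x2.
# An unobserved site characteristic u drives both adoption and impacts,
# so naive cross-site contrasts are confounded.
u = rng.normal(size=n)
x1 = (u + rng.normal(size=n) > 0).astype(float)
x2 = (u + rng.normal(size=n) > 0).astype(float)

b1, b2 = 1.0, 1.5  # true component effects on site-level impact (assumed)
impact = b1 * x1 + b2 * x2 + u + rng.normal(scale=0.5, size=n)

def naive(x, y):
    """Naive cross-site attributional estimate: mean impact difference
    between sites with and without the component."""
    return y[x == 1].mean() - y[x == 0].mean()

naive1 = naive(x1, impact)
naive2 = naive(x2, impact)

# Suppose x1 was randomized within a subset of sites, yielding an
# (approximately) unbiased experimental estimate of b1. Here we use the
# true value as a stand-in for that within-site experimental finding.
experimental1 = b1

# CAMIC-style calibration (sketch): treat the gap between the cross-site
# and experimental estimates for x1 as the selection bias, and assume the
# same bias afflicts the cross-site estimate for x2.
bias = naive1 - experimental1
calibrated2 = naive2 - bias
```

In this stylized setting the calibration moves the estimate for `x2` closer to its true effect, but only because the selection bias was constructed to be similar across components. When that shared-bias assumption fails, the correction can overshoot and increase bias, which is consistent with the mixed simulation results reported below.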
A simulation analysis of CAMIC shows that the method does not consistently reduce bias and, in some cases, increases it. Nevertheless, we argue that documenting the method in detail is useful, and we urge other researchers to consider settings where it might be applied successfully, to help evaluators learn more about what works.