Addressing Unit Missingness in Social Policy Research: Summary of 2023 OPRE Methods Meeting

Publication Date: April 30, 2024

Introduction

Survey nonresponse can occur for a variety of reasons and can result in bias in survey estimates. Such nonresponse bias arises when the attitudes, characteristics, and experiences of respondents differ systematically from those of nonrespondents. As a result, survey findings may not be representative of the target population’s needs, perspectives, and experiences. However, researchers can act at each step of the study design process to avoid, mitigate, and address the impact of nonresponse.

Purpose

On October 18–19, 2023, OPRE hosted a virtual meeting to address survey nonresponse in social policy survey research. Experts discussed the reasons behind declining survey response rates and the potential for increased nonresponse bias, research design and analysis strategies to reduce nonresponse and mitigate the impact of missing data on resulting estimates, and the use of administrative data sources to supplement and minimize the risk of missing data.

This summary document highlights key themes and presentations from the virtual meeting, which addressed the following questions:

  • Why is nonresponse an important issue in survey research?  

  • How can researchers identify patterns of missing data, including which populations have disproportionate nonresponse rates? 

  • How do missing data affect the quality of survey estimates and what implications does this have for policymaking and programmatic decisions?  

  • How can study and questionnaire designs reduce survey nonresponse?  

  • What data analysis techniques and strategies can researchers use to address missing data in their projects? 

  • What strategies can researchers use to improve outreach to populations that experience barriers to completing surveys? 

Key Findings and Highlights

  • Survey nonresponse can create bias in survey estimates used to inform policy and programmatic decisions. Implementing strategies throughout the research life cycle that mitigate nonresponse can improve data quality.  

  • There are two types of nonresponse in survey research: unit nonresponse is the failure to obtain any survey information from a sampled person, whereas item nonresponse is the failure to obtain a response to a specific survey question. 

  • There are several design features that researchers can use to reduce unit nonresponse in their surveys. These design features include using multiple contact and participation methods, abbreviated questionnaires, and participation incentives. 

  • After data collection, researchers can use survey weighting to address unit nonresponse and data imputation to address item nonresponse.  

  • Response rate is a poor indicator of nonresponse bias. Researchers can use several approaches, such as benchmark estimating, to measure nonresponse bias and adjust estimates accordingly.  

  • Unit missingness has important connections to equity in survey research. Respondent groups affected by inequitable systems and structures may be less likely to complete survey questionnaires. Researchers should work with members of communities of interest to identify potential barriers to response and develop solutions to minimize those challenges for prospective respondents.  
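The post-collection weighting adjustment mentioned above can be sketched with a simple weighting-class approach: sampled units are grouped into cells (here, illustrative age groups), and each respondent's base weight is inflated by the inverse of the response rate in its cell. This is a minimal illustration with made-up data, not the specific method presented at the meeting.

```python
# Weighting-class adjustment for unit nonresponse (hypothetical data).
# Within each cell, respondents' base weights are inflated by the inverse
# of the cell's (weighted) response rate, so respondents "stand in" for
# similar nonrespondents.

from collections import defaultdict

# (cell, responded, base_weight) for each sampled unit -- illustrative only
sample = [
    ("18-34", True, 1.0), ("18-34", False, 1.0), ("18-34", True, 1.0),
    ("35-64", True, 1.0), ("35-64", True, 1.0),
    ("65+", False, 1.0), ("65+", True, 1.0), ("65+", False, 1.0),
]

sampled_w = defaultdict(float)  # total base weight sampled, per cell
resp_w = defaultdict(float)     # total base weight responding, per cell
for cell, responded, w in sample:
    sampled_w[cell] += w
    if responded:
        resp_w[cell] += w

def adjusted_weight(cell, base_weight):
    """Inflate a respondent's weight by the inverse cell response rate."""
    return base_weight * sampled_w[cell] / resp_w[cell]

for cell, responded, w in sample:
    if responded:
        print(cell, round(adjusted_weight(cell, w), 2))
```

In this toy sample, the 18-34 cell has a 2/3 response rate, so each of its respondents receives a weight of 1.5; the sole 65+ respondent receives a weight of 3.0. In practice the cells would be built from auxiliary data known for both respondents and nonrespondents.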

Citation

Mihovich, C., Martin, V., and Wager, H. (2024). Addressing Unit Missingness in Social Policy Research: Summary of 2023 OPRE Methods Meeting, OPRE Report # 2024-092, Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.  

Glossary

Unit nonresponse:
the failure to obtain any survey information from a sampled person.
Item nonresponse:
the failure to obtain a response to a specific survey question.
Response rate:
the proportion of estimated eligible sample members who complete the survey out of the total selected eligible sample.
Nonresponse bias:
the difference between the expected value of the estimate with the observed response rate and the value assuming a 100% response rate.
Nonresponse weighting:
a statistical intervention researchers use after data collection to adjust for nonresponse.
Respondent-driven sampling:
an approach researchers use to recruit hard-to-reach populations by starting with data collection “seeds” who are encouraged to share the survey with other members of their networks.
Benchmark estimating:
comparing survey estimates with estimates from external sources, such as another survey with a high response rate.
Auxiliary data:
information gathered from participants outside of the survey (e.g., screening or follow-up data) that is used to identify differences between respondents and nonrespondents.
Responsive survey design:
a design approach that uses incoming data from the field to implement planned changes in data collection.
Adaptive survey design:
a design approach that uses existing data to create different data collection designs across subgroups.
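The nonresponse bias definition above can be made explicit. Under a simple deterministic (fixed-response) model, a common approximation expresses the bias of the respondent mean as the nonrespondent share of the sample times the respondent–nonrespondent gap (notation here is illustrative, not from the meeting materials):

```latex
\operatorname{Bias}(\bar{y}_r) \approx \left(1 - \frac{n_r}{n}\right)\left(\bar{Y}_r - \bar{Y}_m\right)
```

where $n_r/n$ is the response rate, $\bar{Y}_r$ is the mean among respondents, and $\bar{Y}_m$ is the mean among nonrespondents. This form shows why a low response rate alone does not imply bias: if respondents and nonrespondents do not differ on the outcome ($\bar{Y}_r = \bar{Y}_m$), the bias is near zero even when many sampled units are missing.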