
Introduction
Using nationally representative data from the Head Start Family and Child Experiences Survey (FACES 2019) and the American Indian and Alaska Native Head Start Family and Child Experiences Survey (AIAN FACES 2019), this research brief evaluates the performance of direct cognitive assessments of Head Start children. These assessments are among the measures used to provide a national picture of children's readiness for school. The brief examines the validity of the Minnesota Executive Function Scale App (MEFS App™) for Head Start children. It also examines whether there was any systematic item bias in the latest editions of the cognitive assessments between AIAN children (in AIAN FACES) and White, non-Hispanic children (in FACES).
Research Questions
- How valid is the MEFS App™ as an assessment of executive function for Head Start preschool children? That is, do we find evidence that the MEFS App™ measures executive function among children from families with low incomes?
- Do the latest editions of the cognitive assessments used in AIAN FACES 2019 show any systematic item bias against AIAN preschool children compared with White, non-Hispanic children in FACES?
Purpose
The purpose of this brief is to explore whether the updated direct cognitive assessments in FACES 2019 and AIAN FACES 2019 provide fair estimates of children’s skills and knowledge in the domains being measured.
Key Findings and Highlights
- The MEFS App™ was more strongly correlated with cognitively demanding assessments (receptive vocabulary—which may reflect general cognitive ability—and early math) than it was with letter-word knowledge. However, correlations with expressive vocabulary varied depending on the subset of children who completed the assessment. This initial evidence of concurrent validity is consistent for the MEFS App™ across racial and ethnic groups for children in FACES and AIAN FACES.
- Because information on the different assessments' performance with AIAN preschoolers is limited, we also examined the latest editions of the PPVT–5 and WJ IV Applied Problems and Letter-Word Identification for any evidence of systematic item bias against AIAN children (in AIAN FACES) compared with White, non-Hispanic children (in FACES). There were a few items with potential differences in difficulty for either AIAN or White, non-Hispanic children, but the differences favored the AIAN children for some items and White, non-Hispanic children for other items within the same assessment. As detailed in the brief, our analyses and review of items suggested no systematic item bias against AIAN preschoolers on the PPVT–5 or WJ IV Applied Problems or Letter-Word Identification.
Methods
This brief includes children from two nationally representative samples. For FACES 2019, a sample of Head Start programs in Regions I–X was selected from the 2017–2018 Head Start Program Information Report, with 59 programs, 115 centers, 221 classrooms, and 2,260 children participating in the study in fall 2019. For AIAN FACES 2019, a sample of Region XI Head Start programs was selected from the 2016–2017 Head Start Program Information Report, with 22 programs, 40 centers, 85 classrooms, and 720 children participating in the study in fall 2019.
To answer the first research question, we calculated the correlations between the MEFS App™ standard scores and standard scores on the cognitive assessments to see if related domains showed positive associations. We also ran regression models to test the associations between the MEFS App™ and the cognitive assessments, accounting for certain characteristics of children and families. For these analyses, we examined the scores of 1,586 children in FACES and 466 children in AIAN FACES who completed the assessments in English.
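To make this correlation and regression step concrete, the sketch below shows one way such an analysis could be set up in Python. It is illustrative only: the file name and column names (mefs_std, ppvt_std, wj_applied_std, wj_letterword_std, child_age_months, child_sex, parent_education) are hypothetical placeholders rather than actual FACES or AIAN FACES variables, and it omits the survey weights and complex sample design that analyses of these data would need to address.

```python
# Illustrative sketch only; variable names are hypothetical and survey weights
# and the complex sample design are omitted for brevity.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("faces_2019_child_scores.csv")  # hypothetical analysis file

# Pairwise correlations between the MEFS App standard score and the other
# standard scores, to check whether related domains are positively associated.
outcomes = ["ppvt_std", "wj_applied_std", "wj_letterword_std"]
print(df[["mefs_std"] + outcomes].corr().loc["mefs_std", outcomes])

# Regression of each cognitive score on the MEFS App score, adjusting for
# example child and family characteristics.
for outcome in outcomes:
    model = smf.ols(
        f"{outcome} ~ mefs_std + child_age_months + C(child_sex) + C(parent_education)",
        data=df,
    ).fit()
    print(outcome, model.params["mefs_std"], model.pvalues["mefs_std"])
```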
To answer the second research question, we conducted differential item functioning (DIF) analysis with the PPVT–5 and the WJ IV to examine whether there was systematic bias in the items for AIAN children compared with White, non-Hispanic children (focusing on children with assessments, including 404 from AIAN FACES 2019 and 289 from FACES 2019). DIF analysis evaluates whether children from the two groups who have the same underlying ability have similar probabilities of answering each item correctly.
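The brief does not specify which DIF procedure was used, so the sketch below illustrates one common approach, logistic regression DIF, which regresses each item response on a total-score ability proxy, a group indicator, and their interaction. The file and column names (ppvt5_item_responses.csv, item_01 and so on, total_score, group) are hypothetical placeholders.

```python
# Minimal sketch of one common DIF approach (logistic regression DIF), not
# necessarily the procedure used in the brief. Column names are hypothetical;
# group is coded 1 = AIAN FACES child, 0 = White, non-Hispanic FACES child.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

items = pd.read_csv("ppvt5_item_responses.csv")  # hypothetical item-level file

def logistic_dif(data, item):
    """Likelihood-ratio tests for uniform and non-uniform DIF on one item."""
    # Nested logistic models: ability proxy only, plus group (uniform DIF),
    # plus the ability-by-group interaction (non-uniform DIF).
    base = smf.logit(f"{item} ~ total_score", data=data).fit(disp=False)
    uniform = smf.logit(f"{item} ~ total_score + group", data=data).fit(disp=False)
    nonuniform = smf.logit(f"{item} ~ total_score * group", data=data).fit(disp=False)
    return {
        "item": item,
        "uniform_p": stats.chi2.sf(2 * (uniform.llf - base.llf), df=1),
        "nonuniform_p": stats.chi2.sf(2 * (nonuniform.llf - uniform.llf), df=1),
    }

results = [logistic_dif(items, col) for col in items.columns if col.startswith("item_")]
print(pd.DataFrame(results))
```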
Appendix
| File Type | File Name | File Size |
| --- | --- | --- |
|  | Technical Appendix for Performance of New Cognitive Assessments with Head Start Children: Emerging Evidence from FACES and AIAN FACES 2019 | 733.78 KB |
Citation
Nguyen, T., L. Malone, S. Atkins-Burnett, A. Larson, and J. Cannon. “Performance of New Cognitive Assessments with Head Start Children: Emerging Evidence from FACES and AIAN FACES 2019.” OPRE Report 2022-49, Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation, 2022.
Glossary
- AIAN: American Indian and Alaska Native
- DIF: Differential Item Functioning
- FACES: Head Start Family and Child Experiences Survey
- MEFS App™: Minnesota Executive Function Scale App
- PPVT–5: Peabody Picture Vocabulary Test–5
- WJ IV: Woodcock-Johnson IV Tests of Achievement