Introduction
Research Questions
- What are the measurement properties of teacher reports of children’s approaches to learning and literacy skills?
- How are teacher reports of children’s approaches to learning and literacy skills associated with assessor reports and direct assessments of children’s skills?
- Is there bias in how teachers rate certain groups of children on approaches to learning or literacy skills?
In spring 2020, in response to the COVID-19 pandemic, many early care and education centers, including Head Start centers, closed their physical buildings and shifted to virtual operations. Because of health and safety restrictions, we were unable to directly assess children’s skills in spring 2020 for the Head Start Family and Child Experiences Survey (FACES 2019). However, teachers completed reports about individual children in their classrooms, as has been done in prior rounds of FACES. This research brief uses nationally representative data from FACES 2014 and 2019 to examine whether two teacher-reported scales of children’s (1) approaches to learning and (2) literacy skills have strong measurement properties and validly measure early learning skills in an unbiased way.
Purpose
The purpose of this brief is to understand whether teacher reports of children’s early learning skills can be used when in-person assessment is not feasible, such as in spring 2020 during the COVID-19 pandemic.
Key Findings and Highlights
- A teacher-reported scale of children’s approaches to learning has strong measurement properties. However, this scale is only weakly associated with assessor-reported cognitive/social behavior and directly assessed executive function. Therefore, teacher-reported approaches to learning scores are not an appropriate proxy for these particular skills.
- A teacher-reported scale of children’s literacy skills has strong measurement properties. This scale is moderately to strongly associated with directly assessed language and cognitive skills, suggesting it could serve as a proxy for these skills and offer a way to measure children’s language and cognitive skills when in-person assessment is not feasible.
- There is potential bias in these teacher-reported scales because teacher reports (but not skills measured in the direct child assessment) are associated with some child background characteristics in FACES 2014. In the spring, after accounting for fall scores, English primary home language is associated with lower teacher-reported approaches to learning scores; this association is not found with assessor-reported cognitive/social behavior (attention). Also, being male is associated with lower teacher-reported literacy skills scores in spring, after accounting for fall skills, but this association is not found with a directly assessed language and cognitive skill (letter-word knowledge). Primary home language and child sex should be accounted for when using these teacher reports.
Methods
This brief includes children from two nationally representative samples. For FACES 2014, we selected a sample of Head Start programs from the 2012–2013 Head Start Program Information Report, with 60 programs, 119 centers, 247 classrooms, and 2,462 children. The sample used for this brief included 1,921 children. For FACES 2019, we selected a sample of Head Start programs from the 2017–2018 Head Start Program Information Report, with 59 programs, 115 centers, 221 classrooms, and 2,260 children participating in the study in fall 2019. The sample used for this brief included 1,162 children.
First, we examined the measurement properties of teacher-reported approaches to learning and literacy skills in fall and spring of FACES 2014 and 2019. Next, to evaluate whether the teacher-reported scales are valid assessments of skills usually measured by the direct child assessment, we examined concurrent validity, that is, the correlations between the teacher-reported scales and assessor-reported cognitive/social behavior, directly assessed executive function, and directly assessed cognitive and language skills measured at the same time point. Finally, to examine rater bias, we estimated associations between children’s background characteristics and (1) the teacher-reported scales and (2) the conceptually most similar assessor report and direct assessment in spring 2015, accounting for fall 2014 scores on each measure, to see whether the patterns are similar; it is possible that teachers rate children differently when they have less in-person exposure to them.
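To make the last two steps concrete, the sketch below shows one way the concurrent validity correlations and the rater-bias regressions could be computed in Python with pandas and statsmodels. The file name and variable names (for example, teacher_literacy_spring, letter_word_fall, male, english_home_lang) are hypothetical placeholders, and the sketch omits the survey weights and design adjustments that an analysis of nationally representative data such as FACES would require; it illustrates the analytic logic, not the study’s actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per child, with teacher-reported,
# assessor-reported, and directly assessed scores in fall and spring.
df = pd.read_csv("faces_child_scores.csv")  # placeholder file name

# Concurrent validity: correlate teacher-reported scales with assessor
# reports and direct assessments measured at the same time point.
concurrent = df[[
    "teacher_atl_spring",         # teacher-reported approaches to learning
    "assessor_attention_spring",  # assessor-reported cognitive/social behavior
    "teacher_literacy_spring",    # teacher-reported literacy skills
    "letter_word_spring",         # directly assessed letter-word knowledge
]].corr()
print(concurrent)

# Rater bias: regress spring scores on child background characteristics,
# controlling for the fall score on the same measure, and compare the
# pattern across the teacher-reported and directly assessed outcomes.
teacher_model = smf.ols(
    "teacher_literacy_spring ~ teacher_literacy_fall + male + english_home_lang",
    data=df,
).fit()
direct_model = smf.ols(
    "letter_word_spring ~ letter_word_fall + male + english_home_lang",
    data=df,
).fit()
print(teacher_model.params)
print(direct_model.params)
```

If the background characteristics predict the teacher-reported spring scores but not the directly assessed spring scores (after conditioning on fall scores), that pattern is consistent with rater bias in the teacher reports.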
Appendix
| File Type | File Name | File Size |
|---|---|---|
| | Measuring Head Start Children’s Early Learning Skills Using Teacher Reports During the COVID-19 Pandemic: Technical Report | 1,818.85 KB |
Citation
Harding, J., T. Nguyen, L. Malone, S. Atkins-Burnett, L. Tarullo, and N. Aikens. “Measuring Head Start Children’s Early Learning Skills Using Teacher Reports During the COVID-19 Pandemic.” OPRE Report 2022-13, Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation, 2022.
Nguyen, T., J. Harding, L. Malone, S. Atkins-Burnett, A. Larson, J. Cannon, L. Tarullo, and N. Aikens. “Measuring Head Start Children’s Early Learning Skills Using Teacher Reports During the COVID-19 Pandemic: Technical Report.” OPRE Report 2022-14, Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation, 2022.
Glossary
- COVID-19: Coronavirus disease 2019
- FACES: Head Start Family and Child Experiences Survey
- Strong measurement properties: Scales are reliable, do not have floor or ceiling effects, and show limited rater effects