State and Territory Administrators Meeting (STAM 2013) Monitoring Plenary Highlights Part I

Using Key Data Indicators To Maximize Oversight of Health and Safety in Child Care

Published: May 6, 2014

TRANSCRIPT

Rick Fiene
Research Director, Pennsylvania State University

Let me start off with the methods for achieving quality child care. On the nonregulatory side, we’ve got training of caregivers and directors. Resource and referral centers have really come online; now we’ve got a national organization that is totally devoted to resource and referral centers, and they’re doing wonderful work, and they’re really all part of this overall system for, hopefully, improving quality child care.

Caring for Our Children is something that most people will pick up as their reference document because there are performance standards when it comes to health and safety.

Stepping Stones is really a risk assessment tool of approximately 120 or 130 standards that are gleaned from Caring for Our Children, but they’re the particular standards that place children at greatest risk for mortality and morbidity.

Then, lastly, are the key indicators, and key indicators are very, very different from risk assessment tools. What they do is statistically predict overall compliance with all rules or standards. So they’re different from a risk assessment tool: key indicators are generated from actual licensing or quality rating and improvement system data. They’re not generated through a Likert-type approach, where you rate the risk to children of being out of compliance with a rule; that’s how risk assessment tools are built.
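The statistical idea behind key indicators can be sketched in a few lines of Python. To be clear, this is an illustrative sketch, not the speaker’s actual methodology: the data layout, the 0.7 cutoff, and the function names are all assumptions invented for the example. The core step compares high-compliance versus low-compliance programs against pass/fail on each individual rule in a 2x2 table, and keeps the rules whose phi coefficient (a correlation measure for 2x2 tables) shows they strongly predict overall compliance:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table:
                      rule met   rule violated
    high compliance      a            b
    low compliance       c            d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

def select_key_indicators(visits, threshold=0.7):
    """visits: list of inspection records, each a dict like
       {"overall_high": True, "rules": {"R1": True, "R2": False}}.
       Returns the rules whose individual compliance strongly
       predicts overall compliance (hypothetical 0.7 cutoff)."""
    indicators = []
    for rule in visits[0]["rules"]:
        a = sum(1 for v in visits if v["overall_high"] and v["rules"][rule])
        b = sum(1 for v in visits if v["overall_high"] and not v["rules"][rule])
        c = sum(1 for v in visits if not v["overall_high"] and v["rules"][rule])
        d = sum(1 for v in visits if not v["overall_high"] and not v["rules"][rule])
        if phi_coefficient(a, b, c, d) >= threshold:
            indicators.append(rule)
    return indicators
```

A rule that every high-compliance program passes and every low-compliance program fails gets a phi of 1.0 and becomes a key indicator; a rule unrelated to overall compliance scores near 0 and is dropped.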

So, let me talk about differential monitoring. Key elements—program compliance. When I talk about program compliance, generally, what I’m talking about is a State’s child care licensing health and safety system or, at the national level, Caring for Our Children. When I talk about program quality, generally, this is represented by a State’s quality rating and improvement system or, at the national level, by accreditation, Head Start performance standards, environmental rating scales, or the CLASS.

Risk assessment, as I said previously, is generally represented by a State’s most critical rules, the ones where children are at risk of mortality or morbidity; at the national level, we’re talking about Stepping Stones. Key indicators are generally represented by a State’s abbreviated tool of statistically predictive rules or, at the national level, by the 13 Indicators of Quality Child Care and NACCRRA’s We Can Do Better reports.

Professional development generally is represented by a State’s technical assistance, training, and professional development system for staff. Then lastly, what we’re all about is child outcomes, generally represented by a State’s early learning standards or guidelines.

So what are some of the benefits of differential monitoring? If you don’t do differential monitoring, then essentially what you’re saying is that you’re going to monitor everyone in exactly the same way. Differential monitoring provides a systematic way of tying distinct State systems together into a cost-effective, efficient, unified, valid, and reliable logic model and algorithm.

When risk assessment tools and key indicators are used together, you have a very cost-effective and efficient approach to program monitoring, because what you’re doing is you’re distilling down those particular rules or standards that statistically predict overall compliance while at the same time making sure that you review those particular rules or standards that will place children at greatest risk. So my recommendation is always if you can use the two in conjunction, you use the two in conjunction.

If you use an absolute system rather than a relative or differential monitoring system, there’s no need for abbreviated tools, because you’re going to look at all the rules, and all programs have to be 100% in compliance. There’s absolutely no reason for having risk assessment, and absolutely no reason for having key indicators.

Once I discovered that there was a curvilinear relationship between quality and program compliance, the next logical step was to begin to ask, “Are there particular regulations that are more important than others?” Once you start down that slippery slope, it starts leading you to things like key indicators and risk assessment models and everything.

All right, provider outcomes determine differential monitoring; keep in mind, these are your best providers. This is really for your best programs. So, they’re fully licensed; in other words, they’re in substantial or full compliance. They’re accredited or could potentially be accredited. They’re at the highest star rating. They provide a cost-effective and efficient delivery system. There’s little turnover among staff and the director. They’re fully enrolled. Maybe they’ve got a fund surplus; you know, they’re doing a really good job. On the basis of this, you decide the number of times to visit, what you’re going to review, and the resources you’re going to allocate, OK?

So the flip side is you have providers who are struggling; they’re really having difficulties. Those are the folks you want to really work with. So by saving time with the good providers, where rather than spending a full day you get everything done in 2 or 3 hours, you can reallocate staff time to spend more time with those particular providers who are having difficulty, OK? Differential monitoring is cost-neutral. You’re not going to save money overall; you’re just reallocating resources.
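The allocation logic described above can be sketched as a small Python function. This is a toy illustration only: the field names, the star-rating cutoff, and the hour figures are assumptions invented for the example (the full-day versus 2-to-3-hour contrast comes from the talk, but the exact rule for classifying a provider as strong is not the speaker’s):

```python
def monitoring_plan(provider):
    """Decide visit depth from provider outcomes.
    provider: dict with hypothetical keys:
      'substantial_compliance' (bool), 'accredited' (bool),
      'star_rating' (int, assumed 1-5 scale)."""
    strong = (provider["substantial_compliance"]
              and (provider["accredited"] or provider["star_rating"] >= 4))
    if strong:
        # Best programs: abbreviated visit using key indicators
        # plus the risk assessment items, done in a few hours.
        return ("abbreviated key-indicator review", 3)
    # Struggling programs: full compliance review of all rules, full day.
    return ("full compliance review", 8)
```

The point of the sketch is the reallocation: hours saved on the abbreviated visits are what fund the longer, full reviews for providers who are struggling, which is why the overall budget stays cost-neutral.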

OK, professional development is very, very important. It has to be—and it is—part of the overall differential monitoring model that, you know, I’m suggesting here.

All staff take 24 hours of in-service training per year. Mentoring or coaching of staff occurs. There’s a training or professional development fund for all staff. Hopefully, the professional development, training, and technical assistance system is linked to differential monitoring results, so that the people who need the resources get them, and for the people who are doing a really, really good job, you get out of their way and let them keep doing it.

States are beginning to come online when it comes to looking at outcomes for kids, and we have to tie all of this together. What I’m trying to do is encourage individuals, as they design systems, to make sure that we have the ability to talk across systems. That’s what differential monitoring and the model I’m suggesting are all about: that we can take these disparate systems and put them all together.