SELF-ASSESSMENT CORE WORKGROUP REPORT

MARCH 1998

INTRODUCTION

This report summarizes the activities, processes, and recommendations of the Self-Assessment Core Workgroup (Workgroup), which was convened by the Office of Child Support Enforcement (OCSE) in the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (DHHS). The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) requires the States to develop their own self-assessment capabilities. The purpose of the Workgroup was threefold: to explore the limits of the legislation and determine what criteria States would be required to address in their annual reports; to establish a process or methodology to be used to review the criteria; and to develop a vehicle for reporting the results of these reviews.

A Core Workgroup, which consisted of 24 representatives of State IV-D programs, ACF regional offices, and the OCSE central office, met in May and August 1997. In concert with these meetings, as well as numerous conference calls, the Core Workgroup circulated its decisions and recommendations among all of the other States, region by region, and received feedback and reactions to ideas that were then incorporated into the discussions and conclusions of the Workgroup. This report is based on recommendations reached through consensus of the members of the Workgroup.

It was recognized that meaningful discussion of the self-assessment issues would be contingent upon finalization of work performed by the Incentive Funding Workgroup. Thus, the Self-Assessment Workgroup activities were delayed until that Workgroup issued its report in January 1997.

BACKGROUND

OCSE audits State Child Support Enforcement programs to ensure that they meet Federal requirements. In the past, Federal law specified that States that had been audited and found not to be in substantial compliance with Federal requirements were subject to a financial penalty. The penalty could be held in abeyance for up to one year to allow States the opportunity to implement corrective actions to remedy the program deficiency(ies). At the end of the corrective action period, a follow-up audit was conducted. If the follow-up audit showed that the deficiency had been corrected, the penalty was rescinded.

The rules for auditing State Child Support Enforcement programs have changed for OCSE, and additional requirements are now being placed on States to assess their own performance. PRWORA revised Federal audit requirements from a process-based system to a performance-based system that emphasizes performance outcomes. This means that the Federal government's oversight responsibilities are balanced with States' responsibilities for child support service delivery and fiscal accountability.

The new law requires State child support agencies to submit an annual report on their operations to assess whether they are meeting Federal requirements for providing child support services. Section 454(15)(A) of the Social Security Act (the Act), revised by PRWORA, provides for:

a process for annual reviews of and reports to the Secretary on the State program operated under the State plan approved under this part, including such information as may be necessary to measure State compliance with Federal requirements for expedited procedures, using such standards and procedures as are required by the Secretary, under which the State agency will determine the extent to which the program is operated in compliance with this part.

Section 452(a)(4) of the Act specifies that Federal staff will "review annual reports submitted pursuant to section 454(15)(A) and, as appropriate, provide to the State comments, recommendations for additional or alternative corrective actions, and technical assistance."

FORMATION OF SELF-ASSESSMENT WORKGROUP

During the fall of 1996, three Welfare Reform Forums were held, and volunteers for the Workgroup were solicited from attendees, who represented States, ACF regional offices, and the OCSE central office. While some vendors had requested to participate in this Workgroup, the decision was made to exclude them from the process. No such requests to participate were received from the various child support advocacy groups. The IV-D Directors' National Council of State Child Support Enforcement Administrators was also contacted to solicit State representation. Additionally, volunteers were recruited for the Workgroup during OCSE regional conference calls.

The Workgroup was subsequently formed, and consisted of 53 (35 State and 18 Federal) volunteers (See EXHIBIT 3). Members included staff from State IV-D programs, ACF regional offices and the OCSE central office. Ultimately, it was determined that there should be a Core Workgroup that would coordinate with all 53 members, while providing for a smaller, more manageable group to be directly involved in drafting the self-assessment guidelines. The Self-Assessment Workgroup was pared down to a Core Workgroup of 24 (10 State and 14 Federal) representatives.

Building upon the Incentive Funding Workgroup Report to the Secretary, which was issued in January 1997, the Workgroup considered the performance measurements defined in that report as a basis for developing its final product. The Incentive Funding Workgroup broadened the scope of performance measurements to encompass the interests of each State's IV-D management, recipients of child support, and other interested stakeholders. The Self-Assessment Workgroup also considered, as appropriate, the goals, objectives, and outcome measures set forth in OCSE's strategic plan, which had been endorsed by the States.

Operating under an agreed set of conference call ground rules adopted on April 3, 1997, the Workgroup determined that a quorum was needed for major decisions. It was also determined that no votes would be taken on pending issues; instead, consensus (although not necessarily unanimous agreement) would be sought. The approach taken by the Workgroup was to have the State representatives contact their counterparts in other States within the regions they represented and disseminate all information, and, on an ongoing basis, conduct State conference calls or meetings to solicit their comments and consensus.

It was determined that the annual report developed by each State would be a fluid, dynamic document, subject to ongoing analysis. It is envisioned that the Core Workgroup may reconvene to address areas that need to be changed or revised.

PLANNING AND DEVELOPMENT

The Workgroup recognized the logic and necessity of correlating the self-assessment initiative with the goals and objectives of OCSE's strategic plan, which was endorsed by the States on February 28, 1995, and with the outcome measures endorsed on July 18, 1996, as a preamble to implementation of PRWORA provisions. The Workgroup also acknowledged that the self-assessment process should address meaningful program results while not duplicating program information, such as performance measurement data, that was already going to be analyzed and reported by the States. Ultimately, all States should be focused on the same goals and moving in the same direction in their self-assessment process.

Consideration was also given to States' concerns that the process should not be too extensive or resource consuming and should not place the States in jeopardy of financial sanctions. For example, States generally felt that the process did not need to duplicate the program results/performance measurements audits that had previously been conducted by the OCSE Division of Audit, or the audits that OCSE will conduct under the provisions of PRWORA. This notwithstanding, it was also recognized that some of the criteria included in these audits, particularly case processing time frames, were pertinent and should be included to provide meaningful assessment of State programs. At a minimum, the criteria recommended by the Workgroup would clearly define compliance and relate directly to the objectives and outcome measures set forth in the strategic plan. In addition, the Workgroup agreed that States should be provided the flexibility to expand their self-assessment reviews as deemed appropriate to serve their own management needs.

The Core Workgroup held a series of conference calls during which a list of topics for inclusion in the self-assessment criteria was developed. The listing was composed of items identified from the OCSE strategic plan, the Incentive Funding Workgroup Report, and the 45 CFR Part 305 audit criteria, as well as other areas identified and proposed by the Workgroup members as a result of work done previously in their individual States. The representatives discussed how they were currently reviewing or evaluating the performance of the IV-D program in their States, some of the techniques they have used to facilitate their reviews and evaluations, and potential problem areas which they believed would have an impact (e.g., State progress with automation, staff resources, funding constraints).

The State representatives of the Core Workgroup were charged with the task of contacting their counterparts in other States within their region to share all information discussed during the conference calls and to solicit their comments and consensus. The State members of the Workgroup disseminated pertinent information to their States through the use of E-Mail and faxes for comment.

Meetings were held in Arlington, VA and Denver, CO. After the first meeting in Arlington, through a series of Regional/State conference calls, the Core Workgroup members distributed all information from the meeting and solicited comments and responses from each State. At national and regional meetings, including the American Public Welfare Association (APWA) and National Child Support Enforcement Association (NCSEA), Self-Assessment Workgroup information was disseminated. Information from Core Workgroup meetings was discussed during OCSE regional conference calls. Informal comments were also solicited from several child support advocacy groups. After the above input was evaluated and, where appropriate, incorporated, the report was sent to each State's IV-D director for their review and comment. Again, all comments and suggested revisions or changes received were reviewed, considered, and made if appropriate.

SELF-ASSESSMENT IMPLEMENTATION METHODOLOGY

The Workgroup recognized that there were several issues that needed to be addressed concerning work product development such as: Organizational Placement; Sampling; Scope of Review; Review Period; Due Dates; and Reporting. These issues are discussed below.

Organizational Placement - The Workgroup recognized that the self-assessment requirements set forth in PRWORA specify that each State must develop a self-assessment process. However, PRWORA neither addresses the establishment of units dedicated to this function nor requires these units to be placed within the IV-D agency or umbrella organization. Other questions the group addressed were: (1) whether the States should have the prerogative to contract the function to a private vendor or other governmental unit; and (2) the degree of control the IV-D agency should be expected to exert, regardless of where or how the function is performed.

The Core Workgroup solicited and received comments from several States concerning the three issues set forth above. States generally professed, and the Workgroup agreed, that the self-assessment process should entail a hands-on, detailed analysis of the data to be reported. The Workgroup felt that States should be discouraged from simply extracting data from their automated systems to satisfy their self-assessment responsibilities. There was general consensus that a formal unit need not be established, but that staff be assigned to the function of conducting self-assessment reviews.

In regard to the organizational placement of the self-assessment unit, there was strong agreement among States and Workgroup members that this capability be placed within the IV-D agency. This way, the expertise needed to perform meaningful program analysis could be developed and maintained through the direct involvement of experienced IV-D staff. One State's IV-D Director responding to the draft report expressed the following: "I cannot emphasize too strongly that the unit must be within the IV-D agency - this is a self-assessment, a continuous ongoing process. It would lose its usefulness as a tool if it were conducted as an 'audit' and imposed by an outside entity." Another IV-D Director stated: "Contracting out for services or relying upon staff without IV-D experience may serve to limit the subtle insights gained through use of IV-D staff. IV-D staff are more likely to understand not only what the numbers say, but what they really mean."

The Workgroup recognized that regulations exist which support this position. State plan requirements set forth in 45 CFR 302.10 specify that the State IV-D agency will conduct "regular planned examination and evaluation of operations in local offices by regularly assigned State staff." The Workgroup believes that this regulation should be construed to apply to the self-assessment function and thereby, ensure that it remain under control of the IV-D agency in all States.

The above notwithstanding, the Workgroup acknowledged that the law does not preclude States from privatizing or otherwise contracting the self-assessment function, should they so choose. In fact, precedent has already been established by States that have contracted out selected program functions, yet are not in violation of their State plan responsibilities. Therefore, while the Workgroup believes that States would be better served if they conducted their self-assessments in-house, delegation of that function would not violate the spirit of the law, provided that the IV-D agency maintains control of the contracting process, including monitoring the evaluation process, the due dates, and the contents of the annual report.

Sampling - The Workgroup recognizes that it may not be feasible to draft a single sampling plan that would accommodate the needs and particular circumstances of every State. In addition to the required compliance criteria included in Exhibit 1, there may also be data that individual States will want to analyze to complement or expand their self-assessment process for their own management purposes. In addition, the varying levels of automation among the States will dictate different approaches to selecting IV-D cases.

Consideration was given to the sampling approach historically used by the OCSE Division of Audit in their program results/performance measurements audits of the States. The Division of Audit caseload sampling plan was designed to look at a minimum number of cases, while being representative of the State's total caseload. The resulting sample size was often 500 or more cases. Each case was evaluated for all needed child support services during a defined audit period.

This sampling approach evolved because, for the most part, States historically were unable to provide caseload universes by the specific criteria or functions included in the audit's scope. Nevertheless, there were limitations to this approach. Actual numbers of cases reviewed were generally much lower than initially anticipated because many sampled cases were eliminated, primarily as a result of no services being required during the audit period. In addition, and directly related to the above, the resultant number of cases actually reviewed was frequently not sufficient for some of the criteria to permit reliable projection of program performance at the targeted confidence level.

The Workgroup believes that, with the progress States have made and continue to make in automating their child support case management systems, the approach historically used by the OCSE Division of Audit may not necessarily represent the best approach for State self-assessments. The scope of the self-assessments, at least considering the criteria that are proposed to be mandatory as listed in EXHIBIT 1, will be much smaller than the former OCSE audits. Also, and more significantly, most States have, or soon will have, statewide certified automated systems that would allow them to take separate, focused samples for each individual criterion.

Another factor of the prior OCSE audits was that the sample size was designed to provide a high precision and level of confidence (95 percent), which would stand up to legal challenges as being representative of each State's program, in the event financial sanctions would be imposed as a result of those audits. The self-assessment requirement set forth in PRWORA does not provide for financial penalties based on the results of State self-assessments. Therefore, the Workgroup believes that the samples taken for self-assessment purposes need not necessarily meet the rigidity or precision requirements of the OCSE audits.

In consideration of the above, the Workgroup concluded that the sampling needs and approach used by each State for its self-assessment purposes should be left to each State's design and discretion. However, the Workgroup believes that: a minimum confidence level of 90 percent must be prescribed; statistically valid samples must be selected; and each State must provide assurance that no segment of the IV-D universe is being systematically omitted from the sample selection process.
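
To make the 90 percent floor concrete, the following is a minimal sketch of how a State might estimate the smallest simple random sample that supports a proportion estimate at that confidence level. The 5 percent margin of error, the conservative p = 0.5 assumption, and the example universe size are illustrative only; the report prescribes the confidence level but not these other parameters.

    import math

    def minimum_sample_size(universe, margin=0.05, z=1.645, p=0.5):
        # z = 1.645 corresponds to the 90 percent confidence floor the
        # Workgroup proposes; margin and p are illustrative assumptions.
        n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-universe size
        n = n0 / (1 + (n0 - 1) / universe)          # finite-universe correction
        return math.ceil(n)

    # e.g., a hypothetical criterion universe of 20,000 IV-D cases
    print(minimum_sample_size(20000))   # 268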

Some States may not yet be at a level of automation that would provide for focused sampling by specific criterion. The Workgroup believes that in these instances, the sampling approach used by the OCSE Division of Audit could still be considered. Technical expertise of the OCSE audit staff would be made available to States that request their assistance.

The Workgroup believes that States with the capability of using their automated systems to focus their samples on the individual functions to be reviewed should do so. In response to a request for comments, one State indicated the following: "We believe for those states with automated systems capable of identifying 'focused samples' that supplemental program reviews throughout the year, in addition to the required federal model, will significantly increase the ability of the self-assessment staff to better identify potential compliance problems at lower levels within the IV-D agency, best (and sometimes worst) practices, system or legal bottlenecks, and the correlation to the performance reports upon which incentives are tied." The States that cannot do this by function should take a statewide sample. This sample could be selected by utilizing their own sampling expertise, or by requesting Federal technical assistance, to achieve a confidence level of 90 percent. This is not intended to prohibit a State from developing other sampling or review strategies to address other issues specific to their program or State, or from initiating sampling plans that will generate higher confidence levels.
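
As an illustration of the focused approach, a State's system could group cases by the function requiring review and draw an independent random sample per criterion. This is a hypothetical sketch; the case universes and sample sizes shown are assumptions, not prescriptions from the report.

    import random

    def focused_samples(universes, sample_sizes):
        # universes: {criterion: [case IDs needing that service]}
        # sample_sizes: {criterion: desired sample size}, e.g., from the
        # minimum_sample_size() sketch above
        return {
            criterion: random.sample(ids, min(sample_sizes[criterion], len(ids)))
            for criterion, ids in universes.items()
        }

    # Hypothetical universes keyed by Review Criterion
    universes = {
        "Enforcement of Orders": ["case-%d" % i for i in range(12000)],
        "Review and Adjustment": ["case-%d" % i for i in range(3500)],
    }
    samples = focused_samples(universes, {"Enforcement of Orders": 266,
                                          "Review and Adjustment": 259})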

Scope of Review - The Workgroup recognized that while most States already had some self-assessment capabilities or experience, others may require accommodation, in developing self-assessment sampling plans, for their lack of experience in this area. The concept of staging the plans so that some, but not all, criteria would be reviewed each year on a rotational basis was discussed. During subsequent discussion, aided by State input, the Workgroup arrived at a general consensus that a successful and meaningful self-assessment review would result in all required criteria being reviewed by all States each year.

However, to facilitate the development of States' self-assessment capacity, for the first annual review only, a State may request a waiver delaying review of no more than 4 of the Category 1, Required Compliance Criteria. A waiver request, with a detailed explanation of which criteria will not be reviewed, must be sent to the Director, OCSE Division of Audit as soon as possible, but no later than September 30, 1998.

It was also decided by the Workgroup that States would not be required to synchronize their review periods to coincide exactly with the period covered by the annual report, provided that the case samples are selected from the period being reviewed and reported on. This would allow the States to review their cases in increments throughout the reporting period, and not necessitate waiting for the reporting period to end before they begin their assessment. Among the obvious benefits to this approach would be that the review results would evolve during the reporting period, and any problems identified could be dealt with on an ongoing basis. Also, information provided to management from the reviews would be more current than if the reviews were performed after the reporting period. For many States this would expedite preparation of their annual report.

To accommodate those States that choose to review their case samples in increments throughout the period to be reported, the Workgroup proposes that the cases selected at any time during the reporting period be reviewed for appropriate action for a 12-month period preceding the date the case was selected for review.

Review Period - The effective date for the PRWORA provisions for Federal and State reviews is October 1, 1997 (calendar quarter beginning 12 months or more after the date of enactment of PRWORA, or August 22, 1996). The proposed review period for the first required State self-assessments will be a 12-month period, beginning no later than October 1, 1997, and each 12-month period thereafter. The 12-month review period should give States sufficient time to evaluate the case processing time frames.

Due Dates - The Workgroup proposes that written reports would be due within 6 months after the end of the review period. For example, if the review period ends September 30, 1998, the first report is due by March 31, 1999.

If a State, rather than evaluating a statewide sample, samples all counties or regions independently at a 90 percent confidence level and combines the results into one statewide report, then the State may request a waiver for up to a maximum 6-month delay. The waiver request, with a detailed explanation of the reason for the delay, must be submitted to the Director, OCSE Division of Audit no later than September 30, or 6 months prior to the reporting due date.

Reporting - PRWORA requires that an annual report regarding State self-assessment activity be submitted to the Secretary of the DHHS. The Workgroup believes that the report should be signed and certified by the State IV-D Director. The Workgroup also believes that these reports should be submitted to the Commissioner of OCSE, with a copy to the cognizant ACF regional office and OCSE Area Audit Office.

REQUIRED PROGRAM COMPLIANCE CRITERIA

The Workgroup reached consensus that the self-assessment reviews should encompass three areas of review:

1. Required Program Compliance Criteria - mandatory areas to review so that the State may determine compliance with Federal State plan requirements and case processing time frames.

2. Program Direction Review - the State's assessment of whether there is a relationship between its case results for the compliance criteria requirements and outcome measurements, to determine whether it is meeting the goals and objectives of the program.

3. Program Service Enhancements Review - an evaluation of innovative practices and creative use of resources that are being utilized by the State to better serve its customers and improve its child support program.

The following sections discuss these three categories in more detail.

Category 1: Required Program Compliance Criteria

The program criteria presented below represent selected child support areas that have previously been covered by Federal audits, and which are addressed in regulations at 45 CFR Parts 302 and 303. It was the consensus of the Workgroup that these criteria represent the current program requirements that most directly relate to the major child support functions, which must be monitored to assess program performance. They also bear a direct correlation to the goals and objectives set forth in OCSE's strategic plan, which has been endorsed by the States, as well as the 15 outcome measurements set forth in that plan.

The Workgroup believes that these criteria, as set forth in EXHIBIT 1, represent the minimum that States must include in their self-assessment reviews and must address in their reports to the Secretary. This does not preclude States from expanding their reviews to include program areas not deemed mandatory by the Workgroup to accommodate their specific management needs.

For the most part, the requirements referenced under each criterion in EXHIBIT 1 highlight program standards (time frames), or other requirements, as set forth in the appropriate 45 CFR 302 or 303 regulations. It is intended that these criteria will be evaluated in a manner that will allow them to be quantified in a format, such as that presented in EXHIBIT 1, with the resultant numeric data summarized and included in Category 1 of the annual report.

The Required Program Compliance Criteria, which must be reviewed annually, are as follows:

1. Case Closure;

2. Establishment of Paternity and Support Orders;

3. Expedited Process;

4. Enforcement of Orders;

5. Disbursement of Collections;

6. Securing and Enforcing Medical Support;

7. Review and Adjustment; and

8. Interstate Services.

In keeping with OCSE's previous definition of substantial compliance in 45 CFR 305.20, the Workgroup has decided to evaluate cases using benchmarks of 90 percent for "Case Closure," 75 and 90 percent for "Expedited Process," and 75 percent for all other Review Criteria. We believe that these standards, which were set through the regulatory process, have been determined to be fair and equitable. We believe that States should have benchmarks against which to evaluate cases, both to determine whether they are in compliance with Federal requirements and to determine when corrective actions are needed to improve their performance. The case reviews will not be used as a basis for determining substantial compliance or for determination of any child support penalties.
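
As an illustration only, the benchmark comparison for a single criterion reduces to a simple rate calculation. The criterion labels, the split of the two "Expedited Process" figures, and the review counts below are assumptions for this sketch; the report lists "75 and 90 percent" for Expedited Process without restating the underlying time frames.

    # Benchmarks as described above; each value is the share of
    # reviewed cases that must meet the requirement.
    BENCHMARKS = {
        "Case Closure": 0.90,
        "Expedited Process (lower standard)": 0.75,
        "Expedited Process (upper standard)": 0.90,
        "Establishment of Paternity and Support Orders": 0.75,
        "Enforcement of Orders": 0.75,
        "Disbursement of Collections": 0.75,
        "Securing and Enforcing Medical Support": 0.75,
        "Review and Adjustment": 0.75,
        "Interstate Services": 0.75,
    }

    def evaluate(criterion, cases_meeting, cases_reviewed):
        rate = cases_meeting / cases_reviewed
        status = "meets" if rate >= BENCHMARKS[criterion] else "below"
        return "%s: %.1f%% (%s benchmark)" % (criterion, rate * 100, status)

    # Hypothetical review counts
    print(evaluate("Case Closure", 412, 450))   # 91.6% (meets benchmark)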

Time standards related to "Provision of Services in Interstate IV-D Cases" will be evaluated separately; however, the extent to which child support services, such as establishing orders, enforcing orders, disbursing collections, and medical support, are provided or not provided should be evaluated under the appropriate Review Criteria.

Opening a case and locating non-custodial parents will be evaluated as part of "Establishment of Paternity and Support," "Enforcement of Support Orders," and "Review and Adjustment." These requirements are not ends in themselves, but are, in fact, often the initial steps in providing other major program services, such as paternity and support establishment and enforcement.

In moving towards a more results-oriented review, if the State achieved a successful outcome (e.g., an order established), the State will consider the case to be an Action case and will not evaluate required time frames for the review period for that Review Criterion (e.g., Establishment of Paternity and Support). Successful outcomes will be considered to have occurred for the following Review Criteria: "Establishing Paternity and Support," "Enforcement of Support Obligations," and "Review and Adjustment."

If the State did not successfully complete an outcome for a case for a Review Criterion and time standards must be evaluated, the Workgroup recommends that the reviewer evaluate the latest required action that occurred during the review period for which the time frame can be evaluated. Therefore, only one time standard will be evaluated for a case for a Review Criterion. (If the time standard would normally expire after the review period, but the action was completed/successful within the review period, then this action should be counted.) We believe that concentrating on the latest required time standard will avoid creating a disincentive to work a case once a time standard has been missed. This approach focuses more on results obtained.
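
The decision rule described above can be summarized in a short sketch. The data structure, field names, and the notion of an action's "deadline" are assumptions introduced for illustration, not definitions from the report.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    @dataclass
    class RequiredAction:
        deadline: date              # date the time standard expires
        completed: Optional[date]   # None if the action was never taken

    def evaluate_case(successful_outcome, actions, period_end):
        # A successful outcome makes this an Action case; no time
        # frames are checked for this Review Criterion.
        if successful_outcome:
            return "action case"
        # An action is evaluable if its time standard expired within
        # the review period, or if it was completed within the period
        # even though the standard would expire later.
        evaluable = [a for a in actions
                     if a.deadline <= period_end
                     or (a.completed is not None and a.completed <= period_end)]
        if not evaluable:
            return "no evaluable action this period"
        # Only the latest required action is scored.
        latest = max(evaluable, key=lambda a: a.deadline)
        met = latest.completed is not None and latest.completed <= latest.deadline
        return "time standard met" if met else "time standard missed"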

EXHIBIT 1 defines specifically which Federal requirements and time standards the States will be required to evaluate annually. It also provides general rules for evaluating cases. It is envisioned that the States will move towards automating the case evaluations utilizing their statewide child support enforcement systems. However, in the meantime, we have provided a spreadsheet matrix (EXHIBIT 2), which the States may use as a tool to gather the data relating to the review criteria.

Category 2: Program Direction (Optional)

This segment of the self-assessment evaluation should be an analysis of the relationships between case results relating to program compliance areas, and performance and program outcome indicators. While this review area is optional, it gives States the opportunity to demonstrate how they are trying to manage their resources to achieve the best performance possible. This evaluation should explain the data and how the State adjusted its resources and processes to meet its goals and improve performance. In this section, States are encouraged to discuss new laws, enforcement techniques, and other factors that are contributing to increased performance. Barriers to success, such as State statutes, may also be discussed in this section.

This section is intended to provide the State with an opportunity to evaluate and discuss such factors as: how to improve its child support program; how to determine where technical assistance may be needed; and where its program is working well.

Category 3: Program Service Enhancements (Optional)

This review area is envisioned as a report of practices initiated by the States that are contributing to improving program performance and customer service. This optional area is an opportunity for States to promote their programs and innovative practices. Examples of innovative activities that States may elect to discuss in the report include:

This review area should also discuss whether the State has implemented a process that provides for timely dissemination of non-AFDC applications, when requested, and of child support program information to recipients referred to the IV-D program, as required by 45 CFR 303.2(a).

The Workgroup believes that this reporting category could be used by Federal staff to provide technical assistance to other States and disseminate "best practices" to other States.

FEDERAL ROLE

The Federal role is to review annual reports submitted pursuant to section 454(15)(A) of the Act and, as appropriate, provide to the States comments, recommendations for additional or alternative corrective action(s), and technical assistance.

It was decided that the Federal involvement should include, but not be limited to:

REPORT FORMAT

The Workgroup reached consensus that the required report should have three sections. Category 1, Required Compliance Criteria, will be mandatory to determine compliance with specifically cited Federal requirements. Category 2, Program Direction, and Category 3, Program Service Enhancements, would be optional.

Category 1, Required Compliance Criteria, must be presented for all review criteria in a schedule (See EXHIBIT 1). At a minimum, deficiencies and recommendations must be discussed, on an exception basis, for those criteria failing to meet the compliance standard. However, States may also elect to address their positive program outcomes in this section of the report.

Category 2, Program Direction, will consist of a narrative that shows cause-and-effect relationships as the State relates data from Category 1 to emphasize performance and program outcomes. States have the option to demonstrate how they are trying to manage their resources to achieve the best performance possible. This narrative will explain the data and how the State adjusted its resources and processes to meet its goals and improve performance. In this section, States are encouraged to discuss new laws, enforcement techniques, and other factors which are contributing to increased performance. Barriers to success, such as State laws and resource limitations, may also be discussed. Results that do not meet a State's expectations could also serve as a basis for requesting Federal technical assistance.

Category 3, Program Service Enhancements, is envisioned as presenting innovative practices or creative use of IV-D resources by the State to improve the Child Support Enforcement program. Topics discussed may include outreach; in-hospital paternity establishment; increased office hours to serve customers; etc. This narrative must be related to program improvements or assessments.

PRESENTATION OF RESULTS

The report should present the case results for all required review criteria: Case Closure; Establishment of Paternity and Support Orders; Expedited Processes; Enforcement of Orders (including wage withholding and tax offsets); Review and Adjustment; Securing and Enforcing Medical Support; Interstate Services; and Disbursement of Collections. For those criteria that fail to meet the appropriate targeted benchmark (75 or 90 percent), the report should analyze the reasons for the case deficiencies, draw conclusions, and make recommendations as to what corrective action(s) should be taken by the State. Subsequent annual reports should address any deficiencies from prior years' reports, and whether the corrective action(s) taken stimulated program improvement.

The State should attempt to determine whether the problems appear to be statewide or isolated to certain regions or counties. If the problems are significant, the State will want to analyze how the process works and determine whether there are barriers to getting the desired results (such as whether the staff is organized effectively to achieve results, or whether the State's automated system could more fully automate the function).

Ideally, in presenting its analysis and corrective action proposals, the State should provide a link to what was done to accomplish increased performance. States are encouraged to present "Best Practices" that contributed to their success in improving program performance.

The States should be vigilant in establishing processes that make full use of the level of automation they have. Where appropriate, the automated system should be used to capture required data. When the automated system is used to develop the required data, there should be some verification to ensure the reliability of the data.

IMPLEMENTATION STRATEGY

The Self-Assessment Workgroup recognizes that the guidelines and recommendations proposed herein will not, in themselves, end the involvement of this group or the appropriate Federal oversight agencies as States move forward with their self-assessment strategies. There remain specific steps that must be initiated within defined time frames, and which are critical to ensure that all States can implement a process that will comply with PRWORA directives and provide meaningful and consistent analysis of States' programs.

The Federal office plans to lend all support necessary to its State partners in this endeavor and to work closely with them to make their self-assessment initiatives meaningful. In addition to continuing the involvement of this Workgroup on an ongoing and as-needed basis, the Federal partners are committed to assisting States in implementing their self-assessment functions. Examples of the types of assistance the States may receive from Federal staff include: training, technical assistance, and coordination among the various States.

It is also intended that all interested stakeholders, including the various advocacy groups, will be invited to involve themselves in the self-assessment process. The process will be assessed on an ongoing basis and adjusted, as appropriate, to incorporate newly mandated requirements and to accommodate the needs of all of the partners. Through these initiatives, it is envisioned that the changes brought about by PRWORA regarding States' self-assessment will become reality in an expeditious fashion, and will serve both the States and their Federal partners in moving the child support program forward as we continue to serve America's children.